Sort of tagging onto the end of this thread: is a bonded active-active 10 GbE
network enough bandwidth to run both data and heartbeat/admin traffic on the
same network? I assume it comes down to a question of latency and congestion,
but I would like to hear others' stories.
Is anyone doing anything fancy with QoS?
All,
Is there anything special (BIOS option / kernel option) that needs to be
done when running GPFS on a Broadwell-powered NSD server?
Thank you,
Brian
> dangerous and unsupported."
>
> http://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance HPC Grid
> 507-269-0413
for that?
Thank you,
Brian Marshall
GPFS protocol servers that allow nova computes to mount
of NFS?
All advice is welcome.
Best,
Brian Marshall
Virginia Tech
think from memory the best performance scenario they had was when they
> installed the Scale client locally into the virtual machines.
>
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com
As background, we recently upgraded GPFS from 4.2.0 to 4.2.1 and updated
the Mellanox OFED on our compute cluster to allow it to move from CentOS
7.1 to 7.2.
We do see some transient warnings from the Mellanox switch gear about various
port counters that we are tracking down with them.
Jobs and
a place in the docs that talked about this specific point.
Thank you,
Brian Marshall
r for this NSD) and one to a
>> different NSD in a different GPFS failure group (using quite likely, but
>> not necessarily, a different NSD server that is the primary server for this
>> alternate NSD).
>>
>> Cheers,
>> -Bryan
cent files to the
IF150 with a replication of 2?
Any other comments on the proposed usage strategy would be welcome.
Thank you,
Brian Marshall
On Wed, Aug 31, 2016 at 10:32 AM, Daniel Kidger <daniel.kid...@uk.ibm.com>
wrote:
> The other 'Exception' is when a rule is used to convert a 1 way replica
All,
I see in the GPFS FAQ A6.3 the statement below. Is it possible to have
GPFS do RDMA over EDR InfiniBand and non-RDMA communication over Omni-Path
(IP over fabric) when each NSD server has an EDR card and an OPA card
installed?
RDMA is not supported on a node when both Mellanox HCAs and
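For reference, the kind of configuration I was picturing is sketched below. The
device name, subnet, and node class are assumptions, I have not tested any of
it, and the option syntax should be checked against the mmchconfig man page:

#!/usr/bin/env python
# Sketch only (NOT tested): RDMA over the Mellanox EDR HCA via verbsPorts,
# plain IP traffic over the OPA interface steered with the 'subnets' setting.
# Device/port name, subnet, and node class ("nsdservers") are assumptions.
import subprocess

def mmchconfig(setting, nodes=None):
    cmd = ["mmchconfig", setting]
    if nodes:
        cmd += ["-N", nodes]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# RDMA only on the Mellanox EDR card
mmchconfig("verbsRdma=enable", nodes="nsdservers")
mmchconfig("verbsPorts=mlx5_0/1", nodes="nsdservers")

# IP-over-fabric traffic for the OPA side via a daemon subnet
mmchconfig("subnets=10.10.0.0", nodes="nsdservers")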
All,
Is there a way to "test" GPFS commands and see what the output or result
would be before running them? For example, I'd like to verify a command
string before actually running it on a production system.
Does IBM offer "test" licenses for setting up a small debug/devel
environment?
I'd be
All,
Is there any best practice or recommendation for the Snoop Mode memory
setting for NSD servers?
The default is Early Snoop. On compute nodes, I am using Cluster On Die,
which creates 2 NUMA nodes per processor. Each NSD server has 2 x 16-core
Broadwell processors.
Brian
dwidth available within a node (e.g. between your local disks and the
> host CPU).
>
> -Aaron
>
> On 8/22/16 10:23 PM, Brian Marshall wrote:
>
>> Does anyone have any experiences to share (good or bad) about setting up
>> and utilizing FPO for hadoop compute on top of G
cluster/node?
I.e., once we have subnets set up, how can we tell GPFS is actually using
them? Currently we just do a large transfer and check tcpdump for any
packets flowing on the high-speed/data/non-admin subnet.
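For what it's worth, the "check" today is basically the following quick script
(the interface name is an assumption; GPFS daemon traffic normally uses TCP
port 1191):

#!/usr/bin/env python
# Quick-and-dirty check (run as root): is GPFS daemon traffic (TCP 1191 by
# default) visible on the high-speed/data interface?  Interface name is an
# assumption.
import subprocess

DATA_IFACE = "bond0"   # hypothetical data/non-admin interface

def gpfs_packets_on(iface, count=200, timeout=30):
    """Sample up to `count` packets on iface and count those hitting port 1191."""
    cmd = ["tcpdump", "-i", iface, "-nn", "-c", str(count), "tcp", "port", "1191"]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return 0   # treat a timeout as "no traffic seen in the sample window"
    # with -nn, each captured packet is printed as one line on stdout
    return sum(1 for line in out.stdout.splitlines() if line.strip())

if __name__ == "__main__":
    print("%d GPFS packets sampled on %s" % (gpfs_packets_on(DATA_IFACE), DATA_IFACE))

If I remember right, mmdiag --network also lists the daemon's connections per
node, which may be a cleaner way to confirm which IPs are actually in use.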
Thank you,
Brian Marshall
When creating a "fast tier" storage pool in a filesystem, is the normal
approach to create a placement policy that places all new files in the fast
tier and then migrates out old and large files?
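Something like the sketch below is what I have in mind. The pool names,
filesystem name, and thresholds are assumptions, and the validation step is
there because I have not tested any of this:

#!/usr/bin/env python
# Sketch only: send new files to the fast pool, and give mmapplypolicy a rule
# for pushing cold files down to the capacity pool.  Pool names, filesystem
# name, and thresholds are assumptions -- validate before installing anything.
import subprocess, tempfile

FS = "gpfs0"   # hypothetical filesystem name

POLICY = """
/* new files land in the fast tier */
RULE 'place-fast' SET POOL 'ssd'

/* mmapplypolicy rule: move the coldest files out once the pool is 85% full */
RULE 'age-out' MIGRATE FROM POOL 'ssd' THRESHOLD(85,70)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'capacity'
"""

with tempfile.NamedTemporaryFile("w", suffix=".pol", delete=False) as f:
    f.write(POLICY)
    path = f.name

# '-I test' is meant to validate the policy without installing it;
# double-check against the mmchpolicy man page.
subprocess.run(["mmchpolicy", FS, path, "-I", "test"], check=True)

Whether people drive the MIGRATE rule nightly from cron or from a low-space
callback is really the part I am asking about.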
Brian Marshall
On Mon, Oct 31, 2016 at 1:20 PM, Jez Tucker <jez.tuc...@gpfsug.org> wrote:
, but I
can give at least 15 minutes on:
DeepFlash: a computational scientist's first impressions
Best,
Brian Marshall
On Mon, Oct 10, 2016 at 6:59 PM, GPFS UG USA Principal <
usa-princi...@gpfsug.org> wrote:
> Hello all,
>
> There have been some questions about the Spectrum Scale Us
All,
I am in the same boat. I'd like to copy ~500 TB from one filesystem to
another. Both are being served by the same NSD servers.
We've done the multiple rsync script method in the past (and yes, it's a bit
of a pain). I would love to have an easier utility.
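By "multiple rsync script method" I mean roughly the following (paths and
worker count are made up); it works, but restarts and verification are where
it gets painful:

#!/usr/bin/env python
# The "multiple rsync" approach, roughly: one rsync per top-level directory,
# a handful running in parallel.  Paths and worker count are made up.
import os, subprocess
from concurrent.futures import ThreadPoolExecutor

SRC = "/gpfs/old_fs/data"   # hypothetical source filesystem
DST = "/gpfs/new_fs/data"   # hypothetical destination filesystem

def copy_tree(name):
    src = os.path.join(SRC, name) + "/"
    dst = os.path.join(DST, name) + "/"
    # -a preserves perms/times/links; add -A -X if ACLs and xattrs matter
    return subprocess.run(["rsync", "-a", src, dst]).returncode

if __name__ == "__main__":
    dirs = [d for d in sorted(os.listdir(SRC))
            if os.path.isdir(os.path.join(SRC, d))]
    with ThreadPoolExecutor(max_workers=8) as pool:
        rcs = list(pool.map(copy_tree, dirs))
    print("%d of %d trees copied with rc=0" % (rcs.count(0), len(rcs)))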
Best,
Brian Marshall
On Mon, Dec
>> jbf1z4   nsd   4096   2034   No   Yes   ready   up   sas_ssd4T
>> jbf2z4   nsd   4096   2034   No   Yes   ready   up   sas_ssd4T
>> jbf3z4   nsd   4
the filesystem to be
replicated without affecting the performance of all other pools (which only
have a single failure group).
Thanks,
Brian Marshall
VT - ARC
move data from SSD to HDD
(and vice versa)?
Do you nightly move large/old files to HDD, or wait until the fast tier hits
some capacity limit?
Do you use QoS to limit the migration from SSD to HDD, i.e., try not to kill
the filesystem with migration work?
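To make the QoS part of the question concrete, this is roughly what I was
picturing for a nightly run. The filesystem name, policy file, and IOPS cap
are assumptions, and I may not have the option syntax exactly right:

#!/usr/bin/env python
# Sketch of a nightly SSD-to-HDD migration throttled via the QoS 'maintenance'
# class.  Filesystem, policy file, and IOPS cap are assumptions; check the
# mmchqos and mmapplypolicy man pages before trusting the option syntax.
import subprocess

FS = "gpfs0"
POLICY = "/var/mmfs/etc/migrate_ssd_to_hdd.pol"   # hypothetical policy file

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# cap what the maintenance class is allowed to consume
run(["mmchqos", FS, "--enable", "pool=*,maintenance=1000IOPS,other=unlimited"])

# run the migration itself under the maintenance class
run(["mmapplypolicy", FS, "-P", POLICY, "-I", "yes", "--qos", "maintenance"])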
Thanks,
Brian Marshall
On Thu, Dec 15, 2016
All,
Does the mmlsdisk command generate a lot of admin traffic or take up a lot
of GPFS resources?
In our case, we have it in some of our monitoring routines that run on all
nodes. It is kind of nice info to have, but I am wondering if hitting the
filesystem with a bunch of mmlsdisk commands is
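If it does turn out to be expensive, the obvious workaround on our side would
be to cache the output locally instead of calling it on every monitoring pass;
a rough sketch (filesystem name, cache path, and TTL are made up):

#!/usr/bin/env python
# Sketch: cache 'mmlsdisk <fs>' output for a few minutes so monitoring checks
# on every node don't each hit the cluster.  Names, path, and TTL are made up.
import os, subprocess, time

FS = "gpfs0"
CACHE = "/tmp/mmlsdisk.%s.cache" % FS
TTL = 300  # seconds

def mmlsdisk_cached():
    try:
        if time.time() - os.path.getmtime(CACHE) < TTL:
            with open(CACHE) as f:
                return f.read()
    except OSError:
        pass  # no cache yet
    out = subprocess.run(["mmlsdisk", FS], capture_output=True,
                         text=True, check=True).stdout
    with open(CACHE, "w") as f:
        f.write(out)
    return out

if __name__ == "__main__":
    print(mmlsdisk_cached())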
All,
What is your favorite method for stopping a user process from eating up all
the system memory while reserving 1 GB (or more) for the GPFS and system
processes? We have always kicked around the idea of cgroups but never
moved on it.
The problem: A user launches a job which uses all the memory on
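For concreteness, the cgroups idea we have kicked around (but never deployed)
looks roughly like this; the reserve size and group name are assumptions, and
it assumes the cgroup-v1 memory controller:

#!/usr/bin/env python
# Sketch of the cgroups idea: put user jobs in a memory cgroup whose limit is
# total RAM minus a reserve for GPFS/system daemons.  Reserve size and group
# name are assumptions; cgroup v1 layout (/sys/fs/cgroup/memory) assumed.
import os

RESERVE = 4 * 1024**3          # keep ~4 GB back for mmfsd + system (assumption)
CG = "/sys/fs/cgroup/memory/userjobs"

def total_ram_bytes():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024   # value is in kB
    raise RuntimeError("MemTotal not found")

os.makedirs(CG, exist_ok=True)
limit = total_ram_bytes() - RESERVE
with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
    f.write(str(limit))

# a job launcher (or the scheduler / pam_cgroup) would then add user PIDs to
# /sys/fs/cgroup/memory/userjobs/tasks so they can't push mmfsd into swap
print("userjobs memory limit set to %d bytes" % limit)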