From: <r.so...@imperial.ac.uk>
Sent: Monday, April 24, 2017 11:11 AM
To: gpfsug main discussion list; Jan-Frode Myklebust
Subject: Re: [gpfsug-discuss] Protocol node recommendations
What’s your SSD going to help with… will you implement it as an LROC device?
Otherwise I can’t see the benefit to using it to boot o
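If the SSD does end up as an LROC device, the usual route is to define it as an
NSD with usage=localCache served by the node that owns it, and let the daemon
pick it up. A minimal sketch only, with hypothetical device and node names:

    # lroc.stanza -- hypothetical device and node names, adjust to your hardware
    %nsd:
      device=/dev/nvme0n1
      nsd=ces1_lroc
      servers=ces1
      usage=localCache

    # create the NSD; an LROC NSD is not added to any file system
    mmcrnsd -F lroc.stanza

What the cache is used for can then be tuned with the lrocData, lrocInodes and
lrocDirectories settings.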
To: Jan-Frode Myklebust <janfr...@tanso.net>; gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Protocol node recommendations
Hi,
Nice! I didn't pay attention to the revision and the spreadsheet. If someone
still has a copy somewhere it would be useful; Google didn't help :(
Subject: Re: [gpfsug-discuss] Protocol node recommendations
The protocol sizing tool should be available from
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Sizing%20Guidance%20for%20Protocol%20Node/version/70a4c7c0-a5
> *From:* Jan-Frode Myklebust <janfr...@tanso.net>
> *Sent:* Saturday, April 22, 2017 10:50 AM
> *To:* gpfsug-discuss@spectrumscale.org
> *Subject:* Re: [gpfsug-discuss] Protocol node recommendations
>
> That's a tiny maxFilesToCache...
>
> I would start by implementing the settings from
> /usr/lpp/mmfs/*/gpfspr
Subject: Re: [gpfsug-discuss] Protocol node recommendations
That's a tiny maxFilesToCache...
I would start by implementing the settings from
/usr/lpp/mmfs/*/gpfsprotocolldefaul* plus a 64GB pagepool for your protocol
nodes, and leave further tuning until you see that you have issues.
Regarding
The main topic here is to sustain 'normal' throughput for all users during peak load.
Thanks for your help.
From: valdis.kletni...@vt.edu <valdis.kletni...@vt.edu>
Sent: Saturday, April 22, 2017 6:30 AM
To: gpfsug main discussion list
Subject: Re: [gp
That's a tiny maxFilesToCache...
I would start by implementing the settings from
/usr/lpp/mmfs/*/gpfsprotocolldefaul* plus a 64GB pagepool for your
protocol nodes, and leave further tuning until you see that you have issues.
Regarding sizing, we have a spreadsheet somewhere where you can input
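Settings of that kind are normally applied with mmchconfig against the protocol
nodes. A rough sketch with illustrative values only (take the authoritative
numbers from the shipped defaults file; cesNodes is the node class Spectrum
Scale maintains for CES nodes):

    # illustrative values only -- use the numbers from the shipped protocol defaults
    mmchconfig pagepool=64G -N cesNodes
    mmchconfig maxFilesToCache=4000000,maxStatCache=1000000 -N cesNodes
    # pagepool and maxFilesToCache changes take effect after the daemon is
    # restarted on those nodes (mmshutdown/mmstartup)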
Hi,
We have here around 2PB GPFS (4.2.2) accessed through an HPC cluster with a GPFS
client on each node.
We will have to open GPFS to all our users over CIFS and kerberized NFS with
ACL support for both protocols, for around 1000+ users.
All users have different use cases and needs:
- some will do
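For context, the CIFS/kerberized-NFS side of such a setup usually comes down to
enabling the CES services and defining exports. A hedged sketch with a
hypothetical file system path and share name (AD/Kerberos authentication is
configured separately with mmuserauth):

    # hypothetical path and share name
    # NFSv4/SMB ACL semantics generally need the file system set to nfs4 ACLs (mmchfs <fs> -k nfs4)
    mmces service enable SMB
    mmces service enable NFS
    mmsmb export add projects /gpfs/fs1/projects
    mmnfs export add /gpfs/fs1/projects --client "*(Access_Type=RW,Squash=root_squash,SecType=krb5)"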
On Thu, 20 Apr 2017 12:27:13 -, Frank Tower said:
> - some will do large I/O (e.g: store 1TB files)
> - some will read/write more than 10k files in a row
> - other will do only sequential read
> But I am wondering if some people have recommendations regarding hardware sizing
> and software
Hi,
We have here around 2PB GPFS where users access only through the GPFS client (used
by an HPC cluster), but we will have to set up protocol nodes.
We will have to share GPFS data with ~1000 users, where each user will have
different access usage, meaning:
- some will do large I/O (e.g: