Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-06 Thread Hoang Nguyen
That comment in the Administration Guide is a legacy comment from when the
metadata sub-block size was restricted to 1/32 of the metadata block size. In
the past, creating a large metadata block size also meant large sub-blocks and
hence large directory blocks, which wasted a lot of space.

From:  on behalf of "Buterbaugh, 
Kevin L" 
Reply-To: gpfsug main discussion list 
Date: Monday, August 6, 2018 at 11:37 AM
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 
filesystem?

Hi All,

So I was just reading the GPFS 5.0.0 Administration Guide (yes, I actually do 
look at the documentation even if it seems sometimes that I don’t!) for some 
other information and happened to come across this at the bottom of page 358:

The --metadata-block-size flag on the mmcrfs command can be used to create a 
system pool with a different block size from the user pools. This can be 
especially beneficial if the default block size is larger than 1 MB. If data 
and metadata block sizes differ, the system pool must contain only metadataOnly 
disks.
Given that one of the responses I received during this e-mail thread was from 
an IBM engineer basically pointing out that there is no benefit in setting the 
metadata-block-size to less than 4 MB if that’s what I want for the filesystem 
block size, this might be a candidate for a documentation update.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-06 Thread Buterbaugh, Kevin L
Hi All,

So I was just reading the GPFS 5.0.0 Administration Guide (yes, I actually do 
look at the documentation even if it seems sometimes that I don’t!) for some 
other information and happened to come across this at the bottom of page 358:


The --metadata-block-size flag on the mmcrfs command can be used to create a 
system pool with a different block size from the user pools. This can be 
especially beneficial if the default block size is larger than 1 MB. If data 
and metadata block sizes differ, the system pool must contain only metadataOnly 
disks.

Given that one of the responses I received during this e-mail thread was from 
an IBM engineer basically pointing out that there is no benefit in setting the 
metadata-block-size to less than 4 MB if that’s what I want for the filesystem 
block size, this might be a candidate for a documentation update.

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Buterbaugh, Kevin L
Hi All,

Aargh - now I really do feel like an idiot!  I had set up the stanza file over
a week ago … then had to work on production issues … and completely forgot
that I had set the block size in the pool stanzas there.  But at least we all
now know that pool stanzas in the stanza file override the corresponding
command-line arguments to mmcrfs.
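
As an illustration (purely hypothetical, not the actual ~/gpfs/gpfs5.stanza), a
pool stanza can carry its own block size, and a leftover blockSize attribute
like the one below would win over the -B / --metadata-block-size values given
on the command line. The %pool/%nsd attribute names follow the mmcrfs stanza
format as I recall it, so check the man page for your release:

# Hypothetical sketch only.  A blockSize attribute left in a %pool stanza
# takes precedence over the mmcrfs command line, which would explain a
# 1 MB system pool despite "--metadata-block-size 4M".
%pool: pool=system blockSize=1M usage=metadataOnly layoutMap=scatter
%pool: pool=raid1  usage=dataOnly layoutMap=scatter
%pool: pool=raid6  usage=dataOnly layoutMap=scatter

%nsd: nsd=test21A3nsd usage=metadataOnly pool=system
%nsd: nsd=test23Ansd  usage=dataOnly pool=raid1
%nsd: nsd=test23Cnsd  usage=dataOnly pool=raid6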

My apologies…

Kevin

On Aug 3, 2018, at 1:01 AM, Olaf Weiser <olaf.wei...@de.ibm.com> wrote:

Can you share your stanza file?

Sent from my iPhone

On 02.08.2018 at 23:15, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu> wrote:

OK, so hold on … NOW what’s going on???  I deleted the filesystem … went to 
lunch … came back an hour later … recreated the filesystem with a metadata 
block size of 4 MB … and I STILL have a 1 MB block size in the system pool and 
the wrong fragment size in other pools…

Kevin

/root/gpfs
root@testnsd1# mmdelfs gpfs5
All data on the following disks of gpfs5 will be destroyed:
test21A3nsd
test21A4nsd
test21B3nsd
test21B4nsd
test23Ansd
test23Bnsd
test23Cnsd
test24Ansd
test24Bnsd
test24Cnsd
test25Ansd
test25Bnsd
test25Cnsd
Completed deletion of file system /dev/gpfs5.
mmdelfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 4096 
-j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T /gpfs5 -v 
yes --nofilesetdf --metadata-block-size 4M

The following disks of gpfs5 will be formatted on node testnsd3:
test21A3nsd: size 953609 MB
test21A4nsd: size 953609 MB
test21B3nsd: size 953609 MB
test21B4nsd: size 953609 MB
test23Ansd: size 15259744 MB
test23Bnsd: size 15259744 MB
test23Cnsd: size 1907468 MB
test24Ansd: size 15259744 MB
test24Bnsd: size 15259744 MB
test24Cnsd: size 1907468 MB
test25Ansd: size 15259744 MB
test25Bnsd: size 15259744 MB
test25Cnsd: size 1907468 MB
Formatting file system ...
Disks up to size 8.29 TB can be added to storage pool system.
Disks up to size 16.60 TB can be added to storage pool raid1.
Disks up to size 132.62 TB can be added to storage pool raid6.
Creating Inode File
  12 % complete on Thu Aug  2 13:16:26 2018
  25 % complete on Thu Aug  2 13:16:31 2018
  38 % complete on Thu Aug  2 13:16:36 2018
  50 % complete on Thu Aug  2 13:16:41 2018
  62 % complete on Thu Aug  2 13:16:46 2018
  74 % complete on Thu Aug  2 13:16:52 2018
  85 % complete on Thu Aug  2 13:16:57 2018
  96 % complete on Thu Aug  2 13:17:02 2018
 100 % complete on Thu Aug  2 13:17:03 2018
Creating Allocation Maps
Creating Log Files
   3 % complete on Thu Aug  2 13:17:09 2018
  28 % complete on Thu Aug  2 13:17:15 2018
  53 % complete on Thu Aug  2 13:17:20 2018
  78 % complete on Thu Aug  2 13:17:26 2018
 100 % complete on Thu Aug  2 13:17:27 2018
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
  98 % complete on Thu Aug  2 13:17:34 2018
 100 % complete on Thu Aug  2 13:17:34 2018
Formatting Allocation Map for storage pool raid1
  52 % complete on Thu Aug  2 13:17:39 2018
 100 % complete on Thu Aug  2 13:17:43 2018
Formatting Allocation Map for storage pool raid6
  24 % complete on Thu Aug  2 13:17:48 2018
  50 % complete on Thu Aug  2 13:17:53 2018
  74 % complete on Thu Aug  2 13:17:58 2018
  99 % complete on Thu Aug  2 13:18:03 2018
 100 % complete on Thu Aug  2 13:18:03 2018
Completed creation of file system /dev/gpfs5.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmlsfs gpfs5
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    32768                    Minimum fragment (subblock) size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 2                        Default number of metadata replicas
 -M                 3                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 3                        Maximum number of data replicas
 -j                 scatter                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 32                       Estimated number of nodes that will mount file system
 -B                 1048576                  Block size (system pool)
                    4194304                  Block size (other pools)
 -Q                 user;group;fileset 

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Olaf Weiser
Can you share your stanza file?

Sent from my iPhone

> On 02.08.2018 at 23:15, Buterbaugh, Kevin L wrote:
> 
> OK, so hold on … NOW what’s going on???  I deleted the filesystem … went to 
> lunch … came back an hour later … recreated the filesystem with a metadata 
> block size of 4 MB … and I STILL have a 1 MB block size in the system pool 
> and the wrong fragment size in other pools…
> 
> Kevin
> 
> /root/gpfs
> root@testnsd1# mmdelfs gpfs5
> All data on the following disks of gpfs5 will be destroyed:
> test21A3nsd
> test21A4nsd
> test21B3nsd
> test21B4nsd
> test23Ansd
> test23Bnsd
> test23Cnsd
> test24Ansd
> test24Bnsd
> test24Cnsd
> test25Ansd
> test25Bnsd
> test25Cnsd
> Completed deletion of file system /dev/gpfs5.
> mmdelfs: Propagating the cluster configuration data to all
>   affected nodes.  This is an asynchronous process.
> /root/gpfs
> root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 
> 4096 -j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T 
> /gpfs5 -v yes --nofilesetdf --metadata-block-size 4M
> 
> The following disks of gpfs5 will be formatted on node testnsd3:
> test21A3nsd: size 953609 MB
> test21A4nsd: size 953609 MB
> test21B3nsd: size 953609 MB
> test21B4nsd: size 953609 MB
> test23Ansd: size 15259744 MB
> test23Bnsd: size 15259744 MB
> test23Cnsd: size 1907468 MB
> test24Ansd: size 15259744 MB
> test24Bnsd: size 15259744 MB
> test24Cnsd: size 1907468 MB
> test25Ansd: size 15259744 MB
> test25Bnsd: size 15259744 MB
> test25Cnsd: size 1907468 MB
> Formatting file system ...
> Disks up to size 8.29 TB can be added to storage pool system.
> Disks up to size 16.60 TB can be added to storage pool raid1.
> Disks up to size 132.62 TB can be added to storage pool raid6.
> Creating Inode File
>   12 % complete on Thu Aug  2 13:16:26 2018
>   25 % complete on Thu Aug  2 13:16:31 2018
>   38 % complete on Thu Aug  2 13:16:36 2018
>   50 % complete on Thu Aug  2 13:16:41 2018
>   62 % complete on Thu Aug  2 13:16:46 2018
>   74 % complete on Thu Aug  2 13:16:52 2018
>   85 % complete on Thu Aug  2 13:16:57 2018
>   96 % complete on Thu Aug  2 13:17:02 2018
>  100 % complete on Thu Aug  2 13:17:03 2018
> Creating Allocation Maps
> Creating Log Files
>3 % complete on Thu Aug  2 13:17:09 2018
>   28 % complete on Thu Aug  2 13:17:15 2018
>   53 % complete on Thu Aug  2 13:17:20 2018
>   78 % complete on Thu Aug  2 13:17:26 2018
>  100 % complete on Thu Aug  2 13:17:27 2018
> Clearing Inode Allocation Map
> Clearing Block Allocation Map
> Formatting Allocation Map for storage pool system
>   98 % complete on Thu Aug  2 13:17:34 2018
>  100 % complete on Thu Aug  2 13:17:34 2018
> Formatting Allocation Map for storage pool raid1
>   52 % complete on Thu Aug  2 13:17:39 2018
>  100 % complete on Thu Aug  2 13:17:43 2018
> Formatting Allocation Map for storage pool raid6
>   24 % complete on Thu Aug  2 13:17:48 2018
>   50 % complete on Thu Aug  2 13:17:53 2018
>   74 % complete on Thu Aug  2 13:17:58 2018
>   99 % complete on Thu Aug  2 13:18:03 2018
>  100 % complete on Thu Aug  2 13:18:03 2018
> Completed creation of file system /dev/gpfs5.
> mmcrfs: Propagating the cluster configuration data to all
>   affected nodes.  This is an asynchronous process.
> /root/gpfs
> root@testnsd1# mmlsfs gpfs5
> flag                value                    description
> ------------------- ------------------------ -----------------------------------
>  -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
>                     32768                    Minimum fragment (subblock) size in bytes (other pools)
>  -i                 4096                     Inode size in bytes
>  -I                 32768                    Indirect block size in bytes
>  -m                 2                        Default number of metadata replicas
>  -M                 3                        Maximum number of metadata replicas
>  -r                 1                        Default number of data replicas
>  -R                 3                        Maximum number of data replicas
>  -j                 scatter                  Block allocation type
>  -D                 nfs4                     File locking semantics in effect
>  -k                 all                      ACL semantics in effect
>  -n                 32                       Estimated number of nodes that will mount file system
>  -B                 1048576                  Block size (system pool)
>                     4194304                  Block size (other pools)
>  -Q                 user;group;fileset       Quotas accounting enabled
>                     user;group;fileset       Quotas enforced
>                     none                     Default quotas enabled
>  --perfileset-quota No                       Per-fileset quota 

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
OK, so hold on … NOW what’s going on???  I deleted the filesystem … went to 
lunch … came back an hour later … recreated the filesystem with a metadata 
block size of 4 MB … and I STILL have a 1 MB block size in the system pool and 
the wrong fragment size in other pools…

Kevin

/root/gpfs
root@testnsd1# mmdelfs gpfs5
All data on the following disks of gpfs5 will be destroyed:
test21A3nsd
test21A4nsd
test21B3nsd
test21B4nsd
test23Ansd
test23Bnsd
test23Cnsd
test24Ansd
test24Bnsd
test24Cnsd
test25Ansd
test25Bnsd
test25Cnsd
Completed deletion of file system /dev/gpfs5.
mmdelfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmcrfs gpfs5 -F ~/gpfs/gpfs5.stanza -A yes -B 4M -E yes -i 4096 
-j scatter -k all -K whenpossible -m 2 -M 3 -n 32 -Q yes -r 1 -R 3 -T /gpfs5 -v 
yes --nofilesetdf --metadata-block-size 4M

The following disks of gpfs5 will be formatted on node testnsd3:
test21A3nsd: size 953609 MB
test21A4nsd: size 953609 MB
test21B3nsd: size 953609 MB
test21B4nsd: size 953609 MB
test23Ansd: size 15259744 MB
test23Bnsd: size 15259744 MB
test23Cnsd: size 1907468 MB
test24Ansd: size 15259744 MB
test24Bnsd: size 15259744 MB
test24Cnsd: size 1907468 MB
test25Ansd: size 15259744 MB
test25Bnsd: size 15259744 MB
test25Cnsd: size 1907468 MB
Formatting file system ...
Disks up to size 8.29 TB can be added to storage pool system.
Disks up to size 16.60 TB can be added to storage pool raid1.
Disks up to size 132.62 TB can be added to storage pool raid6.
Creating Inode File
  12 % complete on Thu Aug  2 13:16:26 2018
  25 % complete on Thu Aug  2 13:16:31 2018
  38 % complete on Thu Aug  2 13:16:36 2018
  50 % complete on Thu Aug  2 13:16:41 2018
  62 % complete on Thu Aug  2 13:16:46 2018
  74 % complete on Thu Aug  2 13:16:52 2018
  85 % complete on Thu Aug  2 13:16:57 2018
  96 % complete on Thu Aug  2 13:17:02 2018
 100 % complete on Thu Aug  2 13:17:03 2018
Creating Allocation Maps
Creating Log Files
   3 % complete on Thu Aug  2 13:17:09 2018
  28 % complete on Thu Aug  2 13:17:15 2018
  53 % complete on Thu Aug  2 13:17:20 2018
  78 % complete on Thu Aug  2 13:17:26 2018
 100 % complete on Thu Aug  2 13:17:27 2018
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
  98 % complete on Thu Aug  2 13:17:34 2018
 100 % complete on Thu Aug  2 13:17:34 2018
Formatting Allocation Map for storage pool raid1
  52 % complete on Thu Aug  2 13:17:39 2018
 100 % complete on Thu Aug  2 13:17:43 2018
Formatting Allocation Map for storage pool raid6
  24 % complete on Thu Aug  2 13:17:48 2018
  50 % complete on Thu Aug  2 13:17:53 2018
  74 % complete on Thu Aug  2 13:17:58 2018
  99 % complete on Thu Aug  2 13:18:03 2018
 100 % complete on Thu Aug  2 13:18:03 2018
Completed creation of file system /dev/gpfs5.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
/root/gpfs
root@testnsd1# mmlsfs gpfs5
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    32768                    Minimum fragment (subblock) size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 2                        Default number of metadata replicas
 -M                 3                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 3                        Maximum number of data replicas
 -j                 scatter                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 32                       Estimated number of nodes that will mount file system
 -B                 1048576                  Block size (system pool)
                    4194304                  Block size (other pools)
 -Q                 user;group;fileset       Quotas accounting enabled
                    user;group;fileset       Quotas enforced
                    none                     Default quotas enabled
 --perfileset-quota No                       Per-fileset quota enforcement
 --filesetdf        No                       Fileset df enabled?
 -V                 19.01 (5.0.1.0)          File system version
 --create-time      Thu Aug  2 13:16:47 2018 File system creation time
 -z                 No                       Is DMAPI enabled?
 -L                 33554432                 Logfile size
 -E

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
Hi All,

Thanks for all the responses on this, although I have the sneaking suspicion 
that the most significant thing that is going to come out of this thread is the 
knowledge that Sven has left IBM for DDN.  ;-) or :-( or :-O depending on your 
perspective.

Anyway … we have done some testing which has shown that a 4 MB block size is 
best for those workloads that use “normal” sized files.  However, we - like 
many similar institutions - support a mixed workload, so the 128K fragment size 
that comes with that is not optimal for the primarily biomedical type 
applications that literally create millions of very small files.  That’s why we 
settled on 1 MB as a compromise.
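
To put rough numbers on that trade-off, here's a quick illustrative Python
sketch; the million-file count and the 4 KiB file size are made-up figures,
and data stored in the inode for very small files is ignored:

import math

def allocated_gib(n_files: int, file_kib: int, fragment_kib: int) -> float:
    """Space consumed when each file is rounded up to whole fragments."""
    per_file_kib = math.ceil(file_kib / fragment_kib) * fragment_kib
    return n_files * per_file_kib / (1024 * 1024)

n = 1_000_000                      # hypothetical count of small files
for frag in (8, 32, 128):          # candidate fragment sizes in KiB
    print(f"{frag:>3} KiB fragments: ~{allocated_gib(n, 4, frag):,.0f} GiB "
          f"for {n:,} files of 4 KiB each")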

So we’re very eager to now test with GPFS 5, a 4 MB block size, and an 8K
fragment size.  I’m recreating my test cluster filesystem now with that config
… so a 4 MB block size on the metadata-only system pool, too.

Thanks to all who took the time to respond to this thread.  I hope it’s been 
beneficial to others as well…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - 
(615)875-9633

On Aug 1, 2018, at 7:11 PM, Andrew Beattie <abeat...@au1.ibm.com> wrote:

I too would second the comment about doing testing specific to your environment

We recently deployed a number of ESS building blocks into a customer site that 
was specifically being used for a mixed HPC workload.

We spent more than a week playing with different block sizes for both data and 
metadata trying to identify which variation would provide the best mix of both 
metadata performance and data performance.  One thing we noticed very early on 
is that MDtest and IOR both respond very differently as you play with both 
block size and subblock size.  What works for one use case may be a very poor 
option for another use case.

Interestingly enough it turned out that the best overall option for our 
particular use case was an 8MB block size with 32k sub blocks -- as that gave 
us good Metadata performance and good sequential data performance

which is probably why 32k sub block was the default for so many years 
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com


- Original message -
From: "Marc A Kaplan" mailto:makap...@us.ibm.com>>
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list 
mailto:gpfsug-discuss@spectrumscale.org>>
Cc:
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 
filesystem?
Date: Thu, Aug 2, 2018 10:01 AM

Firstly, I do suggest that you run some tests and see how much, if any, 
difference the settings that are available make in performance and/or storage 
utilization.

Secondly, as I and others have hinted at, deeper in the system, there may be 
additional parameters and settings.  Sometimes they are available via commands, 
and/or configuration settings, sometimes not.

Sometimes that's just because we didn't want to overwhelm you or ourselves with 
yet more "tuning knobs".

Sometimes it's because we made some component more tunable than we really 
needed, but did not make all the interconnected components equally or as widely 
tunable.
Sometimes it's because we want to save you from making ridiculous settings that 
would lead to problems...

OTOH, as I wrote before, if a burning requirement surfaces, things may change 
from release to release... Just as for so many years subblocks per block seemed 
forever frozen at the number 32.  Now it varies... and then the discussion 
shifts to why can't it be even more flexible?


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread IBM Spectrum Scale
In released GPFS we support only one subblocks-per-fullblock value per file
system. As Sven mentioned, the subblocks-per-fullblock is derived from the
smallest block size among the metadata and data pools; that smallest block
size determines the subblocks-per-fullblock and the sub-block size of all
pools.
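
Here is a minimal Python sketch of that rule. The 1 MiB -> 8 KiB and
4 MiB -> 8 KiB sub-block mappings match the mmlsfs output earlier in this
thread; the other table entries are assumptions, so consult the mmcrfs
documentation for the authoritative values:

# Sketch: the pool with the smallest block size picks the "natural" sub-block
# size, and the resulting subblocks-per-fullblock count applies to every pool.
NATURAL_SUBBLOCK_KIB = {64: 2, 128: 4, 256: 8, 512: 8, 1024: 8, 2048: 8, 4096: 8}

def fragment_sizes_kib(pool_block_kib: dict) -> dict:
    smallest = min(pool_block_kib.values())
    subblocks_per_block = smallest // NATURAL_SUBBLOCK_KIB[smallest]
    return {pool: kib // subblocks_per_block for pool, kib in pool_block_kib.items()}

# 1 MiB system pool plus 4 MiB data pools, as in the mmlsfs output above:
print(fragment_sizes_kib({"system": 1024, "raid1": 4096, "raid6": 4096}))
# -> {'system': 8, 'raid1': 32, 'raid6': 32}, i.e. -f 8192 and 32768

# With every pool at 4 MiB, all pools should get 8 KiB fragments:
print(fragment_sizes_kib({"system": 4096, "raid1": 4096, "raid6": 4096}))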

There's an enhancement plan to have pools with different block sizes and/or
subblocks-per-fullblock.

Thanks,
Yuan, Zheng Cai



From:   "Andrew Beattie" 
To: gpfsug-discuss@spectrumscale.org
Date:   2018/08/02 09:10
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on
GPFS 5 filesystem?
Sent by: gpfsug-discuss-boun...@spectrumscale.org



Stephen,

Sorry, you're right; I had to go back and look up what we were doing for
metadata.

We ended up with a 1MB block for metadata and 8MB for data, with a 32k
sub-block based on the 1MB metadata block size, which is effectively a 256k
sub-block for the data.


Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com


 - Original message -
 From: Stephen Ulmer 
 Sent by: gpfsug-discuss-boun...@spectrumscale.org
 To: gpfsug main discussion list 
 Cc:
 Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS
 5 filesystem?
 Date: Thu, Aug 2, 2018 11:00 AM



   On Aug 1, 2018, at 8:11 PM, Andrew Beattie 
   wrote:


 […]


   which is probably why 32k sub block was the default for so many
   years 

 I may not be remembering correctly, but I thought the default block size
 was 256k, and the sub-block size was always fixed at 1/32nd of the block
 size — which only yields 32k sub-blocks for a 1MB block size.

 I also think there used to be something special about a 16k block size…
 but I haven’t slept well in about a week, so I might just be losing it.

 --
 Stephen

 ___
 gpfsug-discuss mailing list
 gpfsug-discuss at spectrumscale.org
 http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Andrew Beattie
Stephen,
 
Sorry, you're right; I had to go back and look up what we were doing for metadata.
 
We ended up with a 1MB block for metadata and 8MB for data, with a 32k sub-block based on the 1MB metadata block size, which is effectively a 256k sub-block for the data.
 
 
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 
- Original message -
From: Stephen Ulmer 
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list 
Cc:
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?
Date: Thu, Aug 2, 2018 11:00 AM
 
 
On Aug 1, 2018, at 8:11 PM, Andrew Beattie  wrote:
 […]
 
 
which is probably why 32k sub block was the default for so many years 
 
I may not be remembering correctly, but I thought the default block size was 256k, and the sub-block size was always fixed at 1/32nd of the block size — which only yields 32k sub-blocks for a 1MB block size.
 
I also think there used to be something special about a 16k block size… but I haven’t slept well in about a week, so I might just be losing it.
 
-- Stephen 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Stephen Ulmer


> On Aug 1, 2018, at 8:11 PM, Andrew Beattie  wrote:
> 

[…]

>  
> which is probably why 32k sub block was the default for so many years 


I may not be remembering correctly, but I thought the default block size was 
256k, and the sub-block size was always fixed at 1/32nd of the block size — 
which only yields 32k sub-blocks for a 1MB block size.
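
Quick arithmetic for that old fixed 1/32 rule, as a trivial Python sketch:

# Old (pre-5.0) rule: sub-block size was always block size / 32.
for block_kib in (256, 1024, 4096, 8192):
    print(f"{block_kib:>5} KiB block -> {block_kib // 32:>4} KiB sub-block")
# 256 KiB default block -> 8 KiB sub-blocks; 1 MiB -> 32 KiB; 4 MiB -> 128 KiB
# (the fragment size Kevin mentioned); 8 MiB -> 256 KiB.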

I also think there used to be something special about a 16k block size… but I 
haven’t slept well in about a week, so I might just be losing it.

-- 
Stephen

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Andrew Beattie
I too would second the comment about doing testing specific to your environment
 
We recently deployed a number of ESS building blocks into a customer site that was specifically being used for a mixed HPC workload.
 
We spent more than a week playing with different block sizes for both data and metadata trying to identify which variation would provide the best mix of both metadata performance and data performance.  One thing we noticed very early on is that MDtest and IOR both respond very differently as you play with both block size and subblock size.  What works for one use case may be a very poor option for another use case.
 
Interestingly enough it turned out that the best overall option for our particular use case was an 8MB block size with 32k sub blocks -- as that gave us good Metadata performance and good sequential data performance 
 
which is probably why 32k sub block was the default for so many years 
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 
- Original message -From: "Marc A Kaplan" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?Date: Thu, Aug 2, 2018 10:01 AM Firstly, I do suggest that you run some tests and see how much, if any, difference the settings that are available make in performance and/or storage utilization.Secondly, as I and others have hinted at, deeper in the system, there may be additional parameters and settings.  Sometimes they are available via commands, and/or configuration settings, sometimes not.Sometimes that's just because we didn't want to overwhelm you or ourselves with yet more "tuning knobs".Sometimes it's because we made some component more tunable than we really needed, but did not make all the interconnected components equally or as widely tunable.Sometimes it's because we want to save you from making ridiculous settings that would lead to problems...OTOH, as I wrote before, if a burning requirement surfaces, things may change from release to release... Just as for so many years subblocks per block seemed forever frozen at the number 32.  Now it varies... and then the discussion shifts to why can't it be even more flexible? 
___gpfsug-discuss mailing listgpfsug-discuss at spectrumscale.orghttp://gpfsug.org/mailman/listinfo/gpfsug-discuss
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Marc A Kaplan
Firstly, I do suggest that you run some tests and see how much, if any, 
difference the settings that are available make in performance and/or 
storage utilization.

Secondly, as I and others have hinted at, deeper in the system, there may 
be additional parameters and settings.  Sometimes they are available via 
commands, and/or configuration settings, sometimes not.

Sometimes that's just because we didn't want to overwhelm you or ourselves 
with yet more "tuning knobs".

Sometimes it's because we made some component more tunable than we really 
needed, but did not make all the interconnected components equally or as 
widely tunable.
Sometimes it's because we want to save you from making ridiculous settings 
that would lead to problems...

OTOH, as I wrote before, if a burning requirement surfaces, things may 
change from release to release... Just as for so many years subblocks per 
block seemed forever frozen at the number 32.  Now it varies... and then 
the discussion shifts to why can't it be even more flexible?



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss