Thank you all for the answers. Just a quick question: what is the difference 
between gpfsperf and tsqosperf? I knew the former but not the latter (mentioned 
in your presentation). Do they perform I/O tests in different ways?

thanks,

   Alvise
________________________________
From: [email protected] 
[[email protected]] on behalf of Sven Oehme 
[[email protected]]
Sent: Thursday, March 21, 2019 6:35 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Clarification about blocksize in standard GPFS 
and GNR

Lots of details in a presentation I did last year before I left IBM --> 
http://files.gpfsug.org/presentations/2018/Singapore/Sven_Oehme_ESS_in_CORAL_project_update.pdf

Sven

From: <[email protected]> on behalf of Daniel Kidger 
<[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Thursday, March 21, 2019 at 10:15 AM
To: <[email protected]>
Cc: <[email protected]>
Subject: Re: [gpfsug-discuss] Clarification about blocksize in standard GPFS 
and GNR

Alvise,

Also note that the DeveloperWorks page was maintained by Scott Fadden. He has 
since left IBM, and the page has unfortunately not been updated for almost two 
years. :-(

That page predates the current version 5.x of Spectrum Scale, which has been 
available since the beginning of 2018.

In version 5.x, the statement that there are 32 sub-blocks in one block is no 
longer true. Now, by default, you get a 4 MiB filesystem block size with 512 
sub-blocks, each 8192 bytes long.
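
As a quick sanity check (a sketch only: "gs0" here is a placeholder for your 
own filesystem device), mmlsfs reports both values:

    mmlsfs gs0 -B -f

On a 5.x filesystem created with the defaults, -B should report 4194304 and -f 
should report 8192, i.e. 4 MiB / 512 sub-blocks = 8192 bytes per sub-block.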
Daniel

_________________________________________________________
Daniel Kidger
IBM Technical Sales Specialist
Spectrum Scale, Spectrum NAS and IBM Cloud Object Store

+44-(0)7818 522 266
[email protected]

----- Original message -----
From: [email protected]
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: Re: [gpfsug-discuss] Clarification about blocksize in standard GPFS 
and GNR
Date: Thu, Mar 21, 2019 1:32 PM

The underlying device in this context is the NSD (Network Shared Disk). This 
has no relation at all to 512-byte or 4K disk blocks. The filesystem block size 
is usually around a megabyte, and always a power of two.
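
For instance (the device names here are hypothetical), you can compare the two 
sizes on a node:

    mmlsfs gs0 -B               # GPFS filesystem block size, e.g. 4194304
    blockdev --getss /dev/sdb   # disk logical sector size, e.g. 512 or 4096

GPFS chooses its block size independently of the disk sector size.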

  -- ddj
Dave Johnson

On Mar 21, 2019, at 9:22 AM, Dorigo Alvise (PSI) 
<[email protected]> wrote:

Hi,
I'm a little bit puzzled about the different meanings of blocksize for 
different GPFS installations (standard and GNR).

From this page 
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/File%20System%20Planning

I read:

  *   The blocksize is the largest size I/O that GPFS can issue to the 
underlying device
  *   A subblock is 1/32nd of the blocksize. This is the smallest allocation to 
a single file
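
To make the arithmetic concrete: under the 1/32 rule above, a filesystem with a 
1 MiB block size has 1 MiB / 32 = 32 KiB sub-blocks, so the smallest amount of 
space a single file can occupy is 32 KiB.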
For non-GNR GPFS the device is quite clear to me (I hope): it is a single 
spinning disk (or SSD). And I verified this on a small cluster composed of NSD 
servers using their local hard drives.

Can someone explain what the "device" is in the case of GNR? A single pdisk?

Thanks,

  Alvise



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
