Orlando,

Thanks, that proved to be exactly the cause of my hiccups.  I only
realized this after reading some more in the mmcrfs manual page and
source.
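
In case it helps anyone else hitting this, the layout that is working for me
now looks roughly like the following (disk, server and filesystem names are
placeholders, so treat it as a sketch rather than a recipe): metadataOnly
NSDs go into the "system" pool, dataOnly NSDs into a separate pool, and
mmcrfs gets both block sizes:

    # nsd.stanza
    %nsd: nsd=meta01 device=/dev/sdb servers=nsdsrv1 usage=metadataOnly failureGroup=1 pool=system
    %nsd: nsd=data01 device=/dev/sdc servers=nsdsrv1 usage=dataOnly     failureGroup=2 pool=data

    mmcrnsd -F nsd.stanza
    mmcrfs fs1 -F nsd.stanza -B 16M --metadata-block-size 256K

    # mmlsfs fs1 -B should then report 256K for the system pool and 16M for
    # the other pools.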

But has GPFS' behavior on this actually changed between 3.5.0.0 and
3.5.0.7, or is it just that mmcrfs has become stricter about enforcing
what tscrfs actually does?

Asgeir

On Wed, Feb 12, 2014 at 12:34 AM, Orlando Richards
<[email protected]> wrote:
> Hi Asgeir,
>
> From memory, you need to have the data disks going in as a separate storage 
> pool to have the split block size - so metadata disks in the "system" pool 
> and data disks in, say, the "data" pool. Have you got that split here?
>
> ----
> Orlando
>
> Sent from my phone
>
>> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen <[email protected]> wrote:
>>
>> Hi,
>>
>> I want to create a file system with 16MB for data blocks and 256k for 
>> metadata blocks.  Under filesystem version 13.01 (3.5.0.0) this worked just 
>> fine, even when upgrading GPFS later.
>>
>> However, for a filesystem created with version 13.23 (3.5.0.7), if I specify 
>> both data and metadata block sizes, the metadata block size applies for 
>> both.  If I do not specify metadata block size, the data block size (-B) is 
>> used for both.
>>
>> This has a detrimental impact on our metadataOnly NSDs, as they fill up 
>> pretty quickly.
>>
>>
>> Are any of you aware of updates / bugs in GPFS that might help explain and 
>> alleviate this issue?  Any hints would be appreciated.
>>
>> Regards,
>> Asgeir
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
