Hi,

I am currently doing an AFM based synchronization between 2 GPFS filesystems 
using a multicluster connection.

It works quite well, apart from the fact that on the cache FS we noticed
directories take 4x the size they have on the home FS:

[root@node ~]# stat /newfs/fileset/dir
  File: /newfs/fileset/dir
  Size: 16384           Blocks: 32         IO Block: 262144 directory
Device: 2eh/46d Inode: 14893057    Links: 25
Access: (2775/drwxrwsr-x)  Uid: ( xxxx/ UNKNOWN)   Gid: ( yyyy/ UNKNOWN)
Access: 2023-06-02 08:09:25.659095673 +0000
Modify: 2023-01-27 08:56:09.636343000 +0000
Change: 2023-06-01 13:22:08.972571000 +0000
Birth: -

[root@node ~]# stat /oldFS/fileset/dir
  File: /oldFS/fileset/dir
  Size: 4096            Blocks: 1          IO Block: 131072 directory
Device: 32h/50d Inode: 8590516352  Links: 25
Access: (2775/drwxrwsr-x)  Uid: ( xxxx/ UNKNOWN)   Gid: ( yyyy/ UNKNOWN)
Access: 2023-06-02 09:09:40.483041330 +0000
Modify: 2023-01-27 08:56:09.636343000 +0000
Change: 2023-01-27 08:56:09.644167000 +0000
Birth: -

I saw somewhere that AFM extended attributes should take around 200 bytes, so I
am a bit puzzled as to why there is this much difference here.
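As a side note, the "Blocks" field reported by stat is in 512-byte units, so the cache directory actually occupies 32 x 512 = 16 KiB on disk, while the apparent sizes are the 16384 vs. 4096 shown above. A minimal sketch to pull both numbers out for comparison (assuming GNU coreutils stat; the path is a placeholder, substitute your fileset directory):

```shell
# Compare apparent size vs. allocated size for a directory.
# stat's %b ("Blocks") is always in 512-byte units, independent of the
# filesystem's IO block size shown by %o.
dir=$(mktemp -d)                  # placeholder; use the fileset dir instead
apparent=$(stat -c '%s' "$dir")   # the "Size:" field, in bytes
blocks=$(stat -c '%b' "$dir")     # the "Blocks:" field, 512-byte units
allocated=$((blocks * 512))
echo "apparent=${apparent} allocated=${allocated}"
```

On GPFS the allocated figure can legitimately differ from the apparent size because of subblock allocation, which is why comparing both fields across home and cache is useful.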

I disabled the AFM relationship between the synced filesets, but the size stays
the same.

If I create a directory manually on the new filesystem, its size is 4k as expected.

Any idea why we get this behaviour? GPFS version is 5.1.6.1 on the new cluster
and 5.1.2.8 on the old cluster.

Thanks,

Dieter
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org