Good Morning,

    We have encountered a very odd issue: files are being created that
show as double in size under du compared to what they show under ls or
du --apparent-size.

Under ls we see 119G:
~> ls -lh \
> szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
-rw-rw-r-- 1 edwards lattice 119G Sep 14 21:48 
szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b

which du --apparent-size agrees with:
~> du -h --apparent-size \
> szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
119G    
szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
However, du itself shows 273G, more than double the apparent size, so we
are well beyond any "padding out a block" overhead:
~> du -h \
> szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
273G    
szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
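
As a cross-check, stat reports both numbers at once, since du without
--apparent-size is essentially showing st_blocks (in %B-byte units,
normally 512); a quick sketch against the same file:

~> stat -c 'apparent=%s bytes  allocated=%b blocks of %B bytes' \
> szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b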

There is nothing unusual in the file layout according to lfs getstripe:
~> lfs getstripe \
> szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
szscl21_24_128_b1p50_t_x4p300_um0p0840_sm0p0743_n1p265.genprop.n162.strange.t_0_22_26_28_31.sdb3160b
lmm_stripe_count:  1
lmm_stripe_size:   1048576
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
lmm_pool:          production
        obdidx           objid           objid           group
             0         7431775       0x71665f                0
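
On the ZFS side, something like the following on the OSS (tank/ost0 is a
placeholder for the real OST dataset name) should show whether compression,
copies, or allocation overhead is in play, since ZFS exposes both the
logical and the allocated size of a dataset:

~> zfs get used,logicalused,recordsize,compression,copies tank/ost0  # tank/ost0 is a placeholder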

The client is running:
lustre-client-2.12.6-1.el7.centos.x86_64

The Lustre servers are running:
lustre-osd-zfs-mount-2.12.9-1.el7.x86_64
kmod-lustre-osd-zfs-2.12.9-1.el7.x86_64
kernel-3.10.0-1127.8.2.el7_lustre.x86_64
lustre-2.12.9-1.el7.x86_64
kernel-devel-3.10.0-1127.8.2.el7_lustre.x86_64
kmod-lustre-2.12.9-1.el7.x86_64
kmod-zfs-0.7.13-1.el7.jlab.x86_64
libzfs2-0.7.13-1.el7.x86_64
zfs-0.7.13-1.el7.x86_64

w/r,
Kurt J. Strosahl (he/him)

System Administrator: Lustre, HPC
Scientific Computing Group, Thomas Jefferson National Accelerator Facility