Hi,
  We've observed up to 10x the expected space consumption when running a concurrent 200-file iozone write test against an erasure-coded (k=8, m=4) data pool mounted via ceph-fuse, but disk usage is normal when there is only a single write task.
  Furthermore, everything is normal when using a replicated data pool, no matter how many write operations run at the same time.
  
Regards,
Dai
              
#iozone -s 100M -r 1M -i 0 -u 200 -l 200 -+n -w

#mount | grep data01
ceph-fuse on /data01 type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

# du -sh /data01
801M    /data01

#ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       357 TiB     356 TiB      60 GiB      132 GiB          0.04
    ssd       1.3 TiB     1.3 TiB     2.6 GiB      5.6 GiB          0.42
    TOTAL     358 TiB     358 TiB      62 GiB      137 GiB          0.04
 
POOLS:
    POOL            ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    meta_data01      1     166 MiB          64     509 MiB      0.04       423 GiB
    data_data01      2     800 MiB         201     8.9 GiB         0       226 TiB
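For reference, with only the nominal erasure-code overhead of (k+m)/k, the raw usage for this pool should be far lower than what `ceph df` reports. A quick sketch of that arithmetic, using the STORED/USED figures from the output above (this is a sanity check, not a diagnosis of the cause):

```python
# Expected raw usage for the EC pool vs. what `ceph df` reports.
# Numbers are taken from the ceph df output above.

k, m = 8, 4                    # erasure-code profile profile_data01
stored_mib = 800               # STORED for pool data_data01
used_mib = 8.9 * 1024          # USED for pool data_data01 (8.9 GiB)

ec_overhead = (k + m) / k      # nominal EC space factor = 1.5
expected_mib = stored_mib * ec_overhead

print(f"expected raw usage: {expected_mib:.0f} MiB")              # 1200 MiB
print(f"reported raw usage: {used_mib:.0f} MiB")                  # 9114 MiB
print(f"amplification vs. stored: {used_mib / stored_mib:.1f}x")  # 11.4x
```

So the reported usage is roughly 7x higher than even the 1.5x EC overhead would explain.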


#ceph osd pool get data_data01  erasure_code_profile
erasure_code_profile: profile_data01

#ceph osd erasure-code-profile get profile_data01
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=8
m=4
plugin=jerasure
technique=reed_sol_van
w=8
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]