I believe this was discussed a while ago, but I was unable to find clear
answers, so I'll re-ask, hopefully in a slightly different way.

On one OSS, I have 30 drives, each 7.6TB. I created three raidz2 zpools of 10
devices each (ashift=12):

[root@lustre47b ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
oss55-0  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x  ONLINE  -
oss55-1  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x  ONLINE  -
oss55-2  69.9T  37.4M  69.9T        -         -     0%     0%  1.00x  ONLINE  -
[root@lustre47b ~]#
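
For reference, each pool was created with something along these lines (the
device names here are placeholders, not the actual ones; the other two pools
used the next ten drives):

zpool create -o ashift=12 oss55-0 raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj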

After running mkfs.lustre against these (and mounting the Lustre targets), I see:

[root@lustre47b ~]# df -h | grep ost
oss55-0/ost165             52T   27M   52T   1% /lustre/ost165
oss55-1/ost166             52T   27M   52T   1% /lustre/ost166
oss55-2/ost167             52T   27M   52T   1% /lustre/ost167
[root@lustre47b ~]#
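
For completeness, the format and mount steps were along these lines (the
fsname and MGS NID are placeholders, and the same was repeated with
--index=166 and --index=167 for the other two pools):

mkfs.lustre --ost --backfstype=zfs --fsname=lustre --index=165 \
    --mgsnode=<mgs-nid> oss55-0/ost165
mount -t lustre oss55-0/ost165 /lustre/ost165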

Basically, we're seeing a pretty dramatic loss in capacity (156TB vs. 209.7TB,
a loss of roughly 54TB). Is there any insight into where this capacity is
disappearing to? Is there some mkfs.lustre or zpool option I missed when
creating these pools? Or is something just reporting slightly off, and the
space really is there?
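
For reference, here is the rough arithmetic behind those numbers (assuming
zpool list's SIZE column includes raidz2 parity space, which is normal ZFS
behavior):

  raw (zpool list):  3 x 69.9T = 209.7T
  usable (df):       3 x 52T   = 156T
  gap:               209.7T - 156T ≈ 53.7T
  raidz2 parity:     2 of 10 drives per vdev = 0.2 x 209.7T ≈ 41.9T of that gap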

Thanks.

—

Makia Minich
Chief Architect
System Fabric Works
"Fabric Computing that Works”

"Oh, I don't know. I think everything is just as it should be, y'know?”
- Frank Fairfield
