Re: [lustre-discuss] ZFS and OST Space Difference

2021-04-06 Thread Saravanaraj Ayyampalayam via lustre-discuss
I think you are correct. ‘zpool list’ shows raw space; ‘zfs list’ shows the
space remaining after parity, etc. In a 10-disk raidz2, ~24% of the space is
reserved for parity.
This website helps with calculating ZFS capacity:
https://wintelguy.com/zfs-calc.pl
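As a rough back-of-the-envelope version (a sketch only: it assumes 7.68 TB
nominal drives, which is what lines up with the 69.9T zpool reports, and exact
raidz2 overhead also depends on ashift and recordsize padding):

    # per drive:  7.68e12 B / 2^40  ~=  6.99 TiB
    # raw pool:   10 * 6.99         ~= 69.9 TiB   <- zpool list "SIZE"
    # data space: 8/10 * 69.9       ~= 55.9 TiB   <- two disks of parity gone
    # slop:       ~1/32 of that     ~=  1.7 TiB   held back by spa_slop_shift=5
    # raidz allocation padding and metadata make up most of the remaining gap
    # down to the ~52T that df reports.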

-Raj

> On Apr 6, 2021, at 4:56 PM, Laura Hild via lustre-discuss wrote:
>
> > I am not sure about the discrepancy of 3T.  Maybe that is due to some ZFS
> > and/or Lustre overhead?
>
> Slop space?
>
> https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#spa-slop-shift
>
> -Laura
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] ZFS and OST Space Difference

2021-04-06 Thread Laura Hild via lustre-discuss
> I am not sure about the discrepancy of 3T.  Maybe that is due to some ZFS 
> and/or Lustre overhead?

Slop space?

https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#spa-slop-shift
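The default is spa_slop_shift=5, i.e. ZFS holds back roughly 1/32 of the pool
so it can never fill completely; on a ~69.9T pool that is on the order of 2T.
On ZFS-on-Linux you can inspect (and, carefully, change) it via the module
parameter, e.g.:

    # current slop shift: 5 means 1/32 of the pool is reserved
    cat /sys/module/zfs/parameters/spa_slop_shift
    # a larger shift shrinks the reserve (6 => 1/64); change with care
    echo 6 > /sys/module/zfs/parameters/spa_slop_shift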

-Laura



From: lustre-discuss on behalf of Mohr, Rick via lustre-discuss
Sent: Tuesday, 6 April 2021 16:34
To: Makia Minich; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] [EXTERNAL] ZFS and OST Space Difference

Makia,

The drive sizes are 7.6 TB, which translates to about 6.9 TiB (the unit that
zpool uses for "T").  So the zpool size is just 10 x 6.9T = 69T, since zpool
shows the total amount of disk space available to the pool.  The usable space
(which is what df is reporting) should be more like 0.8 x 69T = 55T.  I am not
sure about the discrepancy of 3T.  Maybe that is due to some ZFS and/or Lustre
overhead?
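You can see the two accountings side by side with the stock commands (oss55-0
as the example pool; zpool counts parity disks, zfs does not):

    # pool-level view: SIZE/FREE count every disk, parity included
    zpool list -o name,size,allocated,free oss55-0
    # dataset-level view: space actually usable after raidz2 parity
    zfs list -o name,used,available oss55-0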

--Rick

On 4/6/21, 3:49 PM, "lustre-discuss on behalf of Makia Minich" wrote:

    Basically, we’re seeing a pretty dramatic loss in capacity (156TB vs
    209.7TB, so a loss of about 50TB). Is there any insight on where this
    capacity is disappearing to?
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] ZFS and OST Space Difference

2021-04-06 Thread Makia Minich
I believe this was discussed a while ago, but I was unable to find clear 
answers, so I’ll re-ask in hopefully a slightly different way.

On an OST, I have 30 drives, each at 7.6TB. I create 3 raidz2 zpools of 10 
devices (ashift=12):

[root@lustre47b ~]# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
oss55-0  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
oss55-1  69.9T  37.3M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
oss55-2  69.9T  37.4M  69.9T        -         -     0%     0%  1.00x    ONLINE  -
[root@lustre47b ~]#

After running mkfs.lustre against these (and mounting them as Lustre OSTs), I see:

[root@lustre47b ~]# df -h | grep ost
oss55-0/ost165 52T   27M   52T   1% /lustre/ost165
oss55-1/ost166 52T   27M   52T   1% /lustre/ost166
oss55-2/ost167 52T   27M   52T   1% /lustre/ost167
[root@lustre47b ~]#

Basically, we’re seeing a pretty dramatic loss in capacity (156TB vs 209.7TB,
so a loss of about 50TB). Is there any insight on where this capacity is
disappearing to? Is there some mkfs.lustre or zpool option I missed in creating
these pools? Or is something just reporting slightly off, and the space really
is there?
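(For reference, the raw vs. usable views can be compared directly with the
stock ZFS commands below; -p prints exact byte counts, and "zfs list -o space"
breaks down where used space goes.)

    # exact raw pool size in bytes (parity disks included)
    zpool list -p -o name,size,free oss55-0
    # exact usable bytes after raidz2 parity, which is what df sees
    zfs list -p -o name,used,available oss55-0
    # per-dataset breakdown: snapshots, children, reservations
    zfs list -o space oss55-0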

Thanks.

—

Makia Minich
Chief Architect
System Fabric Works
"Fabric Computing that Works”

"Oh, I don't know. I think everything is just as it should be, y'know?”
- Frank Fairfield

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org