Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread ANS
Thank you, Jeff. I created the Lustre filesystem on ZFS freshly, and nobody
else has access to it. Yet when it is mounted on the client, it shows a
variation of around 40TB from the actual space.

So what could be the reason for this variation in size?
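
For anyone wanting to reproduce the comparison, something like the following
should show the gap between the raw pool size, the space the OST dataset can
actually see, and what the client reports (a rough sketch; the pool name and
mount point are examples and may differ):

# On an OSS: raw pool capacity, including space ZFS holds back from datasets
zpool list -p lustre-data

# On the same OSS: the space the datasets in that pool can actually use
zfs list -p -o space -r lustre-data

# On the client: the per-OST capacities that lfs df aggregates
lfs df -h /home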

Thanks,
ANS

On Tue, Jan 1, 2019 at 12:21 PM Jeff Johnson 
wrote:

> Very forward versions... especially on ZFS.
>
> You build OST volumes in a pool. If no other volumes are defined in a pool,
> then 100% of that pool will be available for the OST volume, but the way ZFS
> works, the capacity doesn’t really belong to the OST volume until blocks are
> allocated for writes. So you have a pool of a known size and you’re the
> admin. As long as nobody else can create a ZFS volume in that pool, all of
> the capacity in that pool will eventually go to the OST as new writes occur.
> Keep in mind that the same pool can contain multiple snapshots (if created),
> so the pool is a “potential capacity”, but that capacity could be
> concurrently allocated to OST volume writes, snapshots, and other ZFS
> volumes (if created).
>
> —Jeff
>
>
>
> On Mon, Dec 31, 2018 at 22:20 ANS  wrote:
>
>> Thanks, Jeff. Currently I am using:
>>
>> modinfo zfs | grep version
>> version:0.8.0-rc2
>> rhelversion:7.4
>>
>> lfs --version
>> lfs 2.12.0
>>
>> And this is a fresh install. So is there any way to show that the complete
>> zpool LUN has been allocated to Lustre alone?
>>
>> Thanks,
>> ANS
>>
>>
>>
>> On Tue, Jan 1, 2019 at 11:44 AM Jeff Johnson <
>> jeff.john...@aeoncomputing.com> wrote:
>>
>>> ANS,
>>>
>>> Lustre on top of ZFS has to estimate capacities, and the estimate is fairly
>>> far off when the OSTs are new and empty. As objects are written to the OSTs
>>> and capacity is consumed, the estimate becomes more accurate. At the
>>> beginning it’s so far off that it appears to be an error.
>>>
>>> What version are you running? Some patches have been added to make this
>>> calculation more accurate.
>>>
>>> —Jeff
>>>
>>> On Mon, Dec 31, 2018 at 22:08 ANS  wrote:
>>>
 Dear Team,

 I am trying to configure Lustre with ZFS as the backend filesystem, with two
 servers in HA. After compiling and creating the ZFS pools:

 zpool list
 NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
 lustre-data   54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
 lustre-data1  54.5T  25.1M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
 lustre-data2  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
 lustre-data3  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
 lustre-meta   832G   3.50M  832G   -        16.0E     0%    0%   1.00x  ONLINE  -

 and when mounted on the client:

 lfs df -h
 UUID                       bytes        Used   Available Use% Mounted on
 home-MDT0000_UUID         799.7G        3.2M      799.7G   0% /home[MDT:0]
 home-OST0000_UUID          39.9T       18.0M       39.9T   0% /home[OST:0]
 home-OST0001_UUID          39.9T       18.0M       39.9T   0% /home[OST:1]
 home-OST0002_UUID          39.9T       18.0M       39.9T   0% /home[OST:2]
 home-OST0003_UUID          39.9T       18.0M       39.9T   0% /home[OST:3]

 filesystem_summary:       159.6T       72.0M      159.6T   0% /home

 So out of a total of 54.5T x 4 = 218TB, I am getting only 159TB usable. Can
 anyone explain this discrepancy?

 Also, from a performance perspective, which ZFS and Lustre parameters should
 be tuned?

 --
 Thanks,
 ANS.
 ___
 lustre-discuss mailing list
 lustre-discuss@lists.lustre.org
 http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

>>> --
>>> --
>>> Jeff Johnson
>>> Co-Founder
>>> Aeon Computing
>>>
>>> jeff.john...@aeoncomputing.com
>>> www.aeoncomputing.com
>>> t: 858-412-3810 x1001   f: 858-412-3845
>>> m: 619-204-9061
>>>
>>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>> 
>>>
>>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>>
>>
>>
>> --
>> Thanks,
>> ANS.
>>
> --
> --
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.john...@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>


-- 
Thanks,
ANS.
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread Jeff Johnson
Very forward versions... especially on ZFS.

You build OST volumes in a pool. If no other volumes are defined in a pool,
then 100% of that pool will be available for the OST volume, but the way ZFS
works, the capacity doesn’t really belong to the OST volume until blocks are
allocated for writes. So you have a pool of a known size and you’re the
admin. As long as nobody else can create a ZFS volume in that pool, all of
the capacity in that pool will eventually go to the OST as new writes occur.
Keep in mind that the same pool can contain multiple snapshots (if created),
so the pool is a “potential capacity”, but that capacity could be
concurrently allocated to OST volume writes, snapshots, and other ZFS
volumes (if created).
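
If you want the pool’s capacity pinned to the OST up front rather than left as
potential capacity, one option is a ZFS reservation on the OST dataset. A
minimal sketch, assuming the OST dataset is named lustre-data/ost0 (the name
and size are examples to adjust to your layout):

# Reserve most of the pool for the OST dataset so nothing else can claim it
zfs set reservation=50T lustre-data/ost0

# Confirm the reservation is in place
zfs get reservation,refreservation lustre-data/ost0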

—Jeff



On Mon, Dec 31, 2018 at 22:20 ANS  wrote:

> Thanks, Jeff. Currently I am using:
>
> modinfo zfs | grep version
> version:0.8.0-rc2
> rhelversion:7.4
>
> lfs --version
> lfs 2.12.0
>
> And this is a fresh install. So is there any way to show that the complete
> zpool LUN has been allocated to Lustre alone?
>
> Thanks,
> ANS
>
>
>
> On Tue, Jan 1, 2019 at 11:44 AM Jeff Johnson <
> jeff.john...@aeoncomputing.com> wrote:
>
>> ANS,
>>
>> Lustre on top of ZFS has to estimate capacities, and the estimate is fairly
>> far off when the OSTs are new and empty. As objects are written to the OSTs
>> and capacity is consumed, the estimate becomes more accurate. At the
>> beginning it’s so far off that it appears to be an error.
>>
>> What version are you running? Some patches have been added to make this
>> calculation more accurate.
>>
>> —Jeff
>>
>> On Mon, Dec 31, 2018 at 22:08 ANS  wrote:
>>
>>> Dear Team,
>>>
>>> I am trying to configure Lustre with ZFS as the backend filesystem, with
>>> two servers in HA. After compiling and creating the ZFS pools:
>>>
>>> zpool list
>>> NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>>> lustre-data   54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>>> lustre-data1  54.5T  25.1M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>>> lustre-data2  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>>> lustre-data3  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>>> lustre-meta   832G   3.50M  832G   -        16.0E     0%    0%   1.00x  ONLINE  -
>>>
>>> and when mounted on the client:
>>>
>>> lfs df -h
>>> UUID                       bytes        Used   Available Use% Mounted on
>>> home-MDT0000_UUID         799.7G        3.2M      799.7G   0% /home[MDT:0]
>>> home-OST0000_UUID          39.9T       18.0M       39.9T   0% /home[OST:0]
>>> home-OST0001_UUID          39.9T       18.0M       39.9T   0% /home[OST:1]
>>> home-OST0002_UUID          39.9T       18.0M       39.9T   0% /home[OST:2]
>>> home-OST0003_UUID          39.9T       18.0M       39.9T   0% /home[OST:3]
>>>
>>> filesystem_summary:       159.6T       72.0M      159.6T   0% /home
>>>
>>> So out of a total of 54.5T x 4 = 218TB, I am getting only 159TB usable.
>>> Can anyone explain this discrepancy?
>>>
>>> Also, from a performance perspective, which ZFS and Lustre parameters
>>> should be tuned?
>>>
>>> --
>>> Thanks,
>>> ANS.
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>> --
>> --
>> Jeff Johnson
>> Co-Founder
>> Aeon Computing
>>
>> jeff.john...@aeoncomputing.com
>> www.aeoncomputing.com
>> t: 858-412-3810 x1001   f: 858-412-3845
>> m: 619-204-9061
>>
>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>> 
>>
>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>
>
>
> --
> Thanks,
> ANS.
>
-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread ANS
Thanks, Jeff. Currently I am using:

modinfo zfs | grep version
version:0.8.0-rc2
rhelversion:7.4

lfs --version
lfs 2.12.0

And this is a fresh install. So is there any way to show that the complete
zpool LUN has been allocated to Lustre alone?
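
For reference, a couple of checks that should confirm nothing besides the OST
dataset is consuming each pool (a sketch; adjust the pool name):

# List everything in the pool, including any snapshots, clones, or volumes
zfs list -r -t all lustre-data

# Show where the pool's space is actually going, per dataset
zfs list -r -o space lustre-data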

Thanks,
ANS



On Tue, Jan 1, 2019 at 11:44 AM Jeff Johnson 
wrote:

> ANS,
>
> Lustre on top of ZFS has to estimate capacities, and the estimate is fairly
> far off when the OSTs are new and empty. As objects are written to the OSTs
> and capacity is consumed, the estimate becomes more accurate. At the
> beginning it’s so far off that it appears to be an error.
>
> What version are you running? Some patches have been added to make this
> calculation more accurate.
>
> —Jeff
>
> On Mon, Dec 31, 2018 at 22:08 ANS  wrote:
>
>> Dear Team,
>>
>> I am trying to configure Lustre with ZFS as the backend filesystem, with
>> two servers in HA. After compiling and creating the ZFS pools:
>>
>> zpool list
>> NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>> lustre-data   54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>> lustre-data1  54.5T  25.1M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>> lustre-data2  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>> lustre-data3  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
>> lustre-meta   832G   3.50M  832G   -        16.0E     0%    0%   1.00x  ONLINE  -
>>
>> and when mounted on the client:
>>
>> lfs df -h
>> UUID                       bytes        Used   Available Use% Mounted on
>> home-MDT0000_UUID         799.7G        3.2M      799.7G   0% /home[MDT:0]
>> home-OST0000_UUID          39.9T       18.0M       39.9T   0% /home[OST:0]
>> home-OST0001_UUID          39.9T       18.0M       39.9T   0% /home[OST:1]
>> home-OST0002_UUID          39.9T       18.0M       39.9T   0% /home[OST:2]
>> home-OST0003_UUID          39.9T       18.0M       39.9T   0% /home[OST:3]
>>
>> filesystem_summary:       159.6T       72.0M      159.6T   0% /home
>>
>> So out of a total of 54.5T x 4 = 218TB, I am getting only 159TB usable.
>> Can anyone explain this discrepancy?
>>
>> Also, from a performance perspective, which ZFS and Lustre parameters
>> should be tuned?
>>
>> --
>> Thanks,
>> ANS.
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
> --
> --
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.john...@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>


-- 
Thanks,
ANS.
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread Jeff Johnson
ANS,

Lustre on top of ZFS has to estimate capacities, and the estimate is fairly
far off when the OSTs are new and empty. As objects are written to the OSTs
and capacity is consumed, the estimate becomes more accurate. At the
beginning it’s so far off that it appears to be an error.
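
You can also watch the estimate itself on the OSS side. Roughly (a sketch; the
exact parameter paths can vary between Lustre versions):

# On an OSS: the per-OST statfs values Lustre derives from the ZFS pool
lctl get_param osd-zfs.*.kbytestotal osd-zfs.*.kbytesfree osd-zfs.*.kbytesavail

# On a client, for comparison: the same numbers as lfs df aggregates them
lfs df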

What version are you running? Some patches have been added to make this
calculation more accurate.

—Jeff

On Mon, Dec 31, 2018 at 22:08 ANS  wrote:

> Dear Team,
>
> I am trying to configure Lustre with ZFS as the backend filesystem, with
> two servers in HA. After compiling and creating the ZFS pools:
>
> zpool list
> NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> lustre-data   54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data1  54.5T  25.1M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data2  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data3  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-meta   832G   3.50M  832G   -        16.0E     0%    0%   1.00x  ONLINE  -
>
> and when mounted on the client:
>
> lfs df -h
> UUID                       bytes        Used   Available Use% Mounted on
> home-MDT0000_UUID         799.7G        3.2M      799.7G   0% /home[MDT:0]
> home-OST0000_UUID          39.9T       18.0M       39.9T   0% /home[OST:0]
> home-OST0001_UUID          39.9T       18.0M       39.9T   0% /home[OST:1]
> home-OST0002_UUID          39.9T       18.0M       39.9T   0% /home[OST:2]
> home-OST0003_UUID          39.9T       18.0M       39.9T   0% /home[OST:3]
>
> filesystem_summary:       159.6T       72.0M      159.6T   0% /home
>
> So out of a total of 54.5T x 4 = 218TB, I am getting only 159TB usable.
> Can anyone explain this discrepancy?
>
> Also, from a performance perspective, which ZFS and Lustre parameters
> should be tuned?
>
> --
> Thanks,
> ANS.
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org