Re: [lustre-discuss] Lustre Sizing

2019-01-03 Thread Jongwoo Han
There is a ZFS ocf_heartbeat resource agent, which lets you import/export
and fail over a zpool with pcs.

https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/ZFS

This ZFS-based NAS guide is also a good reference, except that it explains
building a NAS (exported over NFS) rather than Lustre:

https://github.com/ewwhite/zfs-ha/wiki

Once the zpool import/export from the failed server to the live server is
done, it is straightforward to mount the ZFS-backed Lustre OST with a
"mount -t lustre <pool>/<dataset> <mountpoint>" command. This mount step can
be integrated with the ZFS heartbeat agent above.
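
A minimal pcs sketch of how that could be wired together (the pool, dataset,
and mount point names are placeholders loosely following this thread, and
fencing/constraint details are left out, so treat it as a starting point only):

# zpool import/export handled by the ClusterLabs ZFS resource agent
pcs resource create ost0-pool ocf:heartbeat:ZFS pool=lustre-data \
    op monitor interval=30s
# mount the ZFS-backed OST once the pool is imported on the active node
pcs resource create ost0-lustre ocf:heartbeat:Filesystem \
    device=lustre-data/ost0 directory=/mnt/ost0 fstype=lustre
# keep the mount with its pool and start them in order
pcs constraint colocation add ost0-lustre with ost0-pool INFINITY
pcs constraint order ost0-pool then ost0-lustre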

On Fri, Jan 4, 2019 at 2:42 PM ANS  wrote:

> Thank you Jongwoo Han for the detailed explanation.
>
> Can anyone let me know how I can configure HA for the ZFS pools on CentOS
> 7.4?
>
> It is different from the normal Lustre HA.
>
> Thanks,
> ANS
>


Re: [lustre-discuss] Lustre Sizing

2019-01-02 Thread Jongwoo Han
1. The "zpool list" command shows the zpool size as the raw sum of all
physical drives. When you create a raidz2 volume (equivalent to RAID-6) from
10 x 6 TB drives, the zpool size counts up to 60 TB (54 TiB), while the
usable space reported by "lfs df" is 8 x 6 TB = 48 TB (about 42 TiB). Since
"lfs df -h" shows TiB sizes rounded down, you will see even less; try
"df -H" on the OSS.

2. A zpool-based MDT assigns inodes dynamically, unlike an ext4-based MDT, so
the total inode count will grow as you create more files. This becomes clearly
visible after several million files have been created; try recording the
current inode count and comparing it again after creating a lot of metadata.
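
A quick way to see both effects (a rough sketch; the pool, mount point, and
filesystem names simply follow this thread and will differ on your system):

zpool list lustre-data                     # raw size, parity drives included
zfs list -o name,used,avail lustre-data    # usable size after raidz2 parity
df -H /mnt/ost0                            # on the OSS, if the OST is mounted there
lfs df -i /home                            # on a client: MDT inode totals grow as files are created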


Re: [lustre-discuss] Lustre Sizing

2019-01-01 Thread ANS
Thank you Jeff. I found the explanation for this: the variation comes from
ZFS rather than from Lustre, because of the parity that is taken into account.

But the metadata target should set aside about 60% for inode creation, which
is not happening with ZFS compared to an ext4 (ldiskfs) MDT.

Thanks,
ANS


Re: [lustre-discuss] Lustre Sizing

2019-01-01 Thread Robin Humble
On Tue, Jan 01, 2019 at 01:05:22PM +0530, ANS wrote:
>So what could be the reason for this variation of the size.

with our ZFS 0.7.9 + Lustre 2.10.6 the "lfs df" numbers seem to be the
same as those from "zfs list" (not "zpool list").

so I think your question is more about ZFS than Lustre.

the number of devices in each ZFS vdev, raid level, what size files you
write, ashift, recordsize, ... all will affect the total space available.
see attached for an example.

ZFS's space estimates are also pessimistic as it doesn't know what size
files are going to be written.

if you want more accurate numbers then perhaps create a small but
realistic zpool and zfs filesystem (using say, 1G files as devices) and
then fill it up with files representative of your workload and see how
many fit on. I just filled them up with large dd's to make the above
graph, so YMMV.
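
A minimal sketch of that experiment, using sparse 1 GiB files as stand-in
devices and a raidz2 layout similar to this thread (names are arbitrary):

# ten 1 GiB sparse files acting as vdevs
for i in $(seq 1 10); do truncate -s 1G /var/tmp/vdev$i; done
# raidz2 test pool plus a dataset to fill with representative files
zpool create testpool raidz2 /var/tmp/vdev{1..10}
zfs create testpool/testfs
zfs list -o name,used,avail testpool testpool/testfs
# clean up when done
zpool destroy testpool
rm -f /var/tmp/vdev*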

cheers,
robin


Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread ANS
Thank you Jeff. I have created the Lustre filesystem on ZFS freshly and
nothing else has access to it. When it is mounted on a client, it shows
around a 40 TB variation from the actual space.

So what could be the reason for this variation in size?

Thanks,
ANS



Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread Jeff Johnson
Very forward versions... especially on ZFS.

You build OST volumes in a pool. If no other volumes are defined in a pool,
then 100% of that pool will be available to the OST volume, but the way ZFS
works, the capacity doesn’t really belong to the OST volume until blocks are
allocated for writes. So you have a pool of a known size and you’re the admin.
As long as nobody else can create a ZFS volume in that pool, all of the
capacity in that pool will eventually go to the OST as new writes occur. Keep
in mind that the same pool can contain multiple snapshots (if created), so the
pool is a “potential capacity”, but that capacity could be concurrently
allocated to OST volume writes, snapshots and other ZFS volumes (if created).
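
A quick way to see that shared “potential capacity” on the OSS (a sketch; the
pool name follows the original post): every dataset, snapshot and zvol in a
pool draws from the same pool-wide free space.

zfs list -r -o name,used,avail,refer lustre-data
zfs list -t snapshot -r lustre-data
zpool list lustre-data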

—Jeff





Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread ANS
Thanks Jeff. Currently I am using:

modinfo zfs | grep version
version:        0.8.0-rc2
rhelversion:    7.4

lfs --version
lfs 2.12.0

And this is a fresh install. So is there any other way to show that the
complete zpool LUN has been allocated to Lustre alone?

Thanks,
ANS





Re: [lustre-discuss] Lustre Sizing

2018-12-31 Thread Jeff Johnson
ANS,

Lustre on top of ZFS has to estimate capacities and it’s fairly off when
the OSTs are new and empty. As objects are written to OSTs and capacity is
consumed it gets the sizing of capacity more accurate. At the beginning
it’s so off that it appears to be an error.

What version are you running? Some patches have been added to make this
calculation more accurate.

—Jeff

On Mon, Dec 31, 2018 at 22:08 ANS  wrote:

> Dear Team,
>
> I am trying to configure Lustre with ZFS as the backend file system, with 2
> servers in HA. But after compiling and creating the ZFS pools:
>
> zpool list
> NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> lustre-data   54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data1  54.5T  25.1M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data2  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-data3  54.5T  25.8M  54.5T  -        16.0E     0%    0%   1.00x  ONLINE  -
> lustre-meta   832G   3.50M  832G   -        16.0E     0%    0%   1.00x  ONLINE  -
>
> and when mounted to client
>
> lfs df -h
> UUID                   bytes     Used   Available  Use%  Mounted on
> home-MDT0000_UUID     799.7G     3.2M      799.7G    0%  /home[MDT:0]
> home-OST0000_UUID      39.9T    18.0M       39.9T    0%  /home[OST:0]
> home-OST0001_UUID      39.9T    18.0M       39.9T    0%  /home[OST:1]
> home-OST0002_UUID      39.9T    18.0M       39.9T    0%  /home[OST:2]
> home-OST0003_UUID      39.9T    18.0M       39.9T    0%  /home[OST:3]
>
> filesystem_summary:   159.6T    72.0M      159.6T    0%  /home
>
> So out of a total of 54.5 TB x 4 = 218 TB I am getting only 159 TB usable.
> Can anyone give me some information regarding this?
>
> Also, from a performance perspective, what ZFS and Lustre parameters
> should be tuned?
>
> --
> Thanks,
> ANS.
-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org