Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Jeff,
We intend to use 10 clients that will mount the file system.

Amjad

On Tue, Oct 31, 2017 at 3:02 AM, Jeff Johnson <
jeff.john...@aeoncomputing.com> wrote:

> Amjad,
>
> You might ask your vendor to propose a single MDT composed of 8 * 500GB
> 2.5" disk drives or, better, SSDs. With some bio applications you would
> benefit from spreading the MDT I/O across more drives.
>
> How many clients do you expect to mount the file system? A standard filer
> (or ZFS/NFS server) will perform well compared to Lustre until you
> bottleneck somewhere in the server hardware (net, disk, CPU, etc.); with
> Lustre you can simply add one or more OSS/OSTs to the file system, and
> the performance potential increases with each additional OSS/OST server.
>
> High availability is nice to have, but it isn't necessary unless your
> environment cannot tolerate any interruption or downtime. If your vendor
> proposes quality hardware, such failures should be infrequent.
>
> --Jeff
>
> On Mon, Oct 30, 2017 at 12:04 PM, Amjad Syed <amjad...@gmail.com> wrote:
>
>> The vendor has proposed a single MDT (4 * 1.2 TB) in RAID 10
>> configuration.
>> The OSTs will be RAID 6, and 2 OSTs are proposed.
>>
>>
>> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:
>>
>>> How many OSTs are behind that OSS?  How many MDTs behind the MDS?
>>>
>>> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on
>>> behalf of Brian Andrus <toomuc...@gmail.com>
>>> Date: Monday, October 30, 2017 at 12:24 PM
>>> To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
>>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>>
>>> Hmm. That is an odd one from a quick thought...
>>>
>>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>>> not a bad way to get started and get used to how everything works. It is
>>> basically single-stripe storage.
>>>
>>> If you are not planning on growing, I would lean towards Gluster on 2
>>> boxes. I do that often, actually. A single MDS/OSS has zero redundancy
>>> unless something is being done at the hardware level, and that would help
>>> with availability.
>>> NFS is quite viable too, but you would be splitting the available
>>> storage across 2 boxes.
>>>
>>> Brian Andrus
>>>
>>>
>>>
>>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>>
>>> Hello
>>> We are in the process of procuring one small Lustre filesystem giving us 120
>>> TB of storage using Lustre 2.X.
>>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>>> Our query is whether this configuration is enough, or whether we need
>>> more OSS nodes.
>>> The MDS and OSS servers are identical with regard to RAM (64 GB) and
>>> HDD (300 GB).
>>>
>>> Thanks
>>> Majid
>>>
>>>
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>>>
>>>
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>
>
> --
> --
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.john...@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite D - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Jeff Johnson
Amjad,

You might ask your vendor to propose a single MDT composed of 8 * 500GB
2.5" disk drives or, better, SSDs. With some bio applications you would
benefit from spreading the MDT I/O across more drives.

How many clients do you expect to mount the file system? A standard filer
(or ZFS/NFS server) will perform well compared to Lustre until you
bottleneck somewhere in the server hardware (net, disk, CPU, etc.); with
Lustre you can simply add one or more OSS/OSTs to the file system, and
the performance potential increases with each additional OSS/OST server.

High availability is nice to have, but it isn't necessary unless your
environment cannot tolerate any interruption or downtime. If your vendor
proposes quality hardware, such failures should be infrequent.

--Jeff

On Mon, Oct 30, 2017 at 12:04 PM, Amjad Syed <amjad...@gmail.com> wrote:

> The vendor has proposed a single MDT (4 * 1.2 TB) in RAID 10
> configuration.
> The OSTs will be RAID 6, and 2 OSTs are proposed.
>
>
> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:
>
>> How many OSTs are behind that OSS?  How many MDTs behind the MDS?
>>
>> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf
>> of Brian Andrus <toomuc...@gmail.com>
>> Date: Monday, October 30, 2017 at 12:24 PM
>> To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>
>> Hmm. That is an odd one from a quick thought...
>>
>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>> not a bad way to get started and get used to how everything works. It is
>> basically single-stripe storage.
>>
>> If you are not planning on growing, I would lean towards Gluster on 2
>> boxes. I do that often, actually. A single MDS/OSS has zero redundancy
>> unless something is being done at the hardware level, and that would help
>> with availability.
>> NFS is quite viable too, but you would be splitting the available storage
>> across 2 boxes.
>>
>> Brian Andrus
>>
>>
>>
>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>
>> Hello
>> We are in the process of procuring one small Lustre filesystem giving us 120
>> TB of storage using Lustre 2.X.
>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>> Our query is whether this configuration is enough, or whether we need
>> more OSS nodes.
>> The MDS and OSS servers are identical with regard to RAM (64 GB) and
>> HDD (300 GB).
>>
>> Thanks
>> Majid
>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>


-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Dilger, Andreas
On Oct 31, 2017, at 07:35, Andrew Elwell  wrote:
> 
> 
> 
> On 31 Oct. 2017 07:20, "Dilger, Andreas"  wrote:
>> 
>> Having a larger MDT isn't bad if you plan future expansion.  That said, you 
>> would get better performance over FDR if you used SSDs for the MDT rather 
>> than HDDs (if you aren't already planning this), and for a single OSS you 
>> probably don't need the extra MDT capacity.  With both ldiskfs+LVM and ZFS 
>> you can also expand the MDT size in the future if you need more capacity.
> 
> Can someone with wiki editing rights summarise the advantages of different 
> hardware combinations? For example I remember Daniel @ NCI had some nice 
> comments about which components (MDS v OSS) benefited from faster cores over 
> thread count and where more RAM was important.
> 
> I feel this would be useful for people building small test systems and 
> comparing vendor responses for large tenders.

Everyone has wiki editing rights; you just need to register...

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Andrew Elwell
On 31 Oct. 2017 07:20, "Dilger, Andreas"  wrote:


Having a larger MDT isn't bad if you plan future expansion.  That said, you
would get better performance over FDR if you used SSDs for the MDT rather
than HDDs (if you aren't already planning this), and for a single OSS you
probably don't need the extra MDT capacity.  With both ldiskfs+LVM and ZFS
you can also expand the MDT size in the future if you need more capacity.


Can someone with wiki editing rights summarise the advantages of different
hardware combinations? For example I remember Daniel @ NCI had some nice
comments about which components (MDS v OSS) benefited from faster cores
over thread count and where more RAM was important.

I feel this would be useful for people building small test systems and
comparing vendor responses for large tenders.

Many thanks,
Andrew
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Dilger, Andreas
On Oct 31, 2017, at 05:46, Mohr Jr, Richard Frank (Rick Mohr)  
wrote:
> 
>> On Oct 30, 2017, at 4:46 PM, Brian Andrus  wrote:
>> 
>> Someone please correct me if I am wrong, but that seems a bit large for an
>> MDT. Of course, drives these days are pretty good-sized, so the extra is
>> probably very inexpensive.
> 
> That probably depends on what the primary usage will be.  If the applications 
> create lots of small files (like some biomed programs), then a larger MDT 
> would result in more inodes allowing more Lustre files to be created.

With mirroring the MDT ends up as ~2.4TB (about 1.2B files for ldiskfs, 600M
files for ZFS), which gives a minimum average file size of 120TB/1.2B = 100KB
on the OSTs (200KB for ZFS).  That said, by default you won't be able to create
that many files on the OSTs unless you reduce the inode ratio for ldiskfs at
format time, or use ZFS (which doesn't have a fixed inode count, but uses twice
as much space per inode on the MDT).
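
As a sanity check on that arithmetic (a sketch only -- the 2 KB and 4 KB
bytes-per-inode figures are inferred from the numbers above, so verify the
mkfs defaults for your Lustre version):

TB = 10**12  # decimal terabytes, as used in the figures above

mdt_usable = 2.4 * TB   # 4 x 1.2 TB drives in RAID-10
fs_capacity = 120 * TB  # total OST capacity

for backend, bytes_per_inode in [("ldiskfs", 2048), ("ZFS", 4096)]:
    inodes = mdt_usable / bytes_per_inode
    min_avg = fs_capacity / inodes  # if every inode is used, files must average this size
    print(f"{backend}: ~{inodes / 1e9:.1f}B inodes, "
          f"min average file size ~{min_avg / 1e3:.0f} KB")

# ldiskfs: ~1.2B inodes, min average file size ~102 KB
# ZFS: ~0.6B inodes, min average file size ~205 KB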

Having a larger MDT isn't bad if you plan future expansion.  That said, you 
would get better performance over FDR if you used SSDs for the MDT rather than 
HDDs (if you aren't already planning this), and for a single OSS you probably 
don't need the extra MDT capacity.  With both ldiskfs+LVM and ZFS you can also 
expand the MDT size in the future if you need more capacity.

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Mohr Jr, Richard Frank (Rick Mohr)

> On Oct 30, 2017, at 4:46 PM, Brian Andrus  wrote:
> 
> Someone please correct me if I am wrong, but that seems a bit large for an
> MDT. Of course, drives these days are pretty good-sized, so the extra is
> probably very inexpensive.

That probably depends on what the primary usage will be.  If the applications
create lots of small files (like some biomed programs), then a larger MDT would
result in more inodes, allowing more Lustre files to be created.

--
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
http://www.nics.tennessee.edu

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Brian Andrus
Someone please correct me if I am wrong, but that seems a bit large for
an MDT. Of course, drives these days are pretty good-sized, so the extra
is probably very inexpensive.


Also, isn't it better to have 1 OST per OSS for parallelism rather than 
adding OSTs to an OSS? I've been doing most of my OSTs as ZFS and 
letting that handle parallel writes across drives within an OSS, which 
has performed well.


Brian Andrus


On 10/30/2017 12:04 PM, Amjad Syed wrote:
The vendor has proposed a single MDT (4 * 1.2 TB) in RAID 10
configuration.

The OSTs will be RAID 6, and 2 OSTs are proposed.


On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:


How many OSTs are behind that OSS?  How many MDTs behind the MDS?

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of
Brian Andrus <toomuc...@gmail.com>
Date: Monday, October 30, 2017 at 12:24 PM
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] 1 MDS and 1 OSS

Hmm. That is an odd one from a quick thought...

However, IF you are planning on growing and adding OSSes/OSTs,
this is not a bad way to get started and get used to how everything
works. It is basically single-stripe storage.

If you are not planning on growing, I would lean towards Gluster
on 2 boxes. I do that often, actually. A single MDS/OSS has zero
redundancy unless something is being done at the hardware level,
and that would help with availability.
NFS is quite viable too, but you would be splitting the available
storage across 2 boxes.

Brian Andrus



On 10/30/2017 12:47 AM, Amjad Syed wrote:

Hello
We are in the process of procuring one small Lustre filesystem giving
us 120 TB of storage using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we
need more OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB)
and HDD (300 GB).

Thanks
Majid


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org




___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Ben Evans
It's not going to matter.  There aren't enough physical drives to push the 
Infiniband link, unless they're all SSDs.

From: Simon Guilbault <simon.guilba...@calculquebec.ca>
Date: Monday, October 30, 2017 at 3:13 PM
To: Amjad Syed <amjad...@gmail.com>
Cc: Ben Evans <bev...@cray.com>, "lustre-discuss@lists.lustre.org"
<lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] 1 MDS and 1 OSS

Hi,

If everything is connected with SAS JBODs and controllers, you could probably
run 1 OST on each server and get better performance that way. With both servers
able to reach the same SAS drives, you could also have failover in case one
server fails.

You can forget about failover if you are using SATA drives.

On Mon, Oct 30, 2017 at 3:04 PM, Amjad Syed <amjad...@gmail.com> wrote:
The vendor has proposed a single MDT (4 * 1.2 TB) in RAID 10 configuration.
The OSTs will be RAID 6, and 2 OSTs are proposed.


On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:
How many OSTs are behind that OSS?  How many MDTs behind the MDS?

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org>
 on behalf of Brian Andrus <toomuc...@gmail.com>
Date: Monday, October 30, 2017 at 12:24 PM
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] 1 MDS and 1 OSS


Hmm. That is an odd one from a quick thought...

However, IF you are planning on growing and adding OSSes/OSTs, this is not a
bad way to get started and get used to how everything works. It is basically
single-stripe storage.

If you are not planning on growing, I would lean towards Gluster on 2 boxes. I
do that often, actually. A single MDS/OSS has zero redundancy unless something
is being done at the hardware level, and that would help with availability.
NFS is quite viable too, but you would be splitting the available storage
across 2 boxes.

Brian Andrus


On 10/30/2017 12:47 AM, Amjad Syed wrote:
Hello
We are in the process of procuring one small Lustre filesystem giving us 120 TB
of storage using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we need more OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
(300 GB).

Thanks
Majid



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Simon Guilbault
Hi,

If everything is connected with SAS JBODs and controllers, you could
probably run 1 OST on each server and get better performance that way. With
both servers able to reach the same SAS drives, you could also have failover
in case one server fails.

You can forget about failover if you are using SATA drives.

On Mon, Oct 30, 2017 at 3:04 PM, Amjad Syed <amjad...@gmail.com> wrote:

> The vendor has proposed a single MDT (4 * 1.2 TB) in RAID 10
> configuration.
> The OSTs will be RAID 6, and 2 OSTs are proposed.
>
>
> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:
>
>> How many OSTs are behind that OSS?  How many MDTs behind the MDS?
>>
>> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf
>> of Brian Andrus <toomuc...@gmail.com>
>> Date: Monday, October 30, 2017 at 12:24 PM
>> To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>
>> Hmm. That is an odd one from a quick thought...
>>
>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>> not a bad way to get started and get used to how everything works. It is
>> basically single-stripe storage.
>>
>> If you are not planning on growing, I would lean towards Gluster on 2
>> boxes. I do that often, actually. A single MDS/OSS has zero redundancy
>> unless something is being done at the hardware level, and that would help
>> with availability.
>> NFS is quite viable too, but you would be splitting the available storage
>> across 2 boxes.
>>
>> Brian Andrus
>>
>>
>>
>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>
>> Hello
>> We are in the process of procuring one small Lustre filesystem giving us 120
>> TB of storage using Lustre 2.X.
>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>> Our query is whether this configuration is enough, or whether we need
>> more OSS nodes.
>> The MDS and OSS servers are identical with regard to RAM (64 GB) and
>> HDD (300 GB).
>>
>> Thanks
>> Majid
>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Andreas,
Thank you for your email.
The interconnect proposed by the vendor is InfiniBand FDR (56 Gb/s).  Each
MDS and OSS will have only one FDR card.
This Lustre filesystem will be used to run life sciences/bioinformatics/genomics
applications.

Will a single OSS handle the FDR interconnect?
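
(A hedged back-of-the-envelope check, using the 40-60 MB/s per RAID-6 data
drive estimate from elsewhere in this thread; the FDR encoding detail is an
assumption about the link, not something stated in the thread:)

# FDR InfiniBand signals at 56 Gb/s over 4 lanes; with 64b/66b encoding
# the payload rate is roughly 54 Gb/s, i.e. ~6.8 GB/s.
fdr_payload_gbs = 56 * 64 / 66               # ~54.3 Gb/s
fdr_payload_mbs = fdr_payload_gbs / 8 * 1000 # ~6788 MB/s

for mb_per_drive in (40, 60):
    print(f"~{fdr_payload_mbs / mb_per_drive:.0f} data drives needed "
          f"to saturate FDR at {mb_per_drive} MB/s per drive")

# ~170 data drives needed to saturate FDR at 40 MB/s per drive
# ~113 data drives needed to saturate FDR at 60 MB/s per drive
#
# A single OSS with ~24-30 HDDs delivers on the order of 1-2 GB/s, so the
# FDR link itself should not be the bottleneck; the disks will be.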

On 30 Oct 2017 4:56 p.m., "Dilger, Andreas" <andreas.dil...@intel.com>
wrote:

> First, to answer Amjad's question - the number of OSS nodes you have
> depends
> on the capacity and performance you need.  For 120TB of total storage
> (assume 30x4TB drives, or 20x6TB drives) a single OSS is definitely
> capable of handling this many drives.  I'd also assume you are using 10Gb
> Ethernet (~= 1GB/s), which a single OSS should be able to saturate (at
> either 40MB/s or 60MB/s per data drive with RAID-6 8+2 LUNs).  If you want
> more capacity or bandwidth, you can add more OSS nodes now or in the future.
>
> As Ravi mentioned, with a single OSS and MDS, you will need to reboot the
> single server in case of failures instead of having automatic failover, but
> for some systems this is fine.
>
> Finally, as for whether Lustre on a single MDS+OSS is better than running
> NFS on a single server, that depends mostly on the application workload.
> NFS is easier to administer than Lustre, and will provide better small file
> performance than Lustre.  NFS also has the benefit that it works with every
> client available.
>
> Interestingly, there are some workloads that users have reported to us
> where a single Lustre OSS will perform better than NFS, because Lustre does
> proper data locking/caching, while NFS has only close-to-open consistency
> semantics, and cannot cache data on the client for a long time.  Any
> workloads where there are multiple writers/readers to the same file will
> just not function properly with NFS.  Lustre will handle a large number of
> clients better than NFS.  For streaming IO loads, Lustre is better able to
> saturate the network (though for slower networks this doesn't really make
> much difference).  Lustre can drive faster networks (e.g. IB) much better
> with LNet than NFS with IPoIB.
>
> And of course, if you think your performance/capacity needs will increase
> in the future, then Lustre can easily scale to virtually any size and
> performance you need, while NFS will not.
>
> In general I wouldn't necessarily recommend Lustre for a single MDS+OSS
> installation, but this depends on your workload and future plans.
>
> Cheers, Andreas
>
> On Oct 30, 2017, at 15:59, E.S. Rosenberg <esr+lus...@mail.hebrew.edu>
> wrote:
> >
> > Maybe someone can answer this in the context of this question, is there
> any performance gain over classic filers when you are using only a single
> OSS?
> >
> > On Mon, Oct 30, 2017 at 9:56 AM, Ravi Konila <ravibh...@gmail.com>
> wrote:
> > Hi Majid
> >
> > It is better to go for HA for both OSS and MDS. You would need two MDS
> > nodes and two OSS nodes (identical configuration).
> > Also use the latest Lustre release, 2.10.1.
> >
> > Regards
> > Ravi Konila
> >
> >
> >> From: Amjad Syed
> >> Sent: Monday, October 30, 2017 1:17 PM
> >> To: lustre-discuss@lists.lustre.org
> >> Subject: [lustre-discuss] 1 MDS and 1 OSS
> >>
> >> Hello
> >> We are in process in procuring one small Lustre filesystem giving us
> 120 TB  of storage using Lustre 2.X.
> >> The vendor has proposed only 1 MDS and 1 OSS as a solution.
> >> The query we have is that is this configuration enough , or we need
> more OSS?
> >> The MDS and OSS server are identical  with regards to RAM (64 GB) and
> HDD (300GB)
> >>
> >> Thanks
> >> Majid
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Intel Corporation
>
>
>
>
>
>
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Ben Evans
How many OSTs are behind that OSS?  How many MDTs behind the MDS?

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org>
 on behalf of Brian Andrus <toomuc...@gmail.com>
Date: Monday, October 30, 2017 at 12:24 PM
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] 1 MDS and 1 OSS


Hmm. That is an odd one from a quick thought...

However, IF you are planning on growing and adding OSSes/OSTs, this is not a
bad way to get started and get used to how everything works. It is basically
single-stripe storage.

If you are not planning on growing, I would lean towards Gluster on 2 boxes. I
do that often, actually. A single MDS/OSS has zero redundancy unless something
is being done at the hardware level, and that would help with availability.
NFS is quite viable too, but you would be splitting the available storage
across 2 boxes.

Brian Andrus


On 10/30/2017 12:47 AM, Amjad Syed wrote:
Hello
We are in the process of procuring one small Lustre filesystem giving us 120 TB
of storage using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we need more OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
(300 GB).

Thanks
Majid



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Brian Andrus

Hmm. That is an odd one from a quick thought...

However, IF you are planning on growing and adding OSSes/OSTs, this is
not a bad way to get started and get used to how everything works. It is
basically single-stripe storage.


If you are not planning on growing, I would lean towards Gluster on 2
boxes. I do that often, actually. A single MDS/OSS has zero redundancy
unless something is being done at the hardware level, and that would help
with availability.
NFS is quite viable too, but you would be splitting the available
storage across 2 boxes.


Brian Andrus



On 10/30/2017 12:47 AM, Amjad Syed wrote:

Hello
We are in the process of procuring one small Lustre filesystem giving us
120 TB of storage using Lustre 2.X.

The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we need
more OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB) and
HDD (300 GB).


Thanks
Majid


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Dilger, Andreas
First, to answer Amjad's question - the number of OSS nodes you have depends
on the capacity and performance you need.  For 120TB of total storage (assume
30x4TB drives, or 20x6TB drives) a single OSS is definitely capable of
handling this many drives.  I'd also assume you are using 10Gb Ethernet (~=
1GB/s), which a single OSS should be able to saturate (at either 40MB/s or
60MB/s per data drive with RAID-6 8+2 LUNs).  If you want more capacity or
bandwidth, you can add more OSS nodes now or in the future.
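
(A sketch of that sizing arithmetic, under the assumptions stated above;
the drive counts are illustrative, not a recommendation:)

link_mbs = 1000  # 10Gb Ethernet ~= 1 GB/s, per the assumption above

for mb_per_drive in (40, 60):
    print(f"~{link_mbs / mb_per_drive:.0f} data drives saturate 10GbE "
          f"at {mb_per_drive} MB/s per drive")

# ~25 data drives saturate 10GbE at 40 MB/s per drive
# ~17 data drives saturate 10GbE at 60 MB/s per drive
#
# 30 x 4TB drives arranged as three RAID-6 8+2 LUNs give 24 data drives,
# which lands right around the point where the 10GbE link becomes the
# bottleneck -- hence a single OSS can saturate it.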

As Ravi mentioned, with a single OSS and MDS, you will need to reboot the 
single server in case of failures instead of having automatic failover, but for 
some systems this is fine.

Finally, as for whether Lustre on a single MDS+OSS is better than running NFS 
on a single server, that depends mostly on the application workload.  NFS is 
easier to administer than Lustre, and will provide better small file 
performance than Lustre.  NFS also has the benefit that it works with every 
client available.

Interestingly, there are some workloads that users have reported to us where a 
single Lustre OSS will perform better than NFS, because Lustre does proper data 
locking/caching, while NFS has only close-to-open consistency semantics, and 
cannot cache data on the client for a long time.  Any workloads where there are 
multiple writers/readers to the same file will just not function properly with 
NFS.  Lustre will handle a large number of clients better than NFS.  For 
streaming IO loads, Lustre is better able to saturate the network (though for 
slower networks this doesn't really make much difference).  Lustre can drive 
faster networks (e.g. IB) much better with LNet than NFS with IPoIB.

And of course, if you think your performance/capacity needs will increase in 
the future, then Lustre can easily scale to virtually any size and performance 
you need, while NFS will not.

In general I wouldn't necessarily recommend Lustre for a single MDS+OSS 
installation, but this depends on your workload and future plans.

Cheers, Andreas

On Oct 30, 2017, at 15:59, E.S. Rosenberg <esr+lus...@mail.hebrew.edu> wrote:
> 
> Maybe someone can answer this in the context of this question, is there any 
> performance gain over classic filers when you are using only a single OSS?
> 
> On Mon, Oct 30, 2017 at 9:56 AM, Ravi Konila <ravibh...@gmail.com> wrote:
> Hi Majid
>  
> It is better to go for HA for both OSS and MDS. You would need two MDS nodes
> and two OSS nodes (identical configuration).
> Also use the latest Lustre release, 2.10.1.
>  
> Regards
> Ravi Konila
>  
>  
>> From: Amjad Syed
>> Sent: Monday, October 30, 2017 1:17 PM
>> To: lustre-discuss@lists.lustre.org
>> Subject: [lustre-discuss] 1 MDS and 1 OSS
>>  
>> Hello
>> We are in the process of procuring one small Lustre filesystem giving us 120 TB
>> of storage using Lustre 2.X.
>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>> Our query is whether this configuration is enough, or whether we need more OSS nodes.
>> The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
>> (300 GB).
>>  
>> Thanks
>> Majid

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread E.S. Rosenberg
Maybe someone can answer this in the context of this question, is there any
performance gain over classic filers when you are using only a single OSS?

On Mon, Oct 30, 2017 at 9:56 AM, Ravi Konila <ravibh...@gmail.com> wrote:

> Hi Majid
>
> It is better to go for HA for both OSS and MDS. You would need two MDS
> nodes and two OSS nodes (identical configuration).
> Also use the latest Lustre release, 2.10.1.
>
> Regards
> *Ravi Konila*
>
>
> *From:* Amjad Syed
> *Sent:* Monday, October 30, 2017 1:17 PM
> *To:* lustre-discuss@lists.lustre.org
> *Subject:* [lustre-discuss] 1 MDS and 1 OSS
>
> Hello
> We are in the process of procuring one small Lustre filesystem giving us 120
> TB of storage using Lustre 2.X.
> The vendor has proposed only 1 MDS and 1 OSS as a solution.
> Our query is whether this configuration is enough, or whether we need more
> OSS nodes.
> The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
> (300 GB).
>
> Thanks
> Majid
>
> --
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Ravi Konila
Hi Majid

It is better to go for HA for both OSS and MDS. You would need two MDS nodes and
two OSS nodes (identical configuration).
Also use the latest Lustre release, 2.10.1.

Regards
Ravi Konila


From: Amjad Syed 
Sent: Monday, October 30, 2017 1:17 PM
To: lustre-discuss@lists.lustre.org 
Subject: [lustre-discuss] 1 MDS and 1 OSS

Hello 
We are in the process of procuring one small Lustre filesystem giving us 120 TB
of storage using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we need more OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
(300 GB).

Thanks
Majid



___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Hello
We are in the process of procuring one small Lustre filesystem giving us 120
TB of storage using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
Our query is whether this configuration is enough, or whether we need more
OSS nodes.
The MDS and OSS servers are identical with regard to RAM (64 GB) and HDD
(300 GB).

Thanks
Majid
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org