[lustre-discuss] Updating kernel will require recompilation of lustre kernel modules?

2018-09-29 Thread Amjad Syed
Hello
We have an HPC cluster running RHEL 7.4 with Lustre 2.0.
Last week Red Hat released an advisory to update the kernel to fix the Mutagen
Astronomy bug (CVE-2018-14634).

The question is: if we upgrade the kernel on the MDS/OSS nodes and on the Linux
clients, do we need to recompile Lustre against the updated kernel version?
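
A minimal sketch of what the rebuild would look like, assuming a Lustre source
tree and the kernel-devel package for the new kernel (paths illustrative,
untested here):

  cd lustre-release
  ./configure --with-linux=/usr/src/kernels/<new-kernel-version>
  make rpms     # then install the rebuilt Lustre module RPMs and reboot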
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Single server as mds and oss with a single ib card for lustre 2.x???

2017-11-26 Thread Amjad Syed
Hello
I know this is not the recommended Lustre architecture, but the vendor has
proposed that we use a single server as both MDS and OSS, with one IB card
connecting to 120 TB of storage.
Is it even possible? Performance will go down, but performance aside, does
Lustre 2.x support this kind of configuration?
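
A minimal sketch of what such a combined node would look like, assuming
ldiskfs targets on separate block devices (device names and the o2ib NID are
illustrative, untested):

  mkfs.lustre --fsname=lfs01 --mgs --mdt --index=0 /dev/sdb
  mkfs.lustre --fsname=lfs01 --ost --index=0 --mgsnode=192.168.10.1@o2ib /dev/sdc
  mount -t lustre /dev/sdb /mnt/mdt
  mount -t lustre /dev/sdc /mnt/ost0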
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] HPC Head node clustering and lustre

2017-11-21 Thread Amjad Syed
Hello,
We have a small 10-node compute cluster with a single management/head node
that also serves as the login node. Management considers this head node a
single point of failure, so they are planning to buy a second
head/login/management node and build an active-passive cluster using Red Hat
pcs. My question is: will the Lustre file system mounted on these two nodes be
affected when a failover between the two nodes happens?
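
For context, the Lustre mount on the head nodes is an ordinary client mount;
if that mount were placed under pcs control, a sketch of the resource (names
and the MGS NID are illustrative, untested) would be something like:

  pcs resource create lustre_fs ocf:heartbeat:Filesystem \
      device="mgs01@o2ib:/lustre" directory="/lustre" fstype="lustre" \
      op monitor interval=20s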
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Jeff,
We intend to use 10 clients that will mount the file system.

Amjad

On Tue, Oct 31, 2017 at 3:02 AM, Jeff Johnson <
jeff.john...@aeoncomputing.com> wrote:

> Amjad,
>
> You might ask your vendor to propose a single MDT composed of (8 * 500GB)
> 2.5" disk drives or, better, SSDs. With some bio applications you would
> benefit from spreading the MDT I/O across more drives.
>
> How many clients do you expect to mount the file system? A standard filer
> (or ZFS/NFS server) will perform well compared to Lustre until you hit a
> bottleneck somewhere in the server hardware (net, disk, cpu, etc.). With
> Lustre you can simply add one or more OSS/OSTs to the file system, and the
> performance potential increases with the number of additional OSS/OST servers.
>
> High availability is nice to have, but it isn't necessary unless your
> environment cannot tolerate any interruption or downtime. If your vendor
> proposes quality hardware, such outages are infrequent.
>
> --Jeff
>
> On Mon, Oct 30, 2017 at 12:04 PM, Amjad Syed <amjad...@gmail.com> wrote:
>
>> The vendor has proposed a single MDT (4 * 1.2 TB) in a RAID 10
>> configuration.
>> The OSTs will be RAID 6, and 2 OSTs are proposed.
>>
>>
>> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <bev...@cray.com> wrote:
>>
>>> How many OSTs are behind that OSS?  How many MDTs are behind the MDS?
>>>
>>> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on
>>> behalf of Brian Andrus <toomuc...@gmail.com>
>>> Date: Monday, October 30, 2017 at 12:24 PM
>>> To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
>>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>>
>>> Hmm. That is an odd one from a quick thought...
>>>
>>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>>> not a bad way to get started and get used to how everything works. It is
>>> basically single-stripe storage.
>>>
>>> If you are not planning on growing, I would lean towards Gluster on 2
>>> boxes; I do that often, actually. A single MDS/OSS has zero redundancy,
>>> unless something is being done at the hardware level, and that would help
>>> with availability.
>>> NFS is quite viable too, but you would be splitting the available
>>> storage across 2 boxes.
>>>
>>> Brian Andrus
>>>
>>>
>>>
>>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>>
>>> Hello
>>> We are in the process of procuring one small Lustre filesystem giving us
>>> 120 TB of storage, using Lustre 2.X.
>>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>>> The query we have is: is this configuration enough, or do we need more
>>> OSSes?
>>> The MDS and OSS servers are identical with regards to RAM (64 GB) and
>>> HDD (300 GB).
>>>
>>> Thanks
>>> Majid
>>>
>>>
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>>>
>>>
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>
>>>
>>
>> ___
>> lustre-discuss mailing list
>> lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>
>>
>
>
> --
> --
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.john...@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001   f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite D - San Diego, CA 92117
> <https://maps.google.com/?q=4170+Morena+Boulevard,+Suite+D+-+San+Diego,+CA+92117=gmail=g>
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Andreas,
Thank you for your email.
The interconnect proposed by the vendor is InfiniBand FDR, 56 Gb/s. Each
MDS and OSS will have only one FDR card.
This Lustre system will be used to run Life Sciences/Bioinformatics/genomics
applications.

Will a single OSS be able to handle the FDR interconnect?

On 30 Oct 2017 4:56 p.m., "Dilger, Andreas" <andreas.dil...@intel.com>
wrote:

> First, to answer Amjad's question - the number of OSS nodes you have
> depends
> on the capacity and performance you need.  For 120TB of total storage
> (assume 30x4TB drives, or 20x6TB drives) a single OSS is definitely
> capable of handling this many drives.  I'd also assume you are using 10Gb
> Ethernet (~= 1GB/s), which  a single OSS should be able to saturate (at
> either 40MB/s or 60MB/s per data drive with RAID-6 8+2 LUNs).  If you want
> more capacity or bandwidth, you can add more OSS nodes now or in the future.
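>
> (Worked out from those numbers: with 30x4TB drives in three RAID-6 8+2 LUNs
> there are 3 x 8 = 24 data drives; at ~40MB/s per drive that is roughly
> 960MB/s, about what a single 10GbE link (~1GB/s) can carry, so one OSS can
> keep that network saturated.)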
>
> As Ravi mentioned, with a single OSS and MDS, you will need to reboot the
> single server in case of failures instead of having automatic failover, but
> for some systems this is fine.
>
> Finally, as for whether Lustre on a single MDS+OSS is better than running
> NFS on a single server, that depends mostly on the application workload.
> NFS is easier to administer than Lustre, and will provide better small file
> performance than Lustre.  NFS also has the benefit that it works with every
> client available.
>
> Interestingly, there are some workloads that users have reported to us
> where a single Lustre OSS will perform better than NFS, because Lustre does
> proper data locking/caching, while NFS has only close-to-open consistency
> semantics, and cannot cache data on the client for a long time.  Any
> workloads where there are multiple writers/readers to the same file will
> just not function properly with NFS.  Lustre will handle a large number of
> clients better than NFS.  For streaming IO loads, Lustre is better able to
> saturate the network (though for slower networks this doesn't really make
> much difference).  Lustre can drive faster networks (e.g. IB) much better
> with LNet than NFS with IPoIB.
>
> And of course, if you think your performance/capacity needs will increase
> in the future, then Lustre can easily scale to virtually any size and
> performance you need, while NFS will not.
>
> In general I wouldn't necessarily recommend Lustre for a single MDS+OSS
> installation, but this depends on your workload and future plans.
>
> Cheers, Andreas
>
> On Oct 30, 2017, at 15:59, E.S. Rosenberg <esr+lus...@mail.hebrew.edu>
> wrote:
> >
> > Maybe someone can answer this in the context of this question: is there
> any performance gain over classic filers when you are using only a single
> OSS?
> >
> > On Mon, Oct 30, 2017 at 9:56 AM, Ravi Konila <ravibh...@gmail.com>
> wrote:
> > Hi Majid
> >
> > It is better to go for HA for both OSS and MDS. You would need two MDS
> nodes and two OSS nodes (identical configuration).
> > Also use latest Lustre 2.10.1 release.
> >
> > Regards
> > Ravi Konila
> >
> >
> >> From: Amjad Syed
> >> Sent: Monday, October 30, 2017 1:17 PM
> >> To: lustre-discuss@lists.lustre.org
> >> Subject: [lustre-discuss] 1 MDS and 1 OSS
> >>
> >> Hello
> >> We are in the process of procuring one small Lustre filesystem giving us
> 120 TB of storage, using Lustre 2.X.
> >> The vendor has proposed only 1 MDS and 1 OSS as a solution.
> >> The query we have is: is this configuration enough, or do we need
> more OSSes?
> >> The MDS and OSS servers are identical with regards to RAM (64 GB) and
> HDD (300 GB).
> >>
> >> Thanks
> >> Majid
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Intel Corporation
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] 1 MDS and 1 OSS

2017-10-30 Thread Amjad Syed
Hello
We are in the process of procuring one small Lustre filesystem giving us 120
TB of storage, using Lustre 2.X.
The vendor has proposed only 1 MDS and 1 OSS as a solution.
The query we have is: is this configuration enough, or do we need more
OSSes?
The MDS and OSS servers are identical with regards to RAM (64 GB) and HDD
(300 GB).

Thanks
Majid
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Rhel 7 or centos 7 for lustre mds and oss

2017-10-18 Thread Amjad Syed
Hello
We are in the process of purchasing a new Lustre filesystem for our site that
will be used for life sciences and genomics.
We would like to know whether we should buy a RHEL license or go with CentOS.
We will be storing and using DNA samples for analysis here.

Thanks
Amjad
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre client fails to boot

2017-10-17 Thread Amjad Syed
Hello,
Lustre newbie here.
We are using Lustre 1.8.7 on RHEL 5.4.
Due to some issues with our Lustre filesystem, the client hangs on reboot;
it hangs at the "kernel alive and mapping" screen.
We tried single-user mode, but that fails as well.
Is there any way we can remove the entry in /etc/fstab without having to
use rescue mode or push a customized image to the client?
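
For reference, one way this is usually handled on RHEL 5 without rescue media
(a sketch, not tested here): at the GRUB menu, edit the kernel line and append
init=/bin/bash, then from the resulting shell:

  mount -o remount,rw /
  vi /etc/fstab      # comment out the Lustre mount line, or add the noauto option
  sync
  reboot -f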

Thanks
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Recovering data from failed Lustre file system

2017-10-12 Thread Amjad Syed
Hello,
We have Lustre 1.8.7 running on storage provided by a vendor.
The storage array crashed and we have had some data loss.
What is the best way to back up the remaining data from a client?
scp and rsync from a client crash with a Lustre client OSC read error.
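
One workaround sketch, assuming the unreadable objects sit on identifiable
OSTs: deactivate those OSCs on the client so reads fail with an error instead
of crashing the copy, then rsync whatever is still readable (device numbers
are illustrative and come from lctl dl):

  lctl dl | grep osc            # find the device number of the damaged OST's OSC
  lctl --device 7 deactivate    # repeat for each damaged OST
  rsync -a /lustre/ /backup/lustre/   # files with lost objects are reported as errors and skipped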
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre clients and Lustre servers (MDS/OSS) operating system requirements?

2017-10-08 Thread Amjad Syed
Hello,
We have an existing HPC system running Lustre 1.8.7 on RHEL 5.4.
The Lustre servers (MDS and OSS) are all running RHEL 5.4.
The Lustre clients (HPC compute nodes) are also running RHEL 5.4.
Now management has decided to upgrade the compute nodes to RHEL 7,
but they do not want to upgrade the OS of the Lustre servers, which is still
RHEL 5.4.
So the question is: will this configuration work, with the Lustre clients on
RHEL 7 and the Lustre servers all on RHEL 5.4?

Thanks.
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Directory based quota on lustre 1.8.7

2016-10-04 Thread Amjad Syed
Hello,

We have Lustre 1.8.7. We want to create a directory in this Lustre
filesystem and assign a quota of 10 TB for all users. Is this possible? If
not, can someone suggest an alternative?
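
One possible alternative sketch for 1.8, assuming group quotas are acceptable:
put the directory under a dedicated group and set a group quota (names and
paths are illustrative; block limits on 1.8 are in KB):

  chgrp -R projgrp /lustre/bigdir
  chmod g+s /lustre/bigdir                  # new files inherit the group
  lfs quotacheck -ug /lustre                # one-time scan to enable quota accounting
  lfs setquota -g projgrp --block-hardlimit 10737418240 /lustre   # 10 TB expressed in KB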

Thanks,
Amjad
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Getting Monthly file usage report on lustre 1.8.7

2016-09-25 Thread Amjad Syed
Hello,
We have Lustre 1.8 running. We would like a monthly report on
filesystem usage, i.e. how much space has been used each month.

Any tools that can do this?
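
A minimal sketch, assuming a monthly cron job and that lfs df output is
enough (mount point and log file names are illustrative):

  # /etc/cron.monthly/lustre-usage
  #!/bin/sh
  { date; lfs df -h /lustre; } >> /var/log/lustre-usage.log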

Thanks,
Amjad
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Lustre OSS and clients on same physical server

2016-07-13 Thread Amjad Syed
Hello,

I am new to Lustre and am currently doing a small POC for my organization.

I would like to understand whether a Lustre OSS server and a Lustre client can
be the same physical server,

and whether this setup can be used in production.


Thanks
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[Lustre-discuss] Inactive OSTS

2014-03-31 Thread Amjad Syed
Hello,
I am a newbie to Lustre.
We are using Lustre version 1.8.7 with 1 MDS and 3 OSSes. The Lustre clients
are running 1.8.9.
Due to some issues we had to switch off our Lustre file system. Now that we
have switched it back on, the MDS and OSTs are inactive.

What is the best way to make the OSTs active? The current state is:

 cat /proc/fs/lustre/lov/lustre-mdtlov/target_obd
0: lustre-OST0000_UUID INACTIVE
1: lustre-OST0001_UUID INACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID INACTIVE
6: lustre-OST0006_UUID INACTIVE
7: lustre-OST0007_UUID INACTIVE
8: lustre-OST0008_UUID INACTIVE
9: lustre-OST0009_UUID INACTIVE
10: lustre-OST000a_UUID INACTIVE
11: lustre-OST000b_UUID INACTIVE
12: lustre-OST000c_UUID INACTIVE
13: lustre-OST000d_UUID INACTIVE
14: lustre-OST000e_UUID INACTIVE
15: lustre-OST000f_UUID INACTIVE
16: lustre-OST0010_UUID INACTIVE
17: lustre-OST0011_UUID INACTIVE
18: lustre-OST0012_UUID INACTIVE
19: lustre-OST0013_UUID INACTIVE
20: lustre-OST0014_UUID INACTIVE
21: lustre-OST0015_UUID INACTIVE
22: lustre-OST0016_UUID INACTIVE
23: lustre-OST0017_UUID INACTIVE


 cat /proc/fs/lustre/mds/lustre-MDT0000/recovery_status
status: INACTIVE

Thanks
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Activation Inactive OST

2013-12-21 Thread Amjad Syed
Hello,

My apologies if this is a simple question, but I am a newbie to Lustre and
trying to debug a mount issue.

I have the Lustre MDT at 1.8.7 and clients at 1.8.9.

The OSTs are up but not active.

 lctl dl
   0 UP mgs MGS MGS 7
   1 UP mgc MGC10.129.1.111@o2ib 8b175398-7e96-12ff-78ba-eb735bfdd319 5
   2 UP mdt MDS MDS_uuid 3
   3 UP lov lustre-mdtlov lustre-mdtlov_UUID 4
   4 UP mds lustre-MDT0000 lustre-MDT0000_UUID 8
   5 UP osc lustre-OST0000-osc lustre-mdtlov_UUID 5
   6 UP osc lustre-OST0001-osc lustre-mdtlov_UUID 5
   7 UP osc lustre-OST0002-osc lustre-mdtlov_UUID 5
   8 UP osc lustre-OST0003-osc lustre-mdtlov_UUID 5
   9 UP osc lustre-OST0004-osc lustre-mdtlov_UUID 5
  10 UP osc lustre-OST0005-osc lustre-mdtlov_UUID 5
  11 UP osc lustre-OST0006-osc lustre-mdtlov_UUID 5
  12 UP osc lustre-OST0007-osc lustre-mdtlov_UUID 5
  13 UP osc lustre-OST0008-osc lustre-mdtlov_UUID 5
  14 UP osc lustre-OST0009-osc lustre-mdtlov_UUID 5
  15 UP osc lustre-OST000a-osc lustre-mdtlov_UUID 5
  16 UP osc lustre-OST000b-osc lustre-mdtlov_UUID 5
  17 UP osc lustre-OST000c-osc lustre-mdtlov_UUID 5
  18 UP osc lustre-OST000d-osc lustre-mdtlov_UUID 5
  19 UP osc lustre-OST000e-osc lustre-mdtlov_UUID 5
  20 UP osc lustre-OST000f-osc lustre-mdtlov_UUID 5
  21 UP osc lustre-OST0010-osc lustre-mdtlov_UUID 5
  22 UP osc lustre-OST0011-osc lustre-mdtlov_UUID 5
  23 UP osc lustre-OST0012-osc lustre-mdtlov_UUID 5
  24 UP osc lustre-OST0013-osc lustre-mdtlov_UUID 5
  25 UP osc lustre-OST0014-osc lustre-mdtlov_UUID 5
  26 UP osc lustre-OST0015-osc lustre-mdtlov_UUID 5
  27 UP osc lustre-OST0016-osc lustre-mdtlov_UUID 5
  28 UP osc lustre-OST0017-osc lustre-mdtlov_UUID 5

cat /proc/fs/lustre/lov/lustre-mdtlov/target_obd
0: lustre-OST0000_UUID INACTIVE
1: lustre-OST0001_UUID INACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID INACTIVE
6: lustre-OST0006_UUID INACTIVE
7: lustre-OST0007_UUID INACTIVE
8: lustre-OST0008_UUID INACTIVE
9: lustre-OST0009_UUID INACTIVE
10: lustre-OST000a_UUID INACTIVE
11: lustre-OST000b_UUID INACTIVE
12: lustre-OST000c_UUID INACTIVE
13: lustre-OST000d_UUID INACTIVE
14: lustre-OST000e_UUID INACTIVE
15: lustre-OST000f_UUID INACTIVE
16: lustre-OST0010_UUID INACTIVE
17: lustre-OST0011_UUID INACTIVE
18: lustre-OST0012_UUID INACTIVE
19: lustre-OST0013_UUID INACTIVE
20: lustre-OST0014_UUID INACTIVE
21: lustre-OST0015_UUID INACTIVE
22: lustre-OST0016_UUID INACTIVE
23: lustre-OST0017_UUID INACTIVE

So the question is: how can I activate the INACTIVE OSTs?
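
A sketch of what we understand the manual activation to be, assuming the OSTs
are mounted and reachable from the MDS (device numbers come from the lctl dl
output above):

  lctl --device 5 activate      # lustre-OST0000-osc
  lctl --device 6 activate      # lustre-OST0001-osc, and so on per OSC device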

Thanks
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Quotas on Lustre File system

2013-08-28 Thread Amjad Syed
Hello,

I am a newbie to the Lustre file system.
In our data centre we have 2 Lustre file systems (a smaller 40 TB one and a
larger 200 TB one).
We use our Lustre file systems to perform I/O for life sciences and
bioinformatics applications.

The vendor has decided to mount home directories on the smaller Lustre file
system (40 TB) and has also installed the bioinformatics applications on this
smaller Lustre FS.

The larger Lustre FS will only have large data sets used by end users.

The question: on the smaller Lustre FS we are planning to implement quotas for
end users, with a 10 GB limit on home directories.

Is using lquota the only way, or can we also use the traditional Linux method
of adding usrquota in /etc/fstab?

If both methods can be used, what are the pros and cons of each?
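
For reference, a sketch of what the Lustre-side approach would look like on
1.8 (mount point and user name are illustrative; limits are given in KB, so
10485760 KB is roughly 10 GB):

  lfs quotacheck -ug /home_lustre           # one-time scan to enable quota accounting
  lfs setquota -u someuser --block-softlimit 9437184 --block-hardlimit 10485760 /home_lustre
  lfs quota -u someuser /home_lustre        # verify the limits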

Sincerely,
Amjad
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss