Hi,

If everything is connected through SAS JBODs and controllers, you could
probably run one OST on each server and get better performance that way. With
both servers able to reach the same SAS drives, you could also configure
failover in case one server goes down.

You can forget about failover if you are using SATA drives: they are
single-ported, so they cannot be shared between two hosts the way
dual-ported SAS drives can.
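A minimal sketch of what that dual-OSS layout could look like, assuming a
combined MGS/MDS reachable at mds1@tcp and two OSS nodes oss1/oss2 on the
shared JBOD (all hostnames, NIDs, and device paths here are hypothetical):

    # On the MDS: format and mount a combined MGS/MDT.
    mkfs.lustre --fsname=lfs0 --mgs --mdt --index=0 /dev/mapper/mdt0
    mount -t lustre /dev/mapper/mdt0 /mnt/mdt0

    # On oss1: format OST0, naming oss2 as its failover partner.
    # oss2 does the same for OST1 with --index=1 and --failnode=oss1@tcp.
    mkfs.lustre --fsname=lfs0 --ost --index=0 \
        --mgsnode=mds1@tcp --failnode=oss2@tcp /dev/mapper/ost0
    mount -t lustre /dev/mapper/ost0 /mnt/ost0

If oss1 then dies, oss2 can mount /dev/mapper/ost0 over the shared SAS
path and clients fail over to the oss2 NID on their own.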

On Mon, Oct 30, 2017 at 3:04 PM, Amjad Syed <[email protected]> wrote:

> The vendor has proposed a single MDT (4 x 1.2 TB drives) in a RAID 10
> configuration.
> The OSTs will be RAID 6, and two OSTs are proposed.
>
>
> On Mon, Oct 30, 2017 at 7:55 PM, Ben Evans <[email protected]> wrote:
>
>> How many OSTs are behind that OSS? How many MDTs behind the MDS?
>>
>> From: lustre-discuss <[email protected]> on behalf
>> of Brian Andrus <[email protected]>
>> Date: Monday, October 30, 2017 at 12:24 PM
>> To: "[email protected]" <[email protected]>
>> Subject: Re: [lustre-discuss] 1 MDS and 1 OSS
>>
>> Hmm. That is an odd one at first glance...
>>
>> However, IF you are planning on growing and adding OSSes/OSTs, this is
>> not a bad way to get started and get used to how everything works. It is
>> basically single-stripe storage.
>>
>> If you are not planning on growing, I would lean towards gluster on 2
>> boxes. I do that often, actually. A single MDS/OSS has zero redundancy
>> unless something is being done at the hardware level, and that would only
>> help with availability.
>> NFS is quite viable too, but you would be splitting the available storage
>> across 2 boxes.
>>
>> Brian Andrus
>>
>>
>>
>> On 10/30/2017 12:47 AM, Amjad Syed wrote:
>>
>> Hello
>> We are in the process of procuring a small Lustre filesystem giving us
>> 120 TB of storage, using Lustre 2.X.
>> The vendor has proposed only 1 MDS and 1 OSS as a solution.
>> The query we have is: is this configuration enough, or do we need more
>> OSSes?
>> The MDS and OSS servers are identical with regard to RAM (64 GB) and
>> HDD (300 GB).
>>
>> Thanks
>> Majid
>>
>
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
