Thanks Ben,
is there a public document where I could have found this limit?
Regards, Götz
On Tue, Jul 21, 2015 at 4:33 PM, Ben Evans bev...@cray.com wrote:
128 TB is the current limit
You can force more than that, but it looks like you won't need to.
-Ben Evans
-Original Message-
1) You mention they are on the same host. Are they on separate partitions
already?
As you have failover configured, I'm assuming that both servers can see the
storage; in that case this will not be too difficult (depending on your
failover software, of course) if they have separate partitions.
Hi ...,
One of our customers has a 3 x 240-disk SAN storage array and would like to
convert it to Lustre.
They have around 150 workstations and around 200 compute (render) nodes.
The file sizes they generally work with are:
1 to 1.5 million files (images) of 10-20 MB each,
and a few thousand
I was wondering because, while looking at the man page for mkfs.lustre, I saw
the option below:
--replace
        Used to initialize a target with the same --index as a previously used
        target if the old target was permanently lost for some reason
        (e.g. multiple disk failure or
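A rough sketch of how that option would be used (the device, index, fsname and
MGS NID below are made up; check mkfs.lustre(8) for your version):

  # re-create a lost OST, reusing its old index 12
  mkfs.lustre --ost --replace --index=12 \
      --fsname=lustre --mgsnode=mgs01@tcp0 /dev/sdX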
It is available in the Lustre Operations Manual, Table 1.1
https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#idp162992
- Justin
On 7/21/15, 11:02 AM, lustre-discuss on behalf of Götz Waschk
lustre-discuss-boun...@lists.lustre.org on behalf of
Hi Scott,
The 3 SAN storages (240 disks each) have their own 3 NAS headers (NAS
appliances).
However, even with 240 10K RPM disks and RAID 50, each NAS header only provides
around 1.2-1.4 GB/s.
There is no clustered file system; each NAS header has its own file system.
It uses some
Note the other email also seemed to suggest that multiple NFS exports of Lustre
wouldn't work. I don't think that's the case, as we have this sort of setup at
a number of our customer sites without particular trouble. In the abstract, I
could see the possibility of some caching errors between
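For what it's worth, the NFS side of such a gateway is just a normal export of
the Lustre client mount; a minimal sketch, with the path, network and options
here being assumptions rather than a tested config:

  # /etc/exports on a gateway node that has Lustre mounted at /lustre
  /lustre  192.168.1.0/24(rw,async,no_root_squash,fsid=101)

  exportfs -ra

Pinning the same fsid on every gateway just keeps the exported filesystem id
consistent for clients that move between them.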
I’ve seen CTDB + Samba deployed at several sites running Lustre. It’s stable in
my experience, and straightforward to get installed and set up, although the
process is time-consuming. The most significant hurdle is integrating with AD
and maybe load balancing for the CTDB servers (RR DNS is the
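For reference, the cluster-specific configuration itself is fairly small; a
minimal sketch with placeholder addresses and share path:

  # /etc/ctdb/nodes  (one private IP per gateway node)
  10.0.0.11
  10.0.0.12

  # /etc/samba/smb.conf  (only the clustering-related lines)
  [global]
      clustering = yes
  [projects]
      path = /lustre/projects

The AD join and idmap configuration is the part that usually eats the time.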
Indivar,
Since your CIFS or NFS gateways operate as Lustre clients, there can be issues
with running multiple NFS or CIFS gateway machines front-ending the same Lustre
filesystem. As Lustre clients there are no issues in terms of file locking, but
the NFS and CIFS caching and multi-client file
Currently there is no direct connection between Lustre layout and HDF5 file
layout. The only option is RAID-0 striping across OST objects with a fixed
stripe size. If HDF5 is aware of this stripe size and can take advantage of it,
that is great.
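For example, a fixed stripe size is set on the Lustre side with lfs setstripe;
the values and path below are purely illustrative:

  # new files in this directory get 8 stripes of 4 MiB each
  lfs setstripe -c 8 -S 4M /lustre/project/hdf5_output

HDF5 could then align its own allocations to the same 4 MiB boundary, e.g. via
H5Pset_alignment().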
There is a project that has started to
I believe this is described in the Lustre Manual, but the basic process to
split a combined MDS+MGS into a separate MGS is to format a new MGS device,
then copy all the files from CONFIGS on the old combined MDT+MGT device into
the new MGS. See the manual for full details.
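Very roughly, and with device names and mount points invented purely for
illustration (do follow the manual's procedure rather than this sketch), it
looks like:

  # format the new standalone MGS
  mkfs.lustre --mgs /dev/new_mgs_dev

  # mount both devices as plain ldiskfs and copy the config logs over
  mount -t ldiskfs /dev/old_mdt_dev /mnt/mdt
  mount -t ldiskfs /dev/new_mgs_dev /mnt/mgs
  cp -a /mnt/mdt/CONFIGS/* /mnt/mgs/CONFIGS/
  umount /mnt/mdt /mnt/mgs

After that the old MDT is told it no longer hosts the MGS (tunefs.lustre
--nomgs) and the targets are pointed at the new MGS NID (tunefs.lustre
--mgsnode=...) before being restarted.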
Cheers, Andreas
On
Having only 3 OSS will limit the performance you can get, and having so many
OSTs on each OSS will give sub-optimal performance. 4-6 OSTs/OSS is more
reasonable.
It also isn't clear why you want RAID-60 instead of just RAID-10.
Finally, for Linux clients it is much better to use direct Lustre
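For reference, the native client mount itself is a one-liner once the Lustre
client modules are installed; the MGS NID and fsname here are placeholders:

  mount -t lustre mgs01@tcp0:/lustre /mnt/lustre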
Hi ...,
Currently, failover and recovery take a very long time in our setup, almost
20 minutes. We would like to make it as fast as possible.
I have two queries regarding this:
1.
===
The MGS and MDT are on the same host.
We do however
Hello,
I had a quick question about recreating OSTs... If I drain all the files off
an OST, can I just reformat it and have it added back into Lustre, in essence
reusing the same index? The server wouldn't change. Or would I have to
preserve its configuration files?
w/r,
Kurt
I know you'd need to keep the config files, directory structures, etc. How
much of that info you need to keep around, I'm not 100% sure.
To get the MGS to accept it again, you may have to unmount and run
tunefs.lustre --writeconf on all the targets.
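In outline, with all clients unmounted and the targets stopped first (device
names are placeholders):

  # run on each server, for every MDT/OST device of this filesystem
  tunefs.lustre --writeconf /dev/sdX

  # then restart: MGS/MDT first, OSTs afterwards

The writeconf regenerates the configuration logs on the MGS, which is why it
needs to be done for all targets, not just the reformatted OST.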
-Ben Evans
-Original Message-
Dear Lustre experts,
I'm in the process of installing a new Lustre file system based on
version 2.5. What is the size limit for an OST when using ldiskfs? Can
I format a 60 TB device with ldiskfs?
Regards,
Götz Waschk