I can confirm that this setup works (ZFS-MGS/MDT or LDISKFS-MGS/MDT), and
I used a CentOS 6.4 and the Lustre packages from
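In case it is useful, formatting such a target typically looks like the
following (a sketch only; pool, device, and fsname are made up, and
--backfstype selects zfs or ldiskfs):

$ mkfs.lustre --fsname=testfs --mgs --mdt --index=0 \
      --backfstype=zfs lustre-mdt0/mdt0 /dev/sdb
$ mount -t lustre lustre-mdt0/mdt0 /mnt/mdt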
There are no official Debian packages for Lustre 2.3/2.4/2.5.
The instructions on http://wiki.lustre.org/index.php/Debian_Install
still work for 2.3/2.4/2.5 with some tiny tricks. You can either
switch to the supported RH kernel and use them in Debian, so you can
Forgot to mention: I have built Debian Wheezy packages, which
are available at:
On Mon, Nov 25, 2013 at 05:48:06PM +0200, E.S. Rosenberg wrote:
Since on Linux we are mostly a Debian shop, we'd like to stick with
Debian for our
The current kernel version in Ubuntu 14.04 LTS is 3.13.0-24-generic;
there are still open issues for 3.12 to be solved
(https://jira.hpdd.intel.com/browse/LU-4416) before it can be merged
into master. If you check out master from git.whamcloud.com and
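For reference, grabbing master usually amounts to this (the repository
path below is the usual lustre-release tree; adjust if yours differs):

$ git clone git://git.whamcloud.com/fs/lustre-release.git
$ cd lustre-release
$ git checkout master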
I have pushed an updated version of lshowmount in which warnings are
fixed (mostly strcat -> strncat and sprintf -> snprintf), as well as
other issues. This is a very cool and useful tool which I was not aware
of before. I tested the "-l -v -e" parameter combinations on MDT/MGS
and OSS, and it works so far.
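That is, invocations along the lines of (just the combinations named
above):

$ lshowmount -l -v -e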
ster. We have a few webservers running Debian that need to access
the storage. I followed the procedure for Ubuntu 14 given by Thomas
Stibor last year. I could successfully compile the binaries and modules
after some digging. He also left an already compiled Lustre 2.7.63 and
modules
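For what it's worth, the build steps on a Debian/Ubuntu client box look
roughly like this (a sketch, assuming a checked-out source tree; the
package list is approximate):

$ sudo apt-get install build-essential libtool pkg-config \
      module-assistant debhelper libreadline-dev libsnmp-dev quilt
$ sh autogen.sh
$ ./configure --disable-server   # client-only build for the webservers
$ make debs                      # produces the .deb packages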
Remove in debian/lustre-dev.install the line
-debian/tmp/usr/lib/*.so.* usr/lib
and it will work.
@@ -1,6 +1,5 @@
08:50:03PM, Dilger, Andreas wrote:
> On Nov 25, 2016, at 04:27, Thomas Stibor <t.sti...@gsi.de> wrote:
> > Remove in debian/lustre-dev.install the line
> > -debian/tmp/usr/lib/*.so.* usr/lib
> > and it will work.
> > @@ -1,6 +1,5 @@
On DEB distros there is a similar problem, due to conflicts between the
(old) staged Lustre modules and e.g. the newly installed modules. The
result is that the staged modules are loaded first, and then the loader
tries to load the remaining/missing new modules and fails. For DEB
distros, see JIRA: https://jira.hpdd.intel.com/browse/LU-5718
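A sketch of the cleanup I would try, assuming the stale copies are the
in-kernel staging modules shipped with the distro kernel (the paths are
assumptions and may differ on your system):

# list the Lustre/LNet modules the loader can see
$ find /lib/modules/$(uname -r) -name 'lustre*.ko*' -o -name 'lnet*.ko*'
# remove the stale staging copies, then rebuild the dependency map
$ sudo rm -rf /lib/modules/$(uname -r)/kernel/drivers/staging/lustre
$ sudo depmod -a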
What seems to work as a quick fix (for older versions) is to set the
parameter max_pages_per_rpc=64.
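On a client that would be e.g. (a sketch; osc.* matches every OSC
device, so narrow the pattern to one filesystem if you prefer):

$ lctl set_param osc.*.max_pages_per_rpc=64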
As written in https://jira.hpdd.intel.com/browse/LU-5718,
the issue is resolved; however, for upcoming version 2.10.0
We use a similar approach with the TSM Lustre copytool.
First the data is archived to a TSM server, then one can do the
following (placeholders in angle brackets):

$ lfs hsm_set --exists --archived --archive-id <ID> <FILE>
$ lfs hsm_release <FILE>

The file now exists as a released file in Lustre and is
seamlessly retrieved when it is accessed.
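For illustration, a full round trip might look like this (file name and
archive id are made up; the plain read at the end triggers the implicit
restore):

$ lfs hsm_archive --archive 1 /lustre/testfile   # copytool ships it to TSM
$ lfs hsm_state /lustre/testfile                 # verify: exists archived
$ lfs hsm_release /lustre/testfile               # free the data in Lustre
$ md5sum /lustre/testfile                        # read triggers restore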