>
> Lustre Principal Architect
>
> Intel High Performance Data Division
>
>
>
> On 2016/08/11, 20:19, "lustre-discuss on behalf of E.S. Rosenberg" <
> lustre-discuss-boun...@lists.lustre.org on behalf of
> esr+lus...@mail.hebrew.edu> wrote:
>
>
Sorry about spamming the list, but I realize it may be better for subjects
to be split into separate threads.
I started e2fsck --mdsdb six hours ago on an MDT that is 1T in size; am I
being unreasonable in thinking it should have finished by now?
What type of runtimes have you seen?
I shudder to think how
What is the normal amount of time I should expect
e2fsck --mdsdb
to be running (1T MDT)?
(So far it's running quite a few hours)
Thanks,
Eli
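For reference, on ldiskfs-based Lustre of that era the distributed check was a multi-pass procedure, and each pass walks every inode, so multi-hour runs on a well-populated 1T MDT are plausible. A rough sketch of the workflow (device names and database paths are illustrative, not taken from this thread):

```shell
# Legacy lfsck workflow on ldiskfs-backed Lustre (illustrative paths).

# 1. On the MDS: read-only scan that builds the MDS database.
e2fsck -n -v --mdsdb /tmp/mdsdb /dev/mdtdev

# 2. On each OSS: build an OST database against the MDS database.
e2fsck -n -v --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb /dev/ostdev

# 3. On a client: cross-check the MDT and OST databases.
lfsck -n -v --mdsdb /tmp/mdsdb --ostdb /tmp/ostdb /lustre/mount
```

Runtime is dominated by the inode count rather than the raw MDT size, which is why two equally sized MDTs can differ wildly in check time.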
On Thu, Aug 11, 2016 at 12:42 PM, E.S. Rosenberg <esr+lus...@mail.hebrew.edu
> wrote:
Hi all,
Our MDT suffered a kernel panic (which I will post separately), the OSSs
stayed alive but the MDT was out for some time while nodes still tried to
interact with lustre.
So I have several questions:
a. what happens to processes reading/writing during such an event (if they
already have
>
> On May 22, 2016, at 2:38 PM, E.S. Rosenberg wrote:
>
> > Internal functions have changed and, as a result, it is currently not
> compiling. What is the accepted style for fixing these things?
> >
> > Right now I am looking at this error:
> >
Have you tried doing the Lustre install from source?
On Thu, Jun 23, 2016 at 10:45 AM, wrote:
> *Dear Rocks User,*
>
>
>
> Please help in getting correct Lustre Client Package for Rocks 6.2 with
> Kernel 2.6.32-504.16.2.el6.x86_64
>
> Currently I’m trying to install
>> weeks and afterwards it appeared again quite frequently. This happened
>> while there was no change in the installed Lustre versions.
>>
>> Regards,
>> Roland
>>
>>
>> Am 02.06.2016 um 17:39 schrieb Phill Harvey-Smith:
>>
>>> On 0
After (finally) reading this interesting discussion I was left with one
question:
Some of the rules suggested above would imply quite a large number of
stripes as files get truly big; isn't the (logical) upper limit on striping
the number of OSTs you have in the system?
Striping more than the OST
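That is indeed the effective cap: a stripe count larger than the number of OSTs cannot be honored. A minimal sketch, assuming a mounted client and an illustrative directory path:

```shell
# The stripe count (-c) cannot usefully exceed the number of OSTs;
# -1 is shorthand for "stripe across all available OSTs".
lfs setstripe -c -1 /lustre/bigfiles

# Inspect the layout that new files in the directory will inherit:
lfs getstripe /lustre/bigfiles
```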
up sufficiently to be moved into the main tree.
>
> The master and 2.8 clients work up to at least kernel 4.2 and possibly
> later, and are of course up-to-date feature wise.
>
> Cheers, Andreas
>
> On Apr 20, 2016, at 04:29, E.S. Rosenberg <esr+lus...@mail.hebrew.edu
So what version of Lustre is built into kernel 4.5? We are switching distro
soon, at the very least to kernel 4.16 + userspace 2.7, and now would
definitely be the best time to decide on using an even newer kernel.
On Mon, Apr 18, 2016 at 10:28 PM, E.S. Rosenberg <esr+lus...@mail.hebrew.
I thought we fixed that problem?
>
> Cheers, Andreas
>
> > On Apr 18, 2016, at 12:28, E.S. Rosenberg <esr+lus...@mail.hebrew.edu>
> wrote:
> >
Hi all,
Several of our users ran into the "Text file busy" error described in
LU-6232 (and possibly related to LU-4398 and LU-4429).
Has there been any progress on this issue?
I feel pretty dumb when I have to explain this issue to them and how to
work around it.
Thanks,
Eli
Background for
Hi all,
We have a researcher interested in using our cluster who needs to comply
with HIPAA (US legislation on patient-record confidentiality), so I was
wondering: is there anyone here who has experience with being HIPAA
compliant?
And is there an auditing mechanism for lustre that shows who
Hi Eric,
Since we also have some issues with servers accessing files this issue
interests me...
Did you find anything on this?
Thanks,
Eli
On Mon, Nov 30, 2015 at 7:53 PM, Eric Kolb wrote:
> Hello,
>
> Perhaps someone has seen a similar issue? If so any insight would be
>
Hi all,
I'm trying to build lustre 2.7 with kernel 4.1.6 from source, I applied the
patches that appear in:
lustre/kernel_patches/series/3.x-fc18.series
The following patch:
blkdev_tunables-3.7.patch
It seems to add the use of a define that was removed from the kernel a while ago:
BLK_DEF_MAX_SECTORS
As
Which could be easily merged into the mainline kernel git instead of the
current setup of manual patching.
Thanks,
Eli
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
> -Marc
>
>
> D. Marc Stearman
> Lustre Operations Lead
> stearm...@llnl.gov
> 925.423.9670
>
>
>
>
> On Sep 2, 2015, at 4:57 AM, E.S. Rosenberg <esr+lus...@mail.hebrew.edu>
> wrote:
>
> > Hi all,
> >
> > I am seeing an interesting
the
server 'sees'/'knows' is a symlink to the lustre filesystem which lives on
nfs...
Thanks,
Eli
>
> Ed Wahl
> OSC
>
>
> --
> *From:* lustre-discuss [lustre-discuss-boun...@lists.lustre.org] on
> behalf of E.S. Rosenberg [esr+lus...@mail.hebrew.edu]
Hi all,
I am seeing an interesting/annoying problem with lustre and am not really
sure what/where to look.
When a webserver (galaxy using wsgi/apache2) tries to serve (large) files
stored on Lustre, it fails to send the full file and I see the following
errors in syslog:
Sep 2 11:50:17 hm-02
I am puzzled, why would you format an external harddisk as lustre? You
realize lustre is not really aimed at being a single disk filesystem?
Lustre is supposed to be spread over a set of disks (and servers) where you
have disks/servers specifically in charge of storing objects and
disks/servers
There are several articles about the size of the kernel 4.2 update and the
many new/improved features it brings.
I didn't see any mention of Lustre features, I assume because they interest
the article writers less, so I'd like to ask here: what, if anything, does
kernel 4.2 bring us?
Thanks,
Eli
On Wed, Apr 15, 2015 at 9:53 PM, Colin Faber cfa...@gmail.com wrote:
I think in general this is a good idea. As discussed at the OpenSFS board
meeting yesterday, it would be really nice for organizations which are
building custom releases themselves to provide this patch list as a wiki
page
On Fri, Jan 9, 2015 at 12:45 PM, Thierry Lamoureux
thierry.lamour...@noveltis.fr wrote:
Hi guys,
Did one of you have a moment to take a look at my previous message?
As far as I can tell, your previous message is not a complete paste; it
doesn't end at an error and seems to end in the
On Wed, Jan 7, 2015 at 1:42 PM, Dilger, Andreas
andreas.dil...@intel.com wrote:
On 2015/01/07, 4:33 AM, Kilian Cavalotti
kilian.cavalotti.w...@gmail.com wrote:
Bonjour Thierry,
/bin/sh: 1: [: -lt: unexpected operator
I'm pretty sure that's because in Debian, /bin/sh is linked to dash
and the
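For context, dash's `[` builtin reports "unexpected operator" when an unquoted, empty operand (or a bashism) leaves it with a malformed expression; a small reproduction and fix, assuming that is the kind of construct the script hit:

```shell
# With x empty and unquoted, `[ $x -lt 5 ]` expands to `[ -lt 5 ]`,
# which dash rejects with "[: -lt: unexpected operator".
# Quoting the operand and supplying a default keeps the test well-formed:
x=""
if [ "${x:-0}" -lt 5 ]; then
  echo "ok"
fi
# prints "ok"
```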
Re:all
We use lustre client on kernel 3.17 (on Debian), we use the kernel
parts that ship
on kernel.org and compile the client only
Our configure instruction is:
./configure --disable-modules --disable-server --enable-client
HTH,
Eli
On Fri, Dec 19, 2014 at 10:21 PM, Dilger, Andreas
Hmm, that may not have been 100% clear: we compile kernel.org kernels
ourselves and compile Lustre ourselves...
On Mon, Dec 22, 2014 at 3:09 PM, E.S. Rosenberg
esr+lus...@mail.hebrew.edu wrote:
Since we tend to forget this from time to time I just want to do it in
public: Thanks!
And thank you to all the people involved with the lustre project (and
other FOSS).
On Wed, Nov 12, 2014 at 1:04 AM, Scott Nolin scott.no...@ssec.wisc.edu wrote:
Here is my attempt to detail some of the Lustre
On our debian we built 2.4.x like so:
sh autogen.sh
./configure --disable-modules --disable-server --enable-client
--prefix=/path/to/prefix/you/want
The kernel module already ships with 3.11 IIRC, though I don't know what
the Lustre version compatibility of 3.11 is; we use a self-built 3.14
On Wed, Jun 11, 2014 at 2:03 AM, Colin Faber cfa...@gmail.com wrote:
Yes. Many of the various VM products will work just fine with lustre.
However they will not perform as well as native hw.
I think he means that the virtual disk images sit on lustre instead of
netapp or other storage
On Wed, Jun 11, 2014 at 3:57 PM, Andrew Holway andrew.hol...@gmail.com
wrote:
On 11 Jun 2014 14:22, E.S. Rosenberg esr+lus...@mail.hebrew.edu wrote:
Did anyone answer this?
From my (way too basic) understanding, it should be possible to expand an
MDT; after all, the underlying fs is ext4?
On Mon, Jun 2, 2014 at 5:12 PM, Anjana Kar k...@psc.edu wrote:
The MGS/MDT pool and filesystem were created using these commands:
zpool create -f -o
You should probably be trying 2.4.x or 2.5.x available here:
https://wiki.hpdd.intel.com/display/PUB/Lustre+Releases
On Tue, Mar 4, 2014 at 2:40 AM, omar o...@nearinc.com wrote:
Dear community,
Has anyone installed Lustre on openSUSE 12.3 64-bit with kernel
3.7.10-1.28-dektop? If so, could
Wouldn't it make more sense to downgrade/install 2.4.2 in that case?
On Wed, Jan 8, 2014 at 4:58 PM, Oliver Mangold
oliver.mang...@emea.nec.comwrote:
Hello,
we installed Lustre 2.5.0 servers at a customer site and ran into
massive stability problems. Because of this we would like to
We weren't able to use lustre (2.4.2/2.5.x) with kernel 3.12.4 (staging
kernel driver - kernel panics on mount)
After compiling 3.13-rc we have a working system (though it has not yet
been put through its paces except for some simultaneous dd).
On Wed, Jan 8, 2014 at 5:02 PM, E.S. Rosenberg
esr
Is hpdd-discuss forwarded here and vice versa?
Or should I be subscribing to that list too?
Thanks,
Eli
Hi all,
I am not a big git expert, can anyone tell me where I can clone the
2.4.2 source?
I cloned the master but that was 2.5.52 and results in a kernel panic
when I mount using kernel 3.12.4 on Debian
Thanks,
Eli
Since on Linux we are mostly a Debian shop, we'd like to stick with
Debian for our calculation nodes if possible.
So I wanted to ask: are the Lustre 2.2 instructions for Debian more or
less relevant to Lustre 2.4/2.5, or am I going headlong into a tall
brick wall?
Also are newer clients
Also are newer clients
You mean the 3.11 kernel right?
On Mon, Nov 25, 2013 at 6:43 PM, Thomas Stibor t.sti...@gsi.de wrote:
Forgot to mention that: I have built Debian Wheezy packages which
are available at:
http://web-docs.gsi.de/~tstibor/lustre/lustre-builds/
On Mon, Nov 25, 2013 at 05:48:06PM +0200, E.S
As I understand it, in the end 2.4 will not be the stable/long-term
release of Lustre, so I was just wondering if anything more accurate
than Q4 was known yet...
Thanks,
Eli
On Thu, Aug 8, 2013 at 4:44 PM, Phill Harvey-Smith
p.harvey-sm...@warwick.ac.uk wrote:
On 07/08/2013 14:39, Phill Harvey-Smith wrote:
Hi all,
Are there any gotchas that I need to be aware of in upgrading our OSS and
MDS servers to the latest 2.5 Lustre version, will all the clients need
to be
Hi all,
I noticed that lustre was taken into the mainline kernel 3.11 and was
wondering what that means for lustre, will we be able to install
lustre without patching the kernel?
Does this mean that packages for other distros will become a lot easier?
Thanks,
Eli
If there are not enough inodes on the MDT, adding OSTs won't help much.
I don't think you can change the number of inodes without reformatting. You
might want to take a look at
lfs_migrate for how to increase the number of inodes on OSTs.
HTH
p
On 19 June 2013 19:19, E.S. Rosenberg
esr+lus...@mail.hebrew.edumailto:esr
On Fri, Jun 21, 2013 at 7:51 PM, White, Cliff cliff.wh...@intel.com wrote:
From: Teik Hooi Beh th...@thbeh.commailto:th...@thbeh.com
Date: Friday, June 21, 2013 1:29 AM
To: Parinay Kondekar
parinay_konde...@xyratex.commailto:parinay_konde...@xyratex.com
Cc:
In a Lustre ldiskfs file system, all the inodes are allocated on the
MDT and OSTs when the file system is first formatted. The total number
of inodes on a formatted MDT or OST cannot be easily changed, although
it is possible to add OSTs with additional space and corresponding
inodes. Thus, the
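Because the inode count is fixed when a target is formatted, it has to be sized up front; a minimal sketch, with the device, fsname, and bytes-per-inode ratio purely illustrative:

```shell
# --mkfsoptions passes options through to ldiskfs (ext4);
# "-i 2048" requests one inode per 2048 bytes of device space,
# fixing the inode count at format time (values illustrative).
mkfs.lustre --mdt --fsname=testfs --index=0 \
    --mkfsoptions="-i 2048" /dev/sdb
```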
Thanks
On Tue, Jun 18, 2013 at 12:07 PM, Parinay Kondekar
parinay_konde...@xyratex.com wrote:
http://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.50438256_31079
HTH
On 18 June 2013 14:20, E.S. Rosenberg esr+lus...@mail.hebrew.edu wrote
On Thu, Jun 13, 2013 at 3:09 AM, Christopher J. Morrone
morro...@llnl.gov wrote:
Lustre does not manage the individual disks. It sits on top of a
filesystem, either ldiskfs (basically ext4) or zfs (as of Lustre 2.4).
Is ZFS the recommended fs, or just an option?
Doesn't ZFS suffer major
with the EOFS and openSFS boards
to update lustre.org
Sent from my iPhone
On Jun 12, 2013, at 11:16 AM, Christopher J. Morrone
morro...@llnl.gov wrote:
On 06/12/2013 04:59 AM, E.S. Rosenberg wrote:
Is lustre.org not being maintained anymore?
Not as far as I can tell
Is there any up-to-date documentation on Lustre available? I am trying
to read myself in before I try to set up a Lustre cluster, but the
documentation on the website seems to be 2-3 years old or older.
As far as I understood a while ago on IRC, stable Lustre is 1.8.x while
2.x is not considered stable
on lustre.org is still *very* relevant.
Ah, OK; that's because it talks about Q2 2010 as the future, and the
download links there are broken.
Is lustre.org not being maintained anymore?
About releases,
http://lustre.opensfs.org/download-lustre/
HTH
parinay
Thanks,
Eli
On 12 June 2013 15:58, E.S
Hi all,
I just wanted to point out that
https://groups.google.com/forum/?fromgroups#!forum/lustre-discuss-list
is not keeping an archive of the list anymore (for more than a year
already).
Is there another (mailman?) archive somewhere?
Regards,
Eli