Hi All,
I just wanted to check if there would be problems with mounting a
Lustre filesystem as ext3 and reading/writing to it. Would there be any
corruption or other unwanted effects to the data integrity?
If we encounter any issues with Lustre, a
On May 13, 2009, at 11:13 AM, Nick Jennings wrote:
I just wanted to check if there would be problems with mounting a
Lustre filesystem as ext3 and reading/writing to it. Would there be any
corruption or other unwanted effects to the data integrity?
If we encounter any issues with Lustre, a
Hi all,
We are pleased to announce that Lustre 2.0 Alpha2 is available for download.
This is our second step in a series of milestone-based pre-releases as we
move towards Lustre 2.0 GA. New milestone releases will be planned for
every 4-6 weeks until GA.
Alpha Milestone Criteria:
* Tested
Andreas Dilger wrote:
Note that the OST threads are already bound to a particular NUMA node.
This means that the pages used for the IO are CPU-local and are not
accessed from a remote CPU's cache.
Indeed, I have seen in lustre/ptlrpc/service.c the following piece of code:
#if
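As a side note, one quick way to confirm from userspace what a given set of
service threads is actually allowed to run on is to print their CPU affinity;
a rough sketch, where the ll_ost thread-name pattern and the use of
pgrep/taskset are just an illustration, not anything specific to service.c:
# print the allowed-CPU list for each OST service thread
for pid in $(pgrep ll_ost); do taskset -pc "$pid"; done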
Oleg Drokin wrote:
Hello!
On Apr 23, 2009, at 4:29 AM, Ralf Utermann wrote:
kernel: nfs_stat_to_errno: bad nfs status return value: 45
Do you mean 43 instead of 45? Please see the thread from this week about the
OS/X client + NFS export on this same list.
It is really 45, not the 43 from the OS/X
On Wed, 2009-05-13 at 11:13 +0200, Nick Jennings wrote:
I just wanted to check if there would be problems with mounting a
Lustre filesystem as ext3 and reading/writing to it.
It probably won't mount as ext3 as it's likely not compatible enough
with ext3 any more. ext4 if you have that
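For what it's worth, the on-disk format behind an OST or MDT is ldiskfs,
Lustre's patched ext3/ext4 variant, so if you ever need to look at a target
directly the usual approach is to mount it read-only with the ldiskfs type on
a node that has the Lustre/ldiskfs modules installed. A rough sketch, where
the device and mount point are only examples:
mount -t ldiskfs -o ro /dev/sdb1 /mnt/ost0-inspect
ls /mnt/ost0-inspect
umount /mnt/ost0-inspect
Writing to a target this way, outside of Lustre, can leave the objects out of
sync with the MDS metadata, which is exactly the kind of corruption the
original question worries about.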
We've seen a couple of instances where login/cluster nodes seem completely
unresponsive when in a directory of a Lustre filesystem. When we're in other
non-lustre directories that very same node seems completely responsive. We've
also correlated those events with very high load on the OSS nodes
Hello,
when compiling Lustre 1.8.0 from source against the vanilla kernel 2.6.22.19 on
our Gentoo systems I get the following:
lustre-1.8.0 # modprobe lnet
WARNING: Error inserting libcfs
(/lib/modules/2.6.22.19/kernel/net/lustre/libcfs.ko): Invalid module format
FATAL: Error inserting lnet
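"Invalid module format" generally means the modules were built against headers
for a different kernel than the one that is running. A quick way to check,
using the module path from the error above (the grep is just one way to pull
out the relevant field):
uname -r
modinfo /lib/modules/2.6.22.19/kernel/net/lustre/libcfs.ko | grep vermagic
If the vermagic line does not match the running kernel exactly, rebuild the
modules against the booted kernel tree.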
Dear list,
When I was doing lfs quotacheck on a 200TB Lustre filesystem, the OSSs
crashed one after another. This is the error log from one OSS before the
system crashed.
Has anyone met the same problem?
May 13 20:20:18 boss09 kernel: LustreError:
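For context, the command in question is typically invoked along the lines of
(the mount point and the -ug flags here are just an illustration):
lfs quotacheck -ug /mnt/lustre
It scans the MDT and every OST to rebuild the quota usage tables, so it puts a
heavy I/O load on all OSSs of a 200TB filesystem while it runs.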
Ah yes, of course. Thanks for the reply, I guess I wasn't thinking that
one through completely.
So if, for example, the MDS drive dies, my entire Lustre filesystem is
unrecoverable?
Brian J. Murrell wrote:
On Wed, 2009-05-13 at 11:13 +0200, Nick
Hello!
On May 13, 2009, at 7:53 AM, Ralf Utermann wrote:
What might be useful is if you can reproduce this quickly on as small a set
of Lustre nodes as possible.
Remember your current /proc/sys/lnet/debug value.
On the lustre-client/nfs-server and on the MDS, echo -1 > /proc/sys/lnet/debug,
then do lctl
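A rough sketch of that sequence (the dump path is only an example, and lctl dk
is assumed to be the intended lctl step for dumping the debug buffer):
cat /proc/sys/lnet/debug            # note the current debug mask so it can be restored
echo -1 > /proc/sys/lnet/debug      # enable full debugging
# ... reproduce the NFS access that misbehaves ...
lctl dk /tmp/lustre-debug.log       # dump the kernel debug buffer to a file
Afterwards write the original mask back into /proc/sys/lnet/debug.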
On Wed, 2009-05-13 at 14:53 +0200, Nick Jennings wrote:
Ah yes, of course. Thanks for the reply, I guess I wasn't thinking that
one through completely.
So if, for example, the MDS drive dies, my entire Lustre filesystem is
unrecoverable?
Yes. That is why _reliable_ storage for the MDT is
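That said, the MDT itself can be backed up; a device-level copy while the
target is unmounted is the simplest option. A rough sketch, with the device
and destination paths made up for illustration:
umount /mnt/mdt
dd if=/dev/mapper/mdt_lv of=/backup/mdt.img bs=1M
Restoring such an image onto replacement storage recovers the namespace,
provided the OSTs themselves are intact.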
On Wednesday 13 May 2009 14:44:52 Heiko Schröter wrote:
Answering myself ...
I had grabbed a poisoned kernel tree.
Sorry for the noise.
Regards
Heiko
Hello,
when compiling Lustre 1.8.0 from source against the vanilla kernel 2.6.22.19 on
our Gentoo systems I get the following:
lustre-1.8.0
Hi all,
I see there is a page on the wiki about how to build the Lustre client for
Windows. Is there a prebuilt binary available to test it?
Thank you,
tamas
Oleg Drokin wrote:
[...]
Either Lustre never got any control at all, and your problem is unrelated to
Lustre and caused by something else in your system, or the logging is somewhat
broken. The way to test it is to do ls -la /mnt/lustre (or whatever your
lustre mountpoint is) on the NFS server
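A rough sketch of that test, where the mount point and the dump path are only
examples:
lctl clear                      # empty the kernel debug buffer first
ls -la /mnt/lustre              # do the access on the NFS server
lctl dk /tmp/after-ls.log       # dump whatever Lustre logged for it
If the dump comes back essentially empty, either Lustre never saw the request
or the debug mask is filtering everything out.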
Hello!
On May 13, 2009, at 10:48 AM, Ralf Utermann wrote:
Oleg Drokin wrote:
[...]
Either Lustre never got any control at all, and your problem is unrelated to
Lustre and caused by something else in your system, or the logging is somewhat
broken. The way to test it is to do ls -la
Good Morning Folks,
A quick question on lustre failover as far as OSSs are concerned.
Can failover pairs be in an (for lack of a better phrase) active-
active setup? I have a GPFS background where we would have NSDs
(OSTs) split between two servers -- half the NSDs would be
On Wed, 2009-05-13 at 10:19 -0700, John White wrote:
Good Morning Folks,
A quick question on lustre failover as far as OSSs are concerned.
Can failover pairs be in an (for lack of a better phrase) active-
active setup?
You are not lacking a better phrase. That's exactly the
It is normal for an OSS server pair to serve OSTs from both servers. So
in that sense, it relates back to the GPFS NSD servers.
The difference versus GPFS (where LUNs are active on both servers all
the time, even though one is the primary server) is that the secondary
server does NOT serve
Also note that you will need third-party software to do this failover, unlike
GPFS.
jab
The difference versus GPFS (where LUNs are active on both
servers all the time, even though one is the primary server)
is that the secondary server does NOT serve the OSTs being
served by the
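For completeness, the failover partner of an OST is normally declared when the
target is formatted (or added later with tunefs.lustre); a rough sketch, where
the filesystem name, NIDs and device are all made up for illustration:
mkfs.lustre --fsname=testfs --ost --mgsnode=192.168.0.10@tcp0 --failnode=192.168.0.22@tcp0 /dev/sdb
Clients will then retry the failover NID when the primary OSS stops
responding, but actually moving the OST mount between the two servers is left
to the external HA software mentioned above.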
Hello!
On May 13, 2009, at 8:35 AM, Mag Gam wrote:
I have an application for which I would like to use Lustre as the backing
storage. However, the application (MonetDB) uses mmap(). Would the
application have any problems using Lustre as its backing storage?
There should be no problems in