Is this normal/by design?
Thanks in advance,
Brian Andrus
7075:0:(obd_mount.c:2042:lustre_fill_super()) Unable to mount (-108)
Any ideas on troubleshooting this would be greatly appreciated.
Brian Andrus
be most helpful.
Brian Andrus
On 4/27/2010 6:10 PM, Oleg Drokin wrote:
Hello!
On Apr 27, 2010, at 7:29 PM, Brian Andrus wrote:
Apr 27 16:15:19 nas-0-1 kernel: LustreError:
4133:0:(ldlm_lib.c:1848:target_send_reply_msg()) @@@ processing error (-107)
r...@810669d35c50 x1334203739385128/t0 o400-?@?:0/0 lens 192
enthusiast,
I want to at least show it could be done.
Thanks in advance,
Brian Andrus
to use something like pacemaker
and drbd to deal with the MGS/MGT. Is this how you approached it?
Brian Andrus
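For anyone sketching that out: a minimal, hedged example of the Pacemaker side, assuming the MGT already lives on a DRBD device at /dev/drbd0 (resource names are hypothetical, and the DRBD promotable-clone and ordering constraints are omitted for brevity):

    # Pacemaker resource that mounts the MGT on the active node;
    # the Filesystem agent passes fstype straight through to mount(8)
    pcs resource create mgs-fs ocf:heartbeat:Filesystem \
        device=/dev/drbd0 directory=/mnt/mgs fstype=lustre \
        op monitor interval=30s

In practice you would also colocate mgs-fs with the DRBD primary and order it after the promote.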
On 2/6/2017 12:58 PM, Vicker, Darby (JSC-EG311) wrote:
Agreed. We are just about to go into production on our next LFS with the
setup described. We had to get past a bug in the MGS
and away
from dkms, so I redesigned my build scripts to use zfs/kmod and dropped
ldiskfs. Certainly, this has made life easier when there are kernel
updates :)
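For what it's worth, dropping ldiskfs from the build can be as simple as flipping the spec conditionals when rebuilding the source RPM. A hedged sketch (the conditional names come from lustre.spec and may differ between releases, so check the spec for yours):

    # rebuild server RPMs with ZFS kmods and without ldiskfs
    rpmbuild --rebuild --with zfs --without ldiskfs lustre-*.src.rpm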
So if there is a choice between the two, what is preferred and why?
Hopefully this doesn't start a war or anything...
Brian Andrus
from a client.
Brian Andrus
Firstspot, Inc.
NVMes each. We can build one ZFS file system across all of them, or we can
separate them into five (which would forgo some of the features of ZFS).
Any prior experience/knowledge/suggestions would be appreciated.
Brian Andrus
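To make the tradeoff above concrete, a hedged sketch of the two layouts (device names are hypothetical):

    # Option A: one pool across all five NVMe devices -- a single OST,
    # and ZFS provides redundancy/striping across the drives
    zpool create ostpool raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
        /dev/nvme3n1 /dev/nvme4n1

    # Option B: five single-device pools -- five OSTs, but no ZFS-level
    # redundancy, which is the feature loss mentioned above
    for i in 0 1 2 3 4; do zpool create ostpool$i /dev/nvme${i}n1; done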
manually
with no change in the results.
Any ideas what may be going on here?
Brian Andrus
Brian Andrus
On 4/28/2017 1:01 PM, Dilger, Andreas wrote:
On Apr 28, 2017, at 12:23, Brian Andrus <toomuc...@gmail.com> wrote:
All,
I just did a new build against the latest (2.9.56_36) source.
I am building with zfs support and kmod. Packages build fine and install fine.
LNet comes up and
lustre 2.9.59_15_g107b2cb built for kmod
Is there something I can do to track this down and hopefully remedy it?
Brian Andrus
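One quick way to start tracking that down is to compare the version of the installed kmod package with what the running kernel actually loaded, e.g.:

    modinfo lustre | grep -i '^version'   # version of the installed kmod
    lctl get_param version                # version the running modules report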
Brian Andrus
On 8/7/2017 7:17 AM, Brian Andrus wrote:
There were actually several:
On an OSS:
[447314.138709] BUG: unable to handle kernel NULL pointer dereference
at 0020
[543262.189674] BUG: unable to handle kernel NULL pointer dereference
at (null)
[16397.115830] BUG: unable to handle kernel NULL pointer dereference
at (null)
On 2 separate clients:
[65404.590906] BUG: unable to handle kernel NULL pointer dereference
at (null)
[72095.972732] BUG: unable to handle kernel paging request at
002029b0e000
Brian Andrus
On 8/4/2017 10:49 AM, Patrick Farrell wrote:
Brian,
What
, it goes
down to ~700MB/s
Is there a bandwidth bottleneck that can occur at the socket level for a
system? This really seems like it.
Brian Andrus
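One hedged way to narrow down where the ceiling sits (interface name hypothetical) is to compare NIC-level counters with what LNet itself sees while the transfer runs:

    ethtool -S eth0 | grep -i drop   # drops/overruns at the Ethernet layer
    lnetctl stats show               # send/recv/drop counts at the LNet layer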
modules, go with CentOS.
If your system is mission critical, or will be, go with RHEL.
Brian Andrus
On 10/18/2017 2:59 AM, Amjad Syed wrote:
Hello
We are in process of purchasing a new lustre filesystem for our site
that will be used for life sciences and genomics.
We would like to know if we should
filesystem is the parallelism among *multiple destination servers*. That's
what makes it so scalable.
So yes, it's not recommended, but it can be done. I did it that way just to
learn lustre long ago.
Brian Andrus
On 11/26/2017 6:41 AM, Amjad Syed wrote:
Hello
I know this is not lustre
me. A vote for that!
Brian Andrus
On 11/29/2017 6:08 PM, Dilger, Andreas wrote:
On Nov 29, 2017, at 15:31, Brian Andrus <toomuc...@gmail.com> wrote:
All,
I have always seen lustre as a good solution for large files and not the best
for many small files.
Recently, I have seen a request fo
I know it may be obvious, but did you 'mkdir /mnt/OST7'?
Brian Andrus
On 11/29/2017 3:09 PM, Scott Wood wrote:
[root@fakeoss4 ~]# mount /mnt/OST7
mount.lustre: increased /sys/block/vde/queue/max_sectors_kb from 1024
to 2147483647
mount.lustre: mount /dev/vde at /mnt/OST7 failed: No such file or directory
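For reference, the usual sequence; the mount point must exist before mount.lustre can use it:

    mkdir -p /mnt/OST7                   # create the mount point first
    mount -t lustre /dev/vde /mnt/OST7   # then mount the OST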
come back".
Brian Andrus
On 11/29/2017 3:09 PM, Scott Wood wrote:
Hi folks,
In an effort to replicate a production environment to do a test
upgrade, I've created a six-server KVM testbed on a CentOS 7.4 host
with CentOS 6 guests. I have four OSSs and two MDSs. I have qcow2
vir
It should be possible.
You will need to update the config (writeconf) to reflect the IP of the
host (failnode or servicenode).
This way, when it reports to the MGS, it is known and will be accepted
into the mix :)
Brian Andrus
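A hedged sketch of that procedure (NID and device path are placeholders); note the tool is tunefs.lustre, not tune2fs:

    # with the target unmounted, on the server that owns it:
    tunefs.lustre --writeconf --erase-params \
        --servicenode=192.168.1.10@tcp /dev/mapper/ost0
    # repeat for every target (MDT first), then remount in the usual
    # order; each target re-registers its config with the MGS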
On 11/28/2017 4:35 PM, E.S. Rosenberg wrote:
Hi everyone,
One
these OSTs available?
I assume it's something like:
- mount OSTs
- tune2fs --writeconf [what?]
Correct?
Thanks again!
Eli
On Wed, Nov 29, 2017 at 2:40 AM, Brian Andrus <toomuc...@gmail.com
<mailto:toomuc...@gmail.com>> wrote:
It should be possible.
You will need to updat
set
of builds at https://downloads.hpdd.intel.com/public/lustre/latest-release/
Does anyone know what is different that needs to be considered for el7.4.1708?
Brian Andrus
will incur overhead, but be safer.
Brian Andrus
On 11/19/2017 10:31 AM, Dauchy, Nathan (ARC-TNC)[CSRA, LLC] wrote:
Greetings,
I'm trying to clarify and confirm the differences between lfs_migrate's use of rsync vs.
"lfs migrate". This is in regards to performance, checksumm
to just have a secondary
system that is running and use DNS to failover or just round-robin users
to balance them across the systems that are up. Certainly easier to
maintain and easier to recover when something happens.
Brian Andrus
On 11/21/2017 4:44 AM, Amjad Syed wrote:
Hello,
We have
of OSTs/MDTs with different fsnames, but
the same mgsnode.
Brian Andrus
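As a hedged illustration, two filesystems sharing one MGS would be formatted along these lines (fsnames, NIDs, and devices are placeholders):

    mkfs.lustre --fsname=fs1 --mdt --index=0 --mgsnode=10.0.0.1@tcp /dev/sdb
    mkfs.lustre --fsname=fs2 --mdt --index=0 --mgsnode=10.0.0.1@tcp /dev/sdc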
On 11/1/2017 6:50 PM, Parag Khuraswar wrote:
Hi Raj,
But I have two file systems, and I think one MGT can be used for two
filesystems. Correct me if I am wrong.
Regards,
Parag
ZFS has that ability.
[Attachment: Metadata Server HA Cluster Simple lowres v1.png]
Brian Andrus
On 11/1/2017 7:08 PM, Parag Khuraswar wrote:
I have two MDS nodes. For HA purposes I specified the --servicenode options
mkfs.lustre --servicenode=10.2.1.204@o2ib
--servicenode=10.2.1.205@o2ib --mgs /dev
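The command above is cut off; in full form it would look something like this (the device path is a placeholder):

    mkfs.lustre --servicenode=10.2.1.204@o2ib \
        --servicenode=10.2.1.205@o2ib \
        --mgs /dev/mapper/mgt   # placeholder device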
it. The node I set up as the server does not drop any packets.
Brian Andrus
On 12/5/2017 9:20 AM, Shawn Hall wrote:
Hi Brian,
Do you have flow control configured on all ports that are on the
network path? Lustre has a tendency to cause packet losses in ways
that performance testing tools don’t because
Raj,
Thanks for the insight.
It looks like it was the buffer size. The rx buffer was increased on the
Lustre nodes and there have been no more dropped packets.
Brian Andrus
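For the record, a hedged sketch of that check and change (interface name is hypothetical):

    ethtool -g eth0           # show current and maximum rx/tx ring sizes
    ethtool -G eth0 rx 4096   # raise the rx ring toward its maximum
    ethtool -a eth0           # and confirm pause/flow control while here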
On 12/5/2017 11:12 AM, Raj wrote:
Brian,
I would check the following:
- MTU size must be same across all the nodes
) with no dropped packets.
Is there something with writes that can cause dropped packets?
Brian Andrus
boxes. I do that often, actually. A single MDS/OSS has zero redundancy
unless something is being done at the hardware level, and that would only
help with availability.
NFS is quite viable too, but you would be splitting the available
storage on 2 boxes.
Brian Andrus
On 10/30/2017 12:47 AM, Amjad Syed
of my OSTs as ZFS and
letting that handle parallel writes across drives within an OSS, which
has performed well.
Brian Andrus
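A hedged sketch of that layout; mkfs.lustre can create the pool itself (fsname, NID, pool, and devices are placeholders):

    # one raidz2 OST spanning the OSS's drives; ZFS handles the
    # intra-OSS striping and redundancy
    mkfs.lustre --fsname=fs1 --ost --index=0 --mgsnode=10.0.0.1@tcp \
        --backfstype=zfs ostpool/ost0 raidz2 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf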
On 10/30/2017 12:04 PM, Amjad Syed wrote:
The vendor has proposed a single MDT ( 4 * 1.2 TB) in RAID 10
configuration.
The OST will be RAID 6 and proposed are 2 OST
rely boot kernel
3.10.0-514.26.2
Brian Andrus
On 1/8/2018 12:26 AM, Scott Wood wrote:
Thanks for the feedback, Riccardo. I understand not all versions are
certified compatible but knowing that some folks have had success
helps build some confidence. I tried building 2.8.0, the latest from
MDTs with larger sizes (still using
zfs as the backing filesystem).
Those are just general starting points that seem to work well for the
installations I have done where I have a little more freedom in the
architecture.
YMMV
Brian Andrus
On 2/6/2018 9:32 AM, E.S. Rosenberg wrote:
Hello
we had used in the past as well. But now they no longer offer it,
so they are in a possible pickle if anything goes terribly south.
Brian Andrus
= %{version}-%{fullrelease}
%endif
Brian Andrus
On 8/9/2018 5:51 PM, Faaland, Olaf P. wrote:
Hi,
What is the reason for naming the package "lustre" if it includes both client and server
binaries, but "lustre-client" if it includes only the client?
Could you share the command line used to do the actual formatting?
Brian Andrus
On 8/19/2018 11:58 PM, ANS wrote:
Dear Team,
I am trying to install Lustre from the pre-built RPMs, and after
installation I am seeing a huge variation in LUN size.
Please find my infra details below:
1) CentOS