Glad to help!
On Thu, Apr 11, 2024 at 12:03 PM Jannek Squar
wrote:
>
> There must have been a problem with the connection, the same line is now
> working again. Thanks for your help.
>
> On 11/04/2024 20:54, Jeff Johnson wrote:
> > Works fine for me. Make sure you don't have
> Fachbereich Informatik
> Arbeitsbereich Wissenschaftliches Rechnen
>
> Bundesstraße 45a
> D-20146 Hamburg
>
> Tel: +49 40 460094-219
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://
A LU ticket and patch for lnetctl or for me being an under-caffeinated
idiot? ;-)
On Wed, Jan 10, 2024 at 12:06 PM Andreas Dilger wrote:
>
> It would seem that the error message could be improved in this case? Could
> you file an LU ticket for that with the reproducer below, and ideally along
np1s0np0
Lots more to test and verify but the original mailing list submission
was total pilot error on my part. Apologies to all who spent cycles
pondering this nothingburger.
On Tue, Jan 9, 2024 at 7:45 PM Jeff Johnson
wrote:
>
> Howdy intrepid Lustrefarians,
>
> While starting d
Howdy intrepid Lustrefarians,
While starting down the debug rabbit hole I thought I'd raise my hand
and see if anyone has a few magic beans to spare.
I cannot get lnet (via lnetctl) to init a o2iblnd interface on a
RoCEv2 interface.
Running `lnetctl net add --net ib0 --if enp1s0np0` results in
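Since the thread later resolves as pilot error, a minimal sketch of the usual fix in this situation, assuming the problem is the --net argument: for the o2iblnd driver, --net takes an LNet network type such as o2ib, not an ib0-style interface name, and the RoCE device goes in --if:

```shell
# Hedged sketch: o2iblnd networks are named o2ib (optionally numbered,
# e.g. o2ib0, o2ib1); the RoCEv2 device itself is passed via --if.
lnetctl net add --net o2ib --if enp1s0np0

# Confirm the network interface came up
lnetctl net show --net o2ib
```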
--
--
Jeff Johnson
Co-Founder
Aeon Computing
jeff.john...@aeonco
application.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
>
>
>
>
>
>
>
back()) event type 2, status -5, desc
> 9a39ad668000
>
> Does anyone have any ideas about what could be causing this?
>
> Thanks,
> Alastair.
Nothing better than sliding in at the last moment to steal all the glory ;-)
—Jeff
On Wed, Sep 27, 2023 at 07:10 Jan Andersen wrote:
> Hi Jeff,
>
> Yes, that was it! Things are working beautifully now - big thanks.
>
> /jan
>
> On 27/09/2023 15:07, Jeff Johnson
Any chance the firewall is running?
You can use `lctl ping ipaddress@lnet` to check if you have functional lnet
between machines. Example `lctl ping 10.0.0.10@tcp`
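A minimal sketch of both checks together, assuming firewalld and Lustre's default TCP port of 988 (the address is a placeholder):

```shell
# Check for firewall rules that could block LNet (Lustre's socklnd
# listens on TCP port 988 by default), then test LNet reachability.
firewall-cmd --list-all || iptables -L -n
lctl ping 10.0.0.10@tcp
```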
—Jeff
On Wed, Sep 27, 2023 at 05:35 Jan Andersen wrote:
> However, it is still timing out when I try to mount on the oss. This is
ejandro Aguilar Sierra
> LANOT, ICAyCC, UNAM
>
> lustre:mgsnode=10.34.0.103@tcp
>
> lustre:version=1
>
> lustre:flags=98
>
> lustre:index=0
>
> lustre:fsname=lustre
>
> lustre:svname=lustre:OST
>
>
>
> I have 51G for /dev/vda
>
> df -H /dev/vda2
>
> Filesystem
>> step 6 Generate a persistent hostid on the machine, if one does not
>> already exist. This is needed to help protect ZFS zpools against
>> simultaneous imports on multiple servers. For example:
>>
>> hid=`[ -f /etc/hostid ] && od -An -tx /etc/hostid|sed
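The quoted command is cut off mid-line; the following is a sketch of the same step, assuming the OpenZFS `zgenhostid` utility is available (`ensure_hostid` is a name chosen here for illustration, not from the manual):

```shell
# Ensure a persistent non-zero hostid exists so ZFS can detect a zpool
# being imported on a second server. The file argument defaults to the
# standard /etc/hostid location.
ensure_hostid() {
    file=${1:-/etc/hostid}
    # Read the existing hostid as 8 hex digits, if the file exists.
    hid=$([ -f "$file" ] && od -An -tx4 "$file" | tr -d ' \n')
    if [ -z "$hid" ] || [ "$hid" = "00000000" ]; then
        zgenhostid -o "$file"   # assumption: OpenZFS zgenhostid is installed
    fi
    od -An -tx4 "$file" | tr -d ' \n'
}
```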
7] LustreError: 223535:0:(super25.c:183:lustre_fill_super())
> llite: Unable to mount : rc = -19
>
>
On Wed, Jun 21, 2023 at 11:21 AM Mike Mosley
wrote:
> Jeff,
>
> At this point we have the OSS shut down. We were coming back from a full
> outage and so we are trying to get the MDS up before starting to bring up
> the OSS.
>
> Mike
>
> On Wed, Jun 21, 2023 at 2:15 PM
>> Inode count: 2247671504
>> Block count: 1404931944
>> Reserved block count: 70246597
>> Free blocks: 807627552
>> Free inodes: 2100036536
>> First block: 0
>> Block size: 4096
>> Fragment size: 4096
>> Reserved GDT blocks: 1024
>> Blocks per group: 20472
>> Fragments per group: 20472
>> Inodes per group: 32752
>
t3sas_cm0: log_info(0x3112011a):
> originator(PL), code(0x12), sub_code(0x011a)
> May 9 13:44:34 sphnxoss47 kernel: md: super_written gets error=-5
> May 9 13:44:34 sphnxoss47 kernel: md/raid:md8: Disk failure on dm-55,
> disabling device.
> May 9 13:44:34 sphnxoss47 kernel: md:
h that when we write
> something with lusterfs then it can be shown in block device??
> Can share command??
:59:41 oss03 kernel: sd 15:0:0:92: [sddy] 34863054848 4096-byte
> logical blocks: (142 TB/129 TiB)
> Jul 6 07:59:41 oss03 kernel: sd 15:0:0:92: [sddy] Write Protect is off
> Jul 6 07:59:41 oss03 kernel: sd 15:0:0:92: [sddy] Write cache: enabled,
> read cache: enabled, supports DPO
>> is the exact same one used for all the other OSTs in the system.
>>
>>
>>
>> mkfs.lustre --reformat --mkfsoptions="-t ext4 -T huge" --ost
>> --fsname=local --index=0051 --param ost.quota_type=ug
>> --mountfsoptions='errors=remount-ro,extents,mballoc' --mgsnode=10.0.0.3@tcp
>> --mgsnode=10.0.0.1@tc
>>
e cluster.
>
>
>
> We called our DDN sales person who gave us a non-answer and has refused to
> call us back.
>
>
>
> How are other people dealing with this?
>
>
>
> sam
>
> Heath
> Jun 2 00:02:44 hostname kernel: LustreError:
> 226253:0:(genops.c:397:class_newdev()) Skipped 887 previous similar messages
>
> Does anyone have any ideas?
>
> Thanks,
> Alastair.
gards,
>>
>> Simon
or: Lustre, HPC
> Scientific Computing Group, Thomas Jefferson National Accelerator Facility
ter Operations.
> Maryland Advanced Research Computing Center (MARCC)
> Johns Hopkins University
> jas...@jhu.edu
> rhelversion:7.4
>
> lfs --version
> lfs 2.12.0
>
> And this is a fresh install. So is there any other possibility to show the
> complete zpool lun has been allocated for lustre alone.
>
> Thanks,
> ANS
>
>
>
> On Tue, Jan 1, 2019 at 11:44 AM Jeff Johns
: Monday, April 30, 2018 11:49 AM
> Cc: lustre-discuss <lustre-discuss@lists.lustre.org>
> Subject: Re: [lustre-discuss] rhel 7.5
>
> On Mon, Apr 30, 2018 at 10:09 AM, Jeff Johnson <
> jeff.john...@aeoncomputing.com> wrote:
> > RHEL 7.5 support comes in Lustre 2.
> On Tuesday 14 November 2017 18:39:49 Jeff Johnson wrote:
> > Harald,
> >
> > As long as your new servers and clients all have the same settings in
> their
> > config files as your currently running configuration you should be fine.
> >
> > --Jeff
lients and add qdr servers and clients or the other
> way
> around?
>
> Thanks in advance
> Harald
>
assumption?
> Or will the 2.10 rpms work on Centps 7.2?
>
> thanks,
> -john c
ients: Debian + kernel 4.2 + Lustre 2.8
> IB: ConnectX-3 FDR
>
ad issues in which the robinhood server consumed
>> more than the total amount of 32 CPU cores on the robinhood server (with
>> 128 G RAM) and would functionally hang the robinhood server. The issue
>> was solved for me by changing to MySQL-5.6.35. It was the "sort"
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Intel Corporation
>
>> Could such a setup be done? It seems that would be a better use case for
> >> something like GPFS or Gluster, but being a die-hard lustre enthusiast,
> >> I want to at least show it could be done.
> >>
> >>
> >> Thanks in advance,
> >>
> >>
Without seeing your entire command it is hard to say for sure but I would make
sure your concurrency option is set to 8 for starters.
--Jeff
Sent from my iPhone
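As a sketch, a full lnet_selftest run with the concurrency raised to 8 looks roughly like this (the NIDs are placeholders, and the lnet_selftest module must be loaded on every node involved):

```shell
export LST_SESSION=$$              # lst commands are tied to a session ID
lst new_session rw_test
lst add_group clients 192.168.1.10@tcp
lst add_group servers 192.168.1.20@tcp
lst add_batch bulk
lst add_test --batch bulk --concurrency 8 \
    --from clients --to servers brw read size=1M
lst run bulk
lst stat clients servers &         # sample throughput while it runs
sleep 30; kill $!
lst end_session
```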
> On Feb 5, 2017, at 11:30, Jon Tegner wrote:
>
> Hi,
>
> I'm trying to use lnet selftest to evaluate network
d here
> > struct rdma_cm_id *rdma_create_id(struct net *net,
> > ^
> > cc1: all warnings being treated as errors
> > make[7]: ***
> > [/root/rpmbuild/BUILD/lustre-2.8.0/lnet/klnds/o2iblnd/o2iblnd.o] Error 1
> > make[6]: *** [/root/rpmbuild/BUILD/lustre-2
--Jeff
thank you,
Eduardo
2013/10/17 Jeff Johnson jeff.john...@aeoncomputing.com
Hola Eduardo,
How are the OSTs connected to the OSS (SAS, FC, Infiniband SRP)?
Are there any non-Lustre errors in the dmesg output of the OSS?
Block devices error on the OSS (/dev/sd?)?
If you are losing [scsi,sas
2013/10/17 Jeff Johnson jeff.john...@aeoncomputing.com
Hola Eduardo,
How are the OSTs connected to the OSS (SAS, FC, Infiniband SRP)?
Are there any non-Lustre errors in the dmesg output of the OSS?
Block devices error on the OSS (/dev/sd?)?
If you are losing [scsi,sas,fc,srp] connectivity
/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds
--Jeff
both the 1.8.7 and 2.1.6 LFSs
simultaneously from the clients over tcp.
I understand that newer server code is friendlier to older client code
but how new? 2.1.x tree or do I have to be in 2.3 tree to get this ability?
Thanks..
; or equivalent work experience
Development experience preferably with Lustre, LNET, Ethernet and TCP/IP
Relocation to San Diego, CA
Physical Demands / Work Environment
Ability and desire to work as part of a geographically-distributed team
?
many other options on the market and ended up here.
--Jeff
with a non
Whamcloud, stock kernel (plus headers, devel, etc). Then compile/install
the OFED version of your choice. Once you have that you can build Lustre
from source where it will compile against OFED and the installed kernel.
--Jeff
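The sequence described above can be sketched as follows; the kernel and OFED paths are placeholders for whatever your distribution and OFED install actually use:

```shell
# Build Lustre from source against the stock kernel headers plus an
# external OFED stack (paths are illustrative).
cd lustre-release
sh autogen.sh
./configure --with-linux=/usr/src/kernels/$(uname -r) \
            --with-o2ib=/usr/src/ofa_kernel/default
make rpms
```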
On 12/7/12 9:34 AM, Dilger, Andreas wrote:
I've been using Lustre for years with my home MythTV (Linux PVR) setup.
Nerd. :)
.To everyone
else in the world, have a great weekend!
Megan Larko
Hewlett-Packard
Megan,
LNet pings aren't the same as TCP/IP or UDP pings. An LNet ping ('lctl ping') would
need to touch an active lnet instance on the target address. I don't think you
can bind lnet to a pacemaker virtual IP but I'll let someone smarter than me on
this list confirm or correct me.
In any event an
Whamcloud, Inc.
Principal Lustre Engineerhttp://www.whamcloud.com/
1% /mnt/mdt
Any feedback is appreciated!
--Jeff
--
--
Jeff Johnson
Partner
Aeon Computing
jeff dot johnson at aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x101 f: 858-412-3845
4905 Morena Boulevard, Suite 1313 - San Diego, CA 92117
), but
regardless we should have this work completed in the near future.
. Applying the patches manually reveals several hunk failures. I
require a kernel more recent than 2.6.18-274.3.1 for non-Lustre
related issues.
Thanks in advance.
29, 2012 at 11:40 PM, Johann Lombardi joh...@whamcloud.com wrote:
On Thu, Mar 29, 2012 at 11:30:03PM -0700, Jeff Johnson wrote:
Does anyone know the most recent kernel (RHEL/CentOS) that can be
successfully patched and compiled against the current Lustre 1.8 git
source tree? I attempted 2.6.18
[800a089a] keventd_create_kthread+0x0/0xc4
[80032792] kthread+0x0/0x132
[8005dfa7] child_rip+0x0/0x11
for enabling failover via tunefs.lustre are for OSTs
and I want to be sure that there isn't a different procedure for the MDS
since it can only be active/passive.
Thanks,
--Jeff
, it's the same procedure as with an OST.
cliffw
On Tue, Jun 14, 2011 at 12:06 PM, Jeff Johnson
jeff.john...@aeoncomputing.com
mailto:jeff.john...@aeoncomputing.com wrote:
Greetings,
I am attempting to add mds failover operation to an existing v1.8.4
filesystem. I have heartbeat
to mount read-only as ext4 but
any attempt at reading the extended attributes with getfattr fails.
Thanks,
--Jeff
it was never really explained or resolved. It appears that these
controllers, like small children, have tantrums and fall apart. A power cycle
clears the condition.
Not the best controller for an OSS.
--Jeff
---mobile signature---
Jeff Johnson - Aeon Computing
jeff.john...@aeoncomputing.com
if there are any RAID errors. It might be a network problem as well.
Morning is coming and one of the developers will likely respond to this with
more suggestions.
--Jeff
On Dec 20, 2010, at 23:13
related that
are causing the system to reboot in addition to Lustre complaining that it
can't find the OST storage device.
Others here on the list will likely give you a more detailed answer. The
storage device is the place i would look first.
--Jeff
On Dec 20, 2010, at 22:27, Daniel Raj danielraj2...@gmail.com wrote:
Hi Jeff,
Thanks for your reply
Storage information :
DL380G5 == OSS + 16GB Ram
OS== SFS G3.2-2 + centos 5.3
Greetings..
Is it possible that the below error can be derived from a client that
has not been rebooted or had lustre kernel mods reloaded during a time
when a few test file systems were built and mounted?
LustreError: 12967:0:(ldlm_lib.c:1914:target_send_reply_msg()) @@@ processing
error
On 12/6/10 3:55 PM, Oleg Drokin wrote:
Hello!
On Dec 6, 2010, at 6:50 PM, Jeff Johnson wrote:
Previous test incarnations of this filesystem were built where ost name
was not assigned (e.g.: OST) and was assigned upon first mount and
connection to the mds. Is it possible that some clients