Glad to help!
On Thu, Apr 11, 2024 at 12:03 PM Jannek Squar
wrote:
>
> There must have been a problem with the connection, the same line is now
> working again. Thanks for your help.
>
> On 11/04/2024 20:54, Jeff Johnson wrote:
> > Works fine for me. Make sure you don't
Naturwissenschaften
> Fachbereich Informatik
> Arbeitsbereich Wissenschaftliches Rechnen
>
> Bundesstraße 45a
> D-20146 Hamburg
>
> Tel: +49 40 460094-219
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
A LU ticket and patch for lnetctl or for me being an under-caffeinated
idiot? ;-)
On Wed, Jan 10, 2024 at 12:06 PM Andreas Dilger wrote:
>
> It would seem that the error message could be improved in this case? Could
> you file an LU ticket for that with the reproducer below, and ideally along
> with the client application.
Lots more to test and verify but the original mailing list submission
was total pilot error on my part. Apologies to all who spent cycles
pondering this nothingburger.
On Tue, Jan 9, 2024 at 7:45 PM Jeff Johnson
wrote:
Howdy intrepid Lustrefarians,
While starting down the debug rabbit hole I thought I'd raise my hand
and see if anyone has a few magic beans to spare.
I cannot get LNet (via lnetctl) to initialize an o2iblnd interface on a
RoCEv2 interface.
Running `lnetctl net add --net ib0 --if enp1s0np0` results in
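(The error output is cut off above.) For anyone landing here later: LNet network names are an LND type plus an optional index (`tcp0`, `o2ib1`, ...), and RoCEv2 interfaces still use the o2ib LND despite their Ethernet-style names. A minimal bring-up sketch under that assumption, reusing the interface name from the message:

```shell
# Load LNet and configure the core before adding networks
modprobe lnet
lnetctl lnet configure

# "o2ib0" is a valid net name; "ib0" is not an LND type
lnetctl net add --net o2ib0 --if enp1s0np0
lnetctl net show
```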
--
Jeff Johnson
Co-Founder
Aeon Computing
>
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
_callback()) event type 2, status -5, desc
> 9a39ad668000
>
> Does anyone have any ideas about what could be causing this?
>
> Thanks,
> Alastair.
Nothing better than sliding in at the last moment to steal all the glory ;-)
—Jeff
On Wed, Sep 27, 2023 at 07:10 Jan Andersen wrote:
> Hi Jeff,
>
> Yes, that was it! Things are working beautifully now - big thanks.
>
> /jan
>
> On 27/09/2023 15:07, Jeff Johnson wrote:
Any chance the firewall is running?
You can use `lctl ping ipaddress@lnet` to check if you have functional lnet
between machines. Example `lctl ping 10.0.0.10@tcp`
—Jeff
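Spelled out, the two checks above (a firewall in the way, and LNet reachability) might look like this on an EL system; firewalld and the example NID are assumptions:

```shell
# Is a firewall filtering the Lustre ports? (firewalld assumed here)
systemctl status firewalld

# Show this node's local NIDs, then LNet-ping the peer
lctl list_nids
lctl ping 10.0.0.10@tcp
```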
On Wed, Sep 27, 2023 at 05:35 Jan Andersen wrote:
> However, it is still timing out when I try to mount on the oss. This is
>
> --
> Alejandro Aguilar Sierra
> LANOT, ICAyCC, UNAM
>
> lustre:mgsnode=10.34.0.103@tcp
> lustre:version=1
> lustre:flags=98
> lustre:index=0
> lustre:fsname=lustre
> lustre:svname=lustre:OST
>
> I have 51G for /dev/vda
>
> df -H /dev/vda2
>
> Filesystem
>>
>> step 6 Generate a persistent hostid on the machine, if one does not
>> already exist. This is needed to help protect ZFS zpools against
>> simultaneous imports on multiple servers. For example:
>>
>> hid=`[ -f /etc/hostid ] && od -An -tx /etc/hostid
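The quoted step 6 command is cut off above; the idea is to write a random 32-bit value to /etc/hostid so ZFS can detect a pool being imported on two servers at once. A small Python sketch of the same idea, assuming the 4-byte little-endian on-disk format that the SPL/ZFS hostid code reads:

```python
import os
import random
import struct


def make_hostid_bytes(hostid=None):
    """Return the 4-byte little-endian representation of a 32-bit hostid."""
    if hostid is None:
        hostid = random.getrandbits(32)  # random hostid if none supplied
    return struct.pack("<I", hostid & 0xFFFFFFFF)


def write_hostid(path="/etc/hostid"):
    # Only create the file if one does not already exist (per step 6 above).
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(make_hostid_bytes())
```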
> [76783.613457] LustreError: 223535:0:(super25.c:183:lustre_fill_super())
> llite: Unable to mount : rc = -19
>
>
--
Jeff Johnson
Co-Founder
Aeon Computing
jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001 f: 858-412-3845
m: 619-204-9061
-Jeff
On Wed, Jun 21, 2023 at 11:21 AM Mike Mosley
wrote:
> Jeff,
>
> At this point we have the OSS shut down. We were coming back from a full
> outage and so we are trying to get the MDS up before starting to bring up
> the OSS.
>
> Mike
>
> On Wed, Jun 21, 2023 at 2:
>> Inode count: 2247671504
>> Block count: 1404931944
>> Reserved block count: 70246597
>> Free blocks: 807627552
>> Free inodes: 2100036536
>> First block: 0
>> Block size: 4096
>> Fragment size: 4096
>> Reserved GDT blocks: 1024
>> Blocks per group: 20472
>> Fragments per group: 20472
>> Inodes per group: 32752
> Cheers, Andreas
> May 9 13:44:34 sphnxoss47 kernel: mpt3sas_cm0: log_info(0x3112011a):
> originator(PL), code(0x12), sub_code(0x011a)
> May 9 13:44:34 sphnxoss47 kernel: md: super_written gets error=-5
> May 9 13:44:34 sphnxoss47 kernel: md/raid:md8: Disk failure on dm-55,
> disabling device.
>
Lustre file system on a block device such that when we write
> something with Lustre then it can be seen on the block device?
> Can you share the command?
>
> ellis
> Jul 6 07:59:41 oss03 kernel: sd 15:0:0:92: [sddy] 34863054848 4096-byte
> logical blocks: (142 TB/129 TiB)
> Jul 6 07:59:41 oss03 kernel: sd 15:0:0:92: [sddy] Write Protect is off
> Jul 6 07:59:41 oss03 kernel: sd 15:0:0:92: [sddy] Write cache: enabled,
> read cache: enabled, supports DPO
d below, which other than the index
>> is the exact same one used for all the other OSTs in the system.
>>
>>
>>
>> mkfs.lustre --reformat --mkfsoptions="-t ext4 -T huge" --ost
>> --fsname=local --index=0051 --param ost.quota_type=ug
>> --mountfsoptions='errors=remount-ro,extents,mballoc' --mgsnode=10.0.0.3
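For readability, a commented restatement of the quoted command (the values are the poster's own, not recommendations; the device argument and anything after `--mgsnode=10.0.0.3` are truncated in the original):

```shell
mkfs.lustre --reformat \
    --mkfsoptions="-t ext4 -T huge" \
    --ost --fsname=local --index=0051 \
    --param ost.quota_type=ug \
    --mountfsoptions='errors=remount-ro,extents,mballoc' \
    --mgsnode=10.0.0.3
# --ost                     : format the device as an object storage target
# --index=0051              : this OST's index within the "local" filesystem
# --param ost.quota_type=ug : user/group quotas (an older-release option)
# --mgsnode                 : NID of the management server
```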
>
>
>
> We called our DDN sales person who gave us a non-answer and has refused to
> call us back.
>
>
>
> How are other people dealing with this?
>
>
>
> sam
>
>
o2ib
>
>
>
> Heath
ar messages
> Jun 2 00:02:44 hostname kernel: LustreError:
> 226253:0:(genops.c:397:class_newdev()) Skipped 887 previous similar messages
>
> Does anyone have any ideas?
>
> Thanks,
> Alastair.
>> Regards,
>>
>> Simon
#winning ;-) happy Friday
On Fri, Apr 12, 2019 at 11:41 AM Kurt Strosahl wrote:
> Thanks, that was exactly what I needed to dig out the bad modules!
>
>
>
>
> ------
> *From:* Jeff Johnson
> *Sent:* Friday, April 12, 2019 12:00 PM
> *To:* K
> Kurt J. Strosahl
> System Administrator: Lustre, HPC
> Scientific Computing Group, Thomas Jefferson National Accelerator Facility
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/
a Center Operations.
> Maryland Advanced Research Computing Center (MARCC)
> Johns Hopkins University
> jas...@jhu.edu
>
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lu
> rhelversion:7.4
>
> lfs --version
> lfs 2.12.0
>
> And this is a fresh install. So is there any other way to show that the
> complete zpool LUN has been allocated for Lustre alone?
>
> Thanks,
> ANS
>
>
>
> On Tue, Jan 1, 2019 at 11:44 AM Jeff Johns
: Monday, April 30, 2018 11:49 AM
> Cc: lustre-discuss
> Subject: Re: [lustre-discuss] rhel 7.5
>
> On Mon, Apr 30, 2018 at 10:09 AM, Jeff Johnson <
> jeff.john...@aeoncomputing.com> wrote:
> > RHEL 7.5 support comes in Lustre 2.10.4. Only path I can think of off
Jeff,
>
> thanks for your answer.
> Can I be sure that there is no autoprobing which sets any configuration
> differently?
> The options given for mkfs.lustre and in /etc/modprobe.d/lustre.conf
> will be the same, is this enough?
>
> Best
> Harald
>
> On Tuesday 14 Novem
clients or the other
> way
> around?
>
> Thanks in advance
> Harald
>
>
巍才)
>
> +86 18600622522
>
> Dell HPC Product Technologist Greater China
>
>
correct assumption?
> Or will the 2.10 rpms work on CentOS 7.2?
>
> thanks,
> -john c
>>
>>> Thanks much!
>>>
>>> Best,
>>>
>>> ---Steve
>>>
)
> Clients: Debian + kernel 4.2 + Lustre 2.8
> IB: ConnectX-3 FDR
>
e robinhood server consumed
>> more than the total amount of 32 CPU cores on the robinhood server (with
>> 128 G RAM) and would functionally hang the robinhood server. The issue
>> was solved for me by changing to MySQL-5.6.35. It was the "sort" command
>> in robinhood that was not
Dilger
> Lustre Principal Architect
> Intel Corporation
> >>
> >>
> >> Could such a setup be done? It seems that would be a better use case for
> >> something like GPFS or Gluster, but being a die-hard lustre enthusiast,
> >> I want to at least show it could be done.
> >>
> >>
> >> Thanks
Without seeing your entire command it is hard to say for sure but I would make
sure your concurrency option is set to 8 for starters.
--Jeff
Sent from my iPhone
> On Feb 5, 2017, at 11:30, Jon Tegner wrote:
>
> Hi,
>
> I'm trying to use lnet selftest to evaluate network performance on a tes
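The concurrency suggestion above fits into an lnet_selftest session roughly like this; the NIDs, group names, and I/O size are placeholders:

```shell
#!/bin/bash
# Hypothetical lnet_selftest session; replace NIDs with your own.
export LST_SESSION=$$
lst new_session rw_test
lst add_group clients 10.0.0.20@tcp
lst add_group servers 10.0.0.10@tcp
lst add_batch bulk_rw
lst add_test --batch bulk_rw --concurrency 8 \
    --from clients --to servers brw read size=1M
lst run bulk_rw
lst stat servers        # interrupt with Ctrl-C when done watching
lst end_session
```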
.0/lnet/klnds/o2iblnd/o2iblnd.h:74:0,
> > from
> > /root/rpmbuild/BUILD/lustre-2.8.0/lnet/klnds/o2iblnd/o2iblnd.c:42:
> > /usr/src/kernels/3.10.0-514.el7.x86_64/include/rdma/rdma_cm.h:172:20:
> > note: declared here
> > struct rdma_cm_id *rdma_create_id(struct net
Also consider: that is only 20KB of data per LNet RPC, assuming a 1MB RPC. To
move 20KB files at 200MB/sec into a non-striped Lustre directory, are you
using EDR for LNet? 100Gb Ethernet?
--Jeff
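A quick worked version of the arithmetic above, using decimal KB/MB for round numbers (the 1MB RPC size is the message's assumption, not a fixed constant):

```python
def rpcs_per_second(throughput_bps, file_size_bytes):
    """One LNet RPC per small file: RPC rate = aggregate throughput / file size."""
    return throughput_bps / file_size_bytes


def rpc_payload_utilization(file_size_bytes, rpc_capacity_bytes=1_000_000):
    """Fraction of an RPC's capacity actually carrying file data."""
    return file_size_bytes / rpc_capacity_bytes


# 200 MB/s of 20KB files: one RPC per file
rate = rpcs_per_second(200e6, 20_000)   # 10,000 RPCs per second
util = rpc_payload_utilization(20_000)  # 0.02 -> only 2% of each RPC used
```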
r to let the RAID
>>> Controllers do it (as in Option B)
>>>
>>> 2. Will Option B allow us to have lesser CPU/RAM than Option A?
>>>
>>> Regards,
>>>
>>>
>>> Indivar Nair
>> Chelsio has released a white paper in which they compare Lustre RDMA over
>> 40 Gb Ethernet and FDR IB
>>
>>
>> http://www.chelsio.com/wp-content/uploads/resources/Lustre-Over-iWARP-vs-IB-FDR.pdf
>>
>> where they claim comparable performance of both.
>>
OR directly on its local filesystem without having
>
> to go through the network?
>
>
> Thanks!
>
> Teng
>
return 0;
>
> which suggests that device removal events are not handled. Is there a
> plan to fix this?
ndering if anyone has any idea what this means.
>
> Thanks,
> Murshid Azman.
>
tre-OST000a: Client 0afb2e4c-d870-47ef-c16f-4d2bce6dabf9 (at
> 10.2.64.4@o2ib) refused reconnection, still busy with 9 active RPCs
>
> this only happens with clients that are reading a lot of small files
> (~100MB each) in the same OST.
>
> thank you,
>
> Eduardo
> 2013/10/17 Jeff Johnson
>
> Hola Eduardo,
>
> How are the OSTs connected to the OSS (SAS, FC, Infiniband SRP)?
>
urrieta
> Unidad de Cómputo
> Instituto de Ciencias Nucleares, UNAM
> Ph. +52-55-5622-4739 ext. 5103
>
/dev/sda /dev/sdc \
/dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds
--Jeff
mount both the 1.8.7 and 2.1.6 LFSs
simultaneously from the clients over tcp.
I understand that "newer server code is friendlier to older client code"
but how new? 2.1.x tree or do I have to be in 2.3 tree to get this ability?
Thanks..
equivalent work experience
Development experience preferably with Lustre, LNET, Ethernet and TCP/IP
Relocation to San Diego, CA
Physical Demands / Work Environment
Ability and desire to work as part of a geographically-distributed team
t; Does that make any sense?
>
>
>
se people
have also used many other options on the market and ended up here.
--Jeff
start with a non
Whamcloud, stock kernel (plus headers, devel, etc). Then compile/install
the OFED version of your choice. Once you have that you can build Lustre
from source where it will compile against OFED and the installed kernel.
--Jeff
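As a rough sketch of the build flow described above (the kernel and OFED paths are illustrative and vary by distribution and OFED version):

```shell
# Build Lustre from source against an external OFED
cd lustre-release
sh autogen.sh
./configure --with-linux=/usr/src/kernels/$(uname -r) \
            --with-o2ib=/usr/src/ofa_kernel/default
make rpms
```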
On 12/7/12 9:34 AM, Dilger, Andreas wrote:
> I've been using Lustre for years with my home MythTV (Linux PVR) setup.
Nerd. :)
ve a great weekend!
>
> Megan Larko
> Hewlett-Packard
Megan,
LNet pings aren't the same as TCP/IP or UDP pings. An LNet ping (`lctl ping`)
needs to reach an active LNet instance at the target address. I don't think you
can bind LNet to a Pacemaker virtual IP, but I'll let someone smarter than me on
this list confirm or correct me.
In any event an l
gs, each on a private 1 GbE network. We have
> a bonded 10 GbE network for the LNET.
>
> Thanks,
>
> Shawn
>
>
>
464191% /mnt/mdt
Any feedback is appreciated!
--Jeff
ut
> regardless we should have this work completed in the near future.
On Thu, Mar 29, 2012 at 11:40 PM, Johann Lombardi wrote:
> On Thu, Mar 29, 2012 at 11:30:03PM -0700, Jeff Johnson wrote:
>> Does anyone know the most recent kernel (RHEL/CentOS) that can be
>> successfully patched and compiled against the current Lustre 1.8 git
>> source tree? I at
. Applying the patches manually reveals several hunk failures. I
require a kernel more recent than 2.6.18-274.3.1 for non-Lustre
related issues.
Thanks in advance.
0xfe/0x132
[] request_module+0x0/0x14d
[] child_rip+0xa/0x11
[] keventd_create_kthread+0x0/0xc4
[] kthread+0x0/0x132
[] child_rip+0x0/0x11
, it's the same procedure as with an OST.
cliffw
On Tue, Jun 14, 2011 at 12:06 PM, Jeff Johnson
<mailto:jeff.john...@aeoncomputing.com>> wrote:
Greetings,
I am attempting to add mds failover operation to an existing v1.8.4
filesystem. I have heartbeat/stonith configu
r enabling failover via tunefs.lustre are for OSTs
and I want to be sure that there isn't a different procedure for the MDS
since it can only be active/passive.
Thanks,
--Jeff
to mount read-only as ext4 but
any attempt at reading the extended attributes with getfattr fails.
Thanks,
--Jeff
was never really explained or resolved. It appears that these
controllers, like small children, have tantrums and fall apart. A power cycle
clears the condition.
Not the best controller for an OSS.
--Jeff
---mobile signature---
Jeff Johnson - Aeon Computing
jeff.john...@aeoncomputing.com
On
there are any RAID errors. It might be a network problem as well.
Morning is coming and one of the developers will likely respond to this with
more suggestions.
--Jeff
On Dec 20, 2010, at 23:13
-Jeff
On Dec 20, 2010, at 22:27, Daniel Raj wrote:
> Hi Jeff,
>
>
> Thanks for your reply
>
> Storage information :
>
>
> DL380G5 == OSS + 16GB Ram
> OS== SFS
or OS related that
are causing the system to reboot in addition to Lustre complaining that it
can't find the OST storage device.
Others here on the list will likely give you a more detailed answer. The
storage device is the place i would look first.
--Jeff
On 12/6/10 3:55 PM, Oleg Drokin wrote:
> Hello!
>
> On Dec 6, 2010, at 6:50 PM, Jeff Johnson wrote:
>> Previous test incarnations of this filesystem were built where ost name
>> was not assigned (e.g.: OST) and was assigned upon first mount and
>> connection to t
Greetings..
Is it possible that the below error can be derived from a client that
has not been rebooted or had lustre kernel mods reloaded during a time
when a few test file systems were built and mounted?
LustreError: 12967:0:(ldlm_lib.c:1914:target_send_reply_msg()) @@@ processing
error (-19