Hello,
I am trying to deploy a multi-rail configuration with Lustre 2.10.0 on RHEL 7.3.
My goal is to use both IB interfaces on the OSSes and the client.
I have one client, two OSSes, and one MDS.
My LNet networks are labelled o2ib5 and tcp5, just for my own convenience.
What I did was to modify the configuration
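For reference, a minimal multi-rail sketch with lnetctl (available since Lustre 2.10); the interface names ib0/ib1 are assumptions, and o2ib5 is the network label from the message:

```shell
# Multi-rail sketch (Lustre 2.10+). Interface names ib0/ib1 are assumptions;
# o2ib5 is the network label used in this thread.
modprobe lnet
lnetctl lnet configure
# Attach both IB interfaces to the same LNet network; Multi-Rail then
# spreads traffic across them.
lnetctl net add --net o2ib5 --if ib0,ib1
# Verify, then export the configuration so it can be reloaded at boot.
lnetctl net show --verbose
lnetctl export > /etc/lnet.conf
```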
That will work for Mellanox cards (ConnectX-3 family).
I can't speak for ConnectX-4 because I have no experience with those right
now.
On 8/23/17 7:39 AM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>> On Aug 22, 2017, at 7:14 PM, Riccardo Veraldi
>> wrote:
>>
>> On 8/22/17 9:22 AM, Mannthey, Keith wrote:
>>> You… not expected.
>>>
>> Yes, they are automatically used on my Mellanox
thanks
Rick
> Thanks,
> Keith
> *From:* lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org]
> *On Behalf Of* Chris Horn
> *Sent:* Monday, August 21, 2017 12:40 PM
> *To:* Riccardo Veraldi; Arman Khalatyan
> *Cc:* lustr
thanks to everyone for helping
Riccardo
On 8/19/17 8:54 AM, Riccardo Veraldi wrote:
>
> I found out that ko2iblnd is not getting settings from
> /etc/modprobe/ko2iblnd.conf
> alias ko2iblnd-opa ko2iblnd
> options ko2iblnd-opa peer_credits=128 peer_credits_hiw=64 credits=1024
>
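One common cause of the quoted problem is the path: module options are read from /etc/modprobe.d/, not /etc/modprobe/. A sketch of the fix, using the option values from the quoted message (the ko2iblnd-probe install line appears elsewhere in this thread):

```shell
# Options must live in /etc/modprobe.d/ (note the ".d"); /etc/modprobe/ is ignored.
cat > /etc/modprobe.d/ko2iblnd.conf <<'EOF'
alias ko2iblnd-opa ko2iblnd
options ko2iblnd-opa peer_credits=128 peer_credits_hiw=64 credits=1024
install ko2iblnd /usr/sbin/ko2iblnd-probe
EOF
# Reload the module and confirm the options took effect.
modprobe -r ko2iblnd && modprobe ko2iblnd
cat /sys/module/ko2iblnd/parameters/peer_credits
```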
> run the following:
> tuned-adm profile latency-performance
> for more options use:
> tuned-adm list
>
> It will be interesting to see the difference.
>
> On 19.08.2017 at 3:57 AM, "Riccardo Veraldi"
> <riccardo.vera...@cnaf.infn.it> wrote:
>
> Hello Keith and Dennis, these are the tests I ran.
>
> * obdfilter-survey shows that I can saturate disk performance,
>
install ko2iblnd /usr/sbin/ko2iblnd-probe
On 8/18/17 7:05 PM, Dennis Nelson wrote:
> If all four servers are identical and all have IB, why are you
> specifying tcp when mounting the client?
>
> Sent from my iPhone
>
> On Aug 18, 2017, at 8:57 PM, Riccardo Veraldi
>
this is lustre.conf on the MDS
options lnet networks=tcp5(eth0)
Keith
> -Original Message-----
> From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On
> Behalf Of Riccardo Veraldi
> Sent: Thursday, August 17, 2017 10:48 PM
> To: Dennis Nelson ; lustre-discuss@lists.lustre.org
> Subject: Re: [lustre-discuss] Lustre poor
On 8/18/17 1:13 PM, Mannthey, Keith wrote:
> Is SELinux enabled on the client or server?
The first thing I always do is disable SELinux.
It's not running.
>
> Thanks,
> Keith
> -Original Message-----
> From: Riccardo Veraldi [mailto:riccardo.vera...@cnaf.infn.it]
at the OBD layer in Lustre.
>
> Thanks,
> Keith
> -Original Message-
> From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On
> Behalf Of Riccardo Veraldi
> Sent: Thursday, August 17, 2017 10:48 PM
> To: Dennis Nelson ; lustre-discuss@lists.lustre.
this is my lustre.conf
[drp-tst-ffb01:~]$ cat /etc/modprobe.d/lustre.conf
options lnet networks=o2ib5(ib0),tcp5(enp1s0f0)
data transfer is over infiniband
ib0: flags=4163 mtu 65520
inet 172.21.52.83 netmask 255.255.252.0 broadcast 172.21.55.255
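Given that configuration, a client can be pointed at the IB network explicitly when mounting. A hedged sketch; the MGS NID and fsname below are placeholders, not values confirmed in this thread:

```shell
# Hypothetical mount over the o2ib5 network; 172.21.52.1@o2ib5 (MGS NID)
# and the fsname "drpffb" are assumptions.
mount -t lustre 172.21.52.1@o2ib5:/drpffb /mnt/lustre
# With networks=o2ib5(ib0),tcp5(enp1s0f0), Lustre traffic then uses ib0.
```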
On 8/17/17 10:45 PM, Riccardo Veraldi
On 8/17/17 9:22 PM, Dennis Nelson wrote:
> It appears that you are running iozone on a single client? What kind of
> network is tcp5? Have you looked at the network to make sure it is not the
> bottleneck?
>
Yes, the data transfer is on the ib0 interface, and I did a memory-to-memory
test through Inf
On 8/17/17 8:56 PM, Jones, Peter A wrote:
> Riccardo
>
> I expect that it will be useful to know which version of ZFS you are using
Apologies for not mentioning this; I am running 0.7.1.
>
> Peter
> On 8/17/17, 8:21 PM, "lustre-discuss on behalf of
Hello,
I am running Lustre 2.10.0 on CentOS 7.3.
I have one MDS and two OSSes, each with one OST.
Each OST is a ZFS raidz1 of six NVMe disks.
The ZFS configuration is done in a way to allow maximum write
performance:
zfs set sync=disabled drpffb-ost02
zfs set atime=off drpffb-ost02
zfs set
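The truncated settings list above might be completed along these lines. This is a hedged sketch: only the first two commands appear in the message, the rest are assumptions, and sync=disabled trades safety for speed:

```shell
zfs set sync=disabled drpffb-ost02    # from the message; risks losing
                                      # recent writes on power failure
zfs set atime=off drpffb-ost02        # from the message
zfs set recordsize=1M drpffb-ost02    # assumption: common for streaming I/O
zfs set compression=off drpffb-ost02  # assumption
```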
, Dilger, Andreas wrote:
> On Apr 6, 2017, at 20:05, Riccardo Veraldi
> wrote:
>> I figured out what it was
> It's always nice in cases like this to follow up with an explanation of what
> was wrong, so that in case anyone else has a similar problem they can see the
> sol
hello.
I am installing lustre-dkms-2.10.0 on RHEL74 running kernel
4.4.82-1.el7.elrepo.x86_64
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with
'/kernel', '/updates', or '/extra' in record #24.
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with
'/kernel',
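The error means every DEST_MODULE_LOCATION entry in dkms.conf must begin with /kernel, /updates, or /extra. A hypothetical workaround; the dkms.conf path and module name/version are assumptions:

```shell
# Rewrite every DEST_MODULE_LOCATION entry to point under /extra, then
# rebuild. The source path is an assumption for lustre-dkms-2.10.0.
sed -i 's#^DEST_MODULE_LOCATION\[\([0-9]*\)\]=.*#DEST_MODULE_LOCATION[\1]="/extra"#' \
    /usr/src/lustre-2.10.0/dkms.conf
dkms build lustre/2.10.0 -k "$(uname -r)"
```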
I figured out the problem: it was a wrong setting on the client side.
On 8/7/17 8:09 PM, Riccardo Veraldi wrote:
> it is like if my /etc/modprobe.d/lustre.conf gets completely ignored
> when lnet module is loaded
>
> On 8/7/17 7:05 PM, Cowe, Malcolm J wrote:
>> Lustre file system nam
> You can do this with tunefs.lustre on all the storage targets, but I can't
> remember if you need to use --erase-params and recreate all the options.
> Alternatively, reformat.
>
> Malcolm.
>
> On 8/8/17, 11:30 am, "lustre-discuss on behalf of Riccardo Veraldi"
> <riccardo.vera...@cnaf.infn.it> wrote:
>
> Trying to debug this problem more, it looks like TCP port 9888 is closed on
> the MDS.
> This is weird. The lnet module is
that I need to take care of in the configuration ?
On 8/7/17 6:13 PM, Riccardo Veraldi wrote:
> Hello,
>
> I have a new Lustre cluster based on Lustre 2.10.0/ZFS 0.7.0 on Centos 7.3
> Lustre FS creation went smooth.
> When I then tried to mount from the clients, Lustre is not able to
Hello,
I have a new Lustre cluster based on Lustre 2.10.0/ZFS 0.7.0 on CentOS 7.3.
Lustre FS creation went smoothly.
When I then tried to mount from the clients, Lustre is not able to mount
any of the OSTs.
It stops at MGS/MDT level.
this is from the client side:
mount.lustre: mount 192.168.48.254
dkms is much more convenient if you plan to stay up to date with kernel
patching.
You do not need to rebuild a new kmod RPM every time yum update
upgrades your kernel to the latest patch level.
Also, I do not like that weak-updates kernel module solution (with
symbolic links).
I am using dkms buildin
you can use the dkms lustre package (build it from rpms) and get rid of
kmod dependencies
On 7/17/17 9:42 AM, Götz Waschk wrote:
> Hi Peter,
>
> I wasn't able to install the official binary build of
> kmod-lustre-osd-zfs, even with kmod-zfs-0.6.5.9-1.el7_3.centos from
> zfsonlinux.org, the ks
Hello,
on one of my Lustre filesystems I need to find a solution so that users can
still access data on the FS but cannot write new files to it.
I have hundreds of clients accessing the FS, so remounting it read-only is
not really feasible.
Is there an option on the OSS side to allow OSTs to be accessed jus
trying to install lustre-dkms on 4.4.76-1.el7.elrepo.x86_64
Loading new lustre-client-2.9.59 DKMS files...
Building for 4.4.76-1.el7.elrepo.x86_64
Building initial module for 4.4.76-1.el7.elrepo.x86_64
Done.
dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with
'/kernel', '/update
Hello,
I have a high-volume data transfer between my Lustre filesystems.
I upgraded to Lustre 2.9.0 on the server side and Lustre 2.9.59 on the client
side (because of a corruption bug).
My clients running 2.9.59 hang, and I need to reboot them; at about
the same time these are the kind of e
debug/lnet/devices and so
on...
So basically running "lustre check" from the Lustre client is not
possible as a normal user, while it was possible before.
Is there a workaround, or is it intended to be like that?
thanks
On 6/20/17 3:34 PM, Dilger, Andreas wrote:
> On Jun 20, 2017, at
Hello,
I built lustre-client 2.9.59 from source as a dkms package.
Everything works fine, but /proc/fs/lustre/version disappeared, while it
was there with Lustre client 2.9.0.
Is this normal?
thanks
Rick
___
lustre-discuss mailing list
lustre-discuss@l
Hello,
I am planning to upgrade my MDS and OSSes (several of them) to Lustre 2.9.0.
I would rather avoid that, but my clients are all RHEL 7.3 with the 2.9.0
Lustre client.
Apparently the 2.9.0 client is working fine with the 2.4.1 server.
The question is, if I decide to upgrade, how can I go from 2.4.1 to
way around, as the combination is not
> supported and has not been tested.
>
> I strongly suggest you upgrade your servers! There are lots of handy
> new features, and you would avoid this problem entirely.
>
> - Patrick
> *From:* lustre-discuss on
> behalf of Riccardo Veraldi
> *Sent:* Saturday, May 6, 2017 5:25:40 PM
> *To:* lustre-discuss@lists.lustre.org
> *Subject:* [lustre-discuss]
Hello,
I moved many of my Lustre clients to 2.9.0. Anyway, the server version is
pretty old (2.4).
Do I have to worry?
Things seem to be working, though.
May 4 14:37:48 psana1620 kernel: [ 43.145108] Lustre: Server MGS
version (2.4.1.0) is much older than client. Consider upgrading server
(2.9.0)
Hello,
I am building lustre-client from the src RPM on RHEL 7.3.
It fails with this error during the install process:
+ echo /etc/init.d/lnet
+ echo /etc/init.d/lsvcgss
+ find /root/rpmbuild/BUILDROOT/lustre-client-2.9.0-1.el7.x86_64 -name
'*.so' -type f -exec chmod +x '{}' ';'
+ '[' -d
/root/rpmbuild/
Thanks, I am about to proceed.
On 4/10/17 1:21 PM, Dilger, Andreas wrote:
> On Apr 6, 2017, at 09:24, Riccardo Veraldi
> wrote:
>> Hello,
>>
>> I plan to upgrade both my MDS and OSS server which are now RHEL72 Lustre
>> 2.8.0 + ZFS 0.6.5.8 to RHEL73 Lustre 2.9.0
Hello,
when I try to install the Lustre 2.9.0 RPMs, or build my own and install
them, I have a lot of dependency failures.
The kernel is 3.10.0-514.10.2.el7.x86_64
zfs-0.6.5.9-1.el7_3.centos.x86_64
zfs-dkms-0.6.5.9-1.el7_3.centos.noarch
libzfs2-0.6.5.9-1.el7_3.centos.x86_64
libzfs2-devel-0.6.5.9-1.el7_
I figured out what it was
On 4/6/17 6:02 PM, Riccardo Veraldi wrote:
> Hello,
>
> I am trying to build the lustre 2.9.0 server binary rpm.
>
> the default provided rpm will not work because they are built on a
> different version of ZFS.
>
> I am on RHEL73 kernel 3.
Hello,
I am trying to build the lustre 2.9.0 server binary rpm.
the default provided rpm will not work because they are built on a
different version of ZFS.
I am on RHEL73 kernel 3.10.0-514.10.2.el7.x86_64
I installed lustre-2.9.0-1.src.rpm and lustre-dkms-2.9.0-1.el7.src.rpm
when I try to b
Hello,
I plan to upgrade both my MDS and OSS servers, which are now RHEL 7.2 Lustre
2.8.0 + ZFS 0.6.5.8, to RHEL 7.3 Lustre 2.9.0 and ZFS 0.6.5.9.
I plan to first upgrade the MDS, then the OSS servers.
Is there any risk of losing data?
My idea was to shut down Lustre and upgrade the operating system and
the
Yes it worked fine, thank you!
On 1/31/17 5:12 AM, Bob Ball wrote:
> Just "zpool scrub ". Scrub may slow down access, but it does
> not otherwise impact the OST, in my experience.
>
> bob
>
> On 1/30/2017 9:52 PM, Riccardo Veraldi wrote:
>> Hello,
>>
Hello,
I need to scrub the underlying ZFS data pools on my Lustre OSSes.
May I do it safely while the Lustre filesystem is mounted?
thanks
Rick
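Per the reply quoted above, a scrub can run while the pools are online; it only adds background I/O. A sketch, with a placeholder pool name:

```shell
zpool scrub ostpool       # start an online scrub; "ostpool" is a placeholder
zpool status ostpool      # shows scrub progress and any repaired errors
zpool scrub -s ostpool    # stop the scrub if the extra I/O hurts too much
```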
al kernel/fs/lustre/
>
> --Jeff
>
> On Tue, Nov 29, 2016 at 10:27 PM, Riccardo Veraldi
> <riccardo.vera...@cnaf.infn.it> wrote:
>
> I fixed it building Lustre 2.8.60 and it works.
> Anyway the kernel modules osd_zfs.ko and so on are placed in
accordingly to rebuild it properly.
Any hint on how to restore the standard path of the lustre,
lnet, and osd_zfs kernel modules?
thank you
Riccardo
On 11/29/16 2:25 PM, Riccardo Veraldi wrote:
> Hello.
>
> Today I rebuilt Lustre for the new kernel which is inside RHEL
>
Hello.
Today I rebuilt Lustre for the new kernel which is inside RHEL
7.3/CentOS 7.3 3.10.0-514.el7.x86_64
I do not know what changed in the distribution but it is not compiling
anymore.
What changed in my environment was a yum update which brought the system
from RHEL 7.2 kernel 3.10.0-327.36.3.e
Hello,
on my Lustre OSSs I have the following settings in ko2iblnd
alias ko2iblnd-opa ko2iblnd
options ko2iblnd-opa peer_credits=128 peer_credits_hiw=64 credits=1024
concurrent_sends=256 ntx=2048 map_on_demand=32 fmr_pool_size=2048
fmr_flush_trigger=512 fmr_cache=1
install ko2iblnd /usr/sbin
On 14 October 2016 at 08:42, Dilger, Andreas wrote:
On Oct 13, 2016 19:02, Riccardo Veraldi wrote:
>
> Hello,
> will the lustre 2.9.0 rpm be released on the Intel site ?
On 14/10/16 14:31, Mark Hahn wrote:
anyway if I force direct I/O, for example using oflag=direct in dd,
the write performance drop as low as 8MB/sec
with 1MB block size. And each write it's about 120ms latency.
but that's quite a small block size. do you approach buffered
performance
if you
*From:* lustre-discuss on behalf of Patrick Farrell
*Sent:* Friday, October 14, 2016 3:12:22 PM
*To:* Riccardo Veraldi; lustre-discuss@lists.lustre.org
What direct write performance were you hoping for? It will
never match that 800 MB/s from one thread you see with buffered I/O.
- Patrick
*From:* lustre-discuss on
behalf of Riccardo Veraldi
*Sent:* Friday, October 14, 2016
Hello,
I would like to know how I may improve the situation of my Lustre cluster.
I have 1 MDS and 1 OSS with 20 OSTs defined.
Each OST is an 8-disk RAIDZ2.
Single-process write performance is around 800MB/sec;
anyway, if I force direct I/O, for example using oflag=direct in dd, the
write performanc
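The buffered-versus-direct comparison described above can be reproduced with dd; the file path and sizes below are placeholders:

```shell
# Buffered write: the page cache absorbs per-request latency.
dd if=/dev/zero of=/mnt/lustre/test bs=1M count=4096
# O_DIRECT write: each 1MB request is synchronous, so per-RPC latency dominates.
dd if=/dev/zero of=/mnt/lustre/test bs=1M count=4096 oflag=direct
# Larger direct I/O sizes usually recover much of the bandwidth.
dd if=/dev/zero of=/mnt/lustre/test bs=64M count=64 oflag=direct
```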
Hello,
will the lustre 2.9.0 rpm be released on the Intel site ?
Also the latest rpm for zfsonlinux available is 0.6.5.8
thank you
Riccardo
On 13/10/16 11:16, Dilger, Andreas wrote:
On Oct 13, 2016, at 10:32, E.S. Rosenberg wrote:
On Fri, Oct 7, 2016 at 9:16 AM, Xiong, Jinshan wrote:
On
thank you I did not know about LU-4865
On 03/08/16 17:01, Dilger, Andreas wrote:
On Aug 3, 2016, at 12:32, Jeff Johnson wrote:
On 8/3/16 10:57 AM, Dilger, Andreas wrote:
On Jul 29, 2016, at 03:33, Oliver Mangold wrote:
On 29.07.2016 04:19, Riccardo Veraldi wrote:
I am using lustre on ZFS
On 03/08/16 10:57, Dilger, Andreas wrote:
On Jul 29, 2016, at 03:33, Oliver Mangold wrote:
On 29.07.2016 04:19, Riccardo Veraldi wrote:
I am using lustre on ZFS.
While write performances are excellent also on smaller files, I find
there is a drop down in performance
on reading 20KB files
Hello,
I have a lustre cluster on rhel7, 6 OSS each of them has 3 OSTs and 1 MDS.
I am using lustre on ZFS.
While write performance is excellent even on smaller files, I find
there is a drop in performance
when reading 20KB files. Performance can go as low as 200MB/sec or even less.
I am
connecting to both
filesystems. One of the filesystems will need to regenerate the configuration to use "tcp1" and "o2ib1" (or whatever)
to allow the clients to distinguish between the different networks.
Sounds like that.
How do I do it?
I am using Lustre on ZFS.
Hello,
I am in a situation in which I have a bunch of Lustre clients which may
access two different Lustre
infrastructures. One is working over TCP and the other over InfiniBand.
So I have 2 OSS groups. One group is reachable through TCP/Ethernet and
the other group through InfiniBand.
The prob
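One way to realize the separation suggested in the replies, regenerating the target configuration with distinct network labels, can be sketched as follows. NIDs and the device path are placeholders, and the targets must be unmounted:

```shell
# Give one filesystem its own network labels (tcp1/o2ib1) so clients can
# tell the two fabrics apart. 10.0.0.1 and /dev/sdb are placeholders.
tunefs.lustre --erase-params \
    --mgsnode=10.0.0.1@o2ib1 \
    /dev/sdb
# Clients then pick the matching networks, e.g. in /etc/modprobe.d/lustre.conf:
#   options lnet networks=o2ib1(ib0),tcp1(eth0)
```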
his first boot stage, your hand-edited zfs.conf
is not present (it's in the root filesystem that hasn't yet been mounted).
Therefore the zfs module does not get those options passed to it during load.
Olaf P. Faaland
Livermore Computing
________
Faaland, Olaf P. [faala...@llnl.gov]
Sent: Tuesday, July 12, 2016 11:48 AM
To: Riccardo Veraldi; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem loading zfs properties for a lustre/zfs fs
Riccardo,
Does your initramfs still include zfs-related files? What is the output of
-Olaf
____
From: Riccardo Veraldi [riccardo.vera...@cnaf.infn.it]
Sent: Monday, July 11, 2016 4:55 PM
To: Faaland, Olaf P.; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] problem loading zfs properties for a lustre/zfs fs
On 11/07/16 16:15, Faala
zfs.conf then rebooted the system,
but they are not loaded
thank you
Riccardo
Olaf P. Faaland
Livermore Computing
From: lustre-discuss [lustre-discuss-boun...@lists.lustre.org] on behalf of
Riccardo Veraldi [riccardo.vera...@cnaf.infn.it]
Sent: Monday
Hello,
I am tailoring my system for Lustre on ZFS and I am not able to set
these parameters,
writing the config file /etc/modprobe.d/zfs.conf with the following options:
options zfs zfs_prefetch_disable=1
options zfs zfs_txg_history=120
options zfs metaslab_debug_unload=1
when I check the parame
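The values the module actually loaded with can be checked under /sys/module/zfs/parameters; a sketch using the parameter names from the message above:

```shell
# Check what the zfs module actually loaded with.
cat /sys/module/zfs/parameters/zfs_prefetch_disable   # expect 1
cat /sys/module/zfs/parameters/zfs_txg_history        # expect 120
# Most ZFS tunables can also be changed at runtime, without a reboot:
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
```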
I also can say this procedure works fine with a little tweak to the
.spec file
for building the server packages.
It works well also with kernel 3.18.33, the latest one I tested with
Lustre/ZFS
On 24/06/16 13:15, Christopher J. Morrone wrote:
Yes, it is all a lot harder than it should be at
The error indeed does exist. It happened to me too.
For my own Lustre server needs I just fixed it by commenting out two lines
in the .spec file, since I am not running Lustre in High Availability.
On 29/06/16 11:41, Christopher J. Morrone wrote:
On 06/29/2016 10:36 AM, Martin Hecht wrote:
Hello
Hello,
Actually I have a problem which could be similar to this:
https://groups.google.com/forum/#!topic/lustre-discuss-list/ED6rxVGuKM8
I am running OSS and MDS on CentOS 6.4 with Lustre 2.4.
Suddenly my JBOD array failed. It has something like 12 OSTs on it.
When I established it back, one of
On 04/05/16 00:42, Tommi T wrote:
LU-7404 zfs: Reset default zfs version to 0.6.5.5
very good news then!!
You need to use 0.6.4.2;
this is because Lustre has I/O performance issues with the latest ZFS
version for Linux.
I used ZFS 0.6.4 and it works.
Only you have to build your own Lustre 2.8.0 RPMs, and eventually build
them without ldiskfs support.
I was not interested in ldiskfs so I just took it
Hello,
I am building lustre 2.8.0 rpm on RHEL7.
rpmbuild --without ldiskfs --without ldiskfsprogs --without lustre_tests
--without lustre_modules --with zfs -ba lustre.spec
But there are quite a few problems in the %install section of the SPEC
file I think.
some files are missing.
First I g
Hello,
I am trying to deploy a small cluster with a "Lustre on ZFS" setup for
initial testing of an SSD solution.
I followed a few hints on the following resources:
http://zfsonlinux.org/lustre-configure-single.html
http://zfsonlinux.org/lustre.html
I am working on rhel7
I installed the following