[lustre-discuss] Compiling client issue lustre 2.12.9

2024-05-17 Thread Jerome Verleyen via lustre-discuss

Dear all

I need to install the client packages for Lustre v2.12.9 on some AlmaLinux 
8.9 systems. As I couldn't find pre-built RPMs, I am trying to build them from 
the source RPM, following this recommendation from the Lustre wiki:
https://wiki.whamcloud.com/display/PUB/Rebuilding+the+Lustre-client+rpms+for+a+new+kernel
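
For reference, the client-only rebuild I am attempting follows that page 
roughly like this (a hedged sketch; package names and the exact src.rpm file 
name may differ on AlmaLinux 8.9):

# build prerequisites for a client-only rebuild
dnf install -y kernel-devel-$(uname -r) rpm-build gcc make libtool

# rebuild only the client packages from the source RPM
rpmbuild --rebuild --without servers lustre-2.12.9-1.src.rpm

# the resulting client RPMs end up under ~/rpmbuild/RPMS/x86_64/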



I am running into a compile error that I have not been able to resolve so far:

make[3]: Entering directory '/usr/src/kernels/4.18.0-513.24.1.el8_9.x86_64'
  CC [M] /home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/llite/vvp_io.o
In file included from include/linux/string.h:254,
 from include/linux/bitmap.h:9,
 from include/linux/cpumask.h:12,
 from include/linux/smp.h:13,
 from include/linux/lockdep.h:15,
 from include/linux/mutex.h:17,
 from include/linux/kernfs.h:13,
 from include/linux/sysfs.h:16,
 from include/linux/kobject.h:20,
 from 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/include/obd.h:36,
 from 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/llite/vvp_io.c:41:

In function 'fortify_memset_chk',
    inlined from 'vvp_io_init' at 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/llite/vvp_io.c:1520:2:
include/linux/fortify-string.h:239:4: error: call to 
'__write_overflow_field' declared with attribute warning: detected write 
beyond size of field (1st parameter); maybe use struct_group()? [-Werror]

    __write_overflow_field(p_size_field, size);
    ^~
cc1: all warnings being treated as errors
make[6]: *** [scripts/Makefile.build:318: 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/llite/vvp_io.o] Error 1
make[5]: *** [scripts/Makefile.build:558: 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre/llite] Error 2
make[4]: *** [scripts/Makefile.build:558: 
/home/jerome/rpmbuild/SOURCES/lustre-2.12.9/lustre] Error 2
make[3]: *** [Makefile:1619: 
_module_/home/jerome/rpmbuild/SOURCES/lustre-2.12.9] Error 2

make[3]: Leaving directory '/usr/src/kernels/4.18.0-513.24.1.el8_9.x86_64'
make[2]: *** [autoMakefile:1123: modules] Error 2
make[2]: Leaving directory '/home/jerome/rpmbuild/SOURCES/lustre-2.12.9'
make[1]: *** [autoMakefile:661: all-recursive] Error 1
make[1]: Leaving directory '/home/jerome/rpmbuild/SOURCES/lustre-2.12.9'
make: *** [autoMakefile:519: all] Error 2


On another mailing list, the recommendation was to add a CFLAGS option such as 
-D_FORTIFY_SOURCE=0. However, that option did not resolve my issue.
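
(Presumably this is because the in-kernel checks in fortify-string.h are 
enabled by CONFIG_FORTIFY_SOURCE in the kernel configuration rather than by 
the userspace _FORTIFY_SOURCE macro.) As far as I understand it, the checker 
objects to a memset() whose destination is a single struct member but whose 
size covers the members that follow it. A small userspace illustration of the 
pattern, not the actual Lustre code:

/* illustration only -- not the code from vvp_io.c */
#include <stddef.h>
#include <string.h>
#include <stdio.h>

struct io_state {
    int  flags;
    long start;        /* zeroing begins at this field ...        */
    long length;       /* ... but the size also spans these,      */
    char buffer[16];   /* which is what FORTIFY_SOURCE objects to */
};

int main(void)
{
    struct io_state s;

    /* the flagged pattern: destination is one field, size covers several */
    memset(&s.start, 0, sizeof(s) - offsetof(struct io_state, start));

    /* accepted alternatives: zero the whole struct, or each field
     * separately; in-kernel code can also wrap the fields in the
     * struct_group() macro that the error message suggests */
    memset(&s, 0, sizeof(s));

    printf("flags=%d start=%ld\n", s.flags, s.start);
    return 0;
}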


I hope someone can help me with this.

Best regards.

--
-- Jérôme
 A fine young man, he can't be far off his 75 kilos.
- I didn't weigh him!
- At that weight, I can embalm him for you Cleopatra-style, the Egyptian 
masterpiece, imperishable!
- But we're not asking you to preserve him, we're asking you to destroy him!
(Michel Audiard)

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Unexpected result with overstriping

2024-05-17 Thread bauerj
Andreas,
Thanks for the update. I should point out that the documentation for -C is 
incorrect in that it states all the OSTs in the file system will be used, but 
in actuality all the OSTs in the selected pool will be used. 
John

Sent from my iPhone

> On May 17, 2024, at 11:37 AM, Andreas Dilger  wrote:
> 
> There is a patch in flight that adds "-C -N" -> "N stripes per OST", N <= 
> 32.  This also changes "-C -1" to be the same as "-c -1" (i.e. 1 stripe per 
> OST), which was discussed and decided to be the more sensible option than 
> using 2000 stripes for a file. If there is a need for a 2000-stripe file, "-C 
> 2000" can still be used.
> 
> https://review.whamcloud.com/54192
> 
> Cheers, Andreas
> 
>>> On May 17, 2024, at 17:32, Nathan Dauchy via lustre-discuss 
>>>  wrote:
>>> 
>> 
>> John,
>> 
>> I believe the lfs-setstripe man page is incorrect (or at least misleading) 
>> in this case. I recall seeing 2000 hardcoded as a maximum, so it appears to 
>> be picking that.
>> 
>> Using "-C -1" to put a single stripe on each OST wouldn't have any benefit 
>> over "-c -1".   IMHO, it would probably be more useful to have negative 
>> values represent number of stripes per OST. 
>> 
>> -Nathan
>> From: lustre-discuss  on behalf of 
>> John Bauer 
>> Sent: Friday, May 17, 2024 8:48 AM
>> To: lustre-discuss 
>> Subject: [lustre-discuss] Unexpected result with overstriping
>>  
>> External email: Use caution opening links or attachments
>> 
>> Good morning all,
>> 
>> I am playing around with overstriping a bit and I found a behavior that, to 
>> me, would seem unexpected.  The documentation for -C -1  indicates that the 
>> file should be striped over all available OSTs.  The pool, which happens to 
>> be the default, is ssd-pool which has 32 OSTs.  I got a stripeCount of 2000. 
>>  Is this as expected?
>> 
>> pfe20.jbauer2 213> rm -f /nobackup/jbauer2/ddd.dat
>> pfe20.jbauer2 214> lfs setstripe -C -1 /nobackup/jbauer2/ddd.dat
>> pfe20.jbauer2 215> lfs getstripe /nobackup/jbauer2/ddd.dat
>> /nobackup/jbauer2/ddd.dat
>> lmm_stripe_count:  2000
>> lmm_stripe_size:   1048576
>> lmm_pattern:   raid0,overstriped
>> lmm_layout_gen:    0
>> lmm_stripe_offset: 119
>> lmm_pool:          ssd-pool
>>     obdidx       objid        objid        group
>>        119     52386287     0x31f59ef         0
>>        123     52347947     0x31ec42b         0
>>        127     52734487     0x324aa17         0
>>        121     52839396     0x32643e4         0
>>        131     52742709     0x324ca35         0
>>        116     52242659     0x31d28e3         0
>>        117     51831125     0x316e155         0
>>        124     52425218     0x31ff202         0
>>        125     52402722     0x31f9a22         0
>>        106     52700581     0x32425a5         0
>> 
>> edited for brevity
>> 
>> 
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Unexpected result with overstriping

2024-05-17 Thread Nathan Dauchy via lustre-discuss
John,

I believe the lfs-setstripe man page is incorrect (or at least misleading) in 
this case. I recall seeing 2000 hardcoded as a maximum, so it appears to be 
picking that.

Using "-C -1" to put a single stripe on each OST wouldn't have any benefit over 
"-c -1".   IMHO, it would probably be more useful to have negative values 
represent number of stripes per OST. 
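
In the meantime, a hedged sketch of a workaround: count the OSTs in the pool 
and pass an explicit overstripe count (the fsname, pool, and path below are 
placeholders, and the pool_list parsing may need adjusting for your release):

# number of OSTs in the pool
NOSTS=$(lfs pool_list nbp.ssd-pool | tail -n +2 | wc -l)

# e.g. two stripes per OST in that pool
lfs setstripe -C $((2 * NOSTS)) -p ssd-pool /nobackup/jbauer2/eee.dat
lfs getstripe -c /nobackup/jbauer2/eee.dat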

-Nathan

From: lustre-discuss  on behalf of 
John Bauer 
Sent: Friday, May 17, 2024 8:48 AM
To: lustre-discuss 
Subject: [lustre-discuss] Unexpected result with overstriping

External email: Use caution opening links or attachments


Good morning all,

I am playing around with overstriping a bit and I found a behavior that, to me, 
would seem unexpected.  The documentation for -C -1  indicates that the file 
should be striped over all available OSTs.  The pool, which happens to be the 
default, is ssd-pool which has 32 OSTs.  I got a stripeCount of 2000.  Is this 
as expected?

pfe20.jbauer2 213> rm -f /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 214> lfs setstripe -C -1 /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 215> lfs getstripe /nobackup/jbauer2/ddd.dat
/nobackup/jbauer2/ddd.dat
lmm_stripe_count:  2000
lmm_stripe_size:   1048576
lmm_pattern:   raid0,overstriped
lmm_layout_gen:    0
lmm_stripe_offset: 119
lmm_pool:          ssd-pool
    obdidx       objid        objid        group
       119     52386287     0x31f59ef         0
       123     52347947     0x31ec42b         0
       127     52734487     0x324aa17         0
       121     52839396     0x32643e4         0
       131     52742709     0x324ca35         0
       116     52242659     0x31d28e3         0
       117     51831125     0x316e155         0
       124     52425218     0x31ff202         0
       125     52402722     0x31f9a22         0
       106     52700581     0x32425a5         0

edited for brevity

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Unexpected result with overstriping

2024-05-17 Thread John Bauer

Good morning all,

I am playing around with overstriping a bit and I found a behavior that, 
to me, would seem unexpected.  The documentation for -C -1  indicates 
that the file should be striped over all available OSTs.  The pool, 
which happens to be the default, is ssd-pool which has 32 OSTs.  I got a 
stripeCount of 2000.  Is this as expected?


pfe20.jbauer2 213> rm -f /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 214> lfs setstripe -C -1 /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 215> lfs getstripe /nobackup/jbauer2/ddd.dat
/nobackup/jbauer2/ddd.dat
lmm_stripe_count:  2000
lmm_stripe_size:   1048576
lmm_pattern:   raid0,overstriped
lmm_layout_gen:    0
lmm_stripe_offset: 119
lmm_pool:          ssd-pool
    obdidx       objid        objid        group
       119     52386287     0x31f59ef         0
       123     52347947     0x31ec42b         0
       127     52734487     0x324aa17         0
       121     52839396     0x32643e4         0
       131     52742709     0x324ca35         0
       116     52242659     0x31d28e3         0
       117     51831125     0x316e155         0
       124     52425218     0x31ff202         0
       125     52402722     0x31f9a22         0
       106     52700581     0x32425a5         0

edited for brevity

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] upgrade process from zfs pool based lustre-2.12 (RH7) to lustre-2.15 (RH8)

2024-05-17 Thread Peter Jones via lustre-discuss
Hi Bernd

I will defer to others on some of your points, but just to note that a 2.15.5 
release is expected out shortly with support for Alma 8.10 and ZFS 2.1.15, so, 
if your timing permits, perhaps you should defer until that release is 
available.

Peter

On 5/17/24, 6:25 AM, "lustre-discuss on behalf of Bernd Melchers via 
lustre-discuss" <lustre-discuss-boun...@lists.lustre.org on behalf of 
lustre-discuss@lists.lustre.org> wrote:


Hi All,
we plan to update our lustre from 2.12.9 to 2.15.4. We will update the OS from 
CentOS 7.9 to Alma 8.9, the recommended [1] OS version for 2.15.4. The system 
setup is based on ZFS, which is at (slightly patched) 0.7.13 and will be 
updated as
recommended to version 2.1.11.


We did not see a guide or experience report for this update. Any pointers to a 
document, or any hints about pitfalls to avoid? We were wondering if it is 
better practice to leave the zpools at the old level for a system that has 
seen its share of crashes in several years of operation, or if "zpool 
upgrade -a" is the better option prior to starting Lustre 2.15.


We are also thinking about using the most recent version of ZFS 2.1, which is 
2.1.15 rather than 2.1.11.


The content of our /etc/modprobe.d/zfs.conf is:
options zfs_arc_max=74959079424 zfs_vdev_scheduler=deadline 
metaslab_debug_unload=1 zfs_vdev_async_read_max_active=16 
zfs_vdev_aggregation_limit=1 zfs_vdev_async_write_active_min_dirty_percent=20
zfs_vdev_async_write_min_active=5 zfs_vdev_sync_read_min_active=16 
zfs_vdev_sync_read_max_active=16


and /etc/modprobe.d/lustre.conf is:
options lnet networks="o2ib0(ib0),tcp0(eno1)"
options ptlrpc at_min=40 at_max=400 ldlm_enqueue_min=260


Should we change/remove/add some parameters?


Best regards
Bernd Melchers


[1] https://wiki.whamcloud.com/display/PUB/Lustre+Support+Matrix 

-- 
Archiv- und Backup-Service | fab-serv...@zedat.fu-berlin.de 

Freie Universität Berlin | Tel. +49-30-838-55905

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] upgrade process from zfs pool based lustre-2.12 (RH7) to lustre-2.15 (RH8)

2024-05-17 Thread Bernd Melchers via lustre-discuss
Hi All,
we plan to update our lustre from 2.12.9 to 2.15.4. We will update the OS from 
CentOS 7.9 to Alma 8.9, the recommended [1] OS version for 2.15.4. The system 
setup is based on ZFS, which is at (slightly patched) 0.7.13 and will be 
updated as
recommended to version 2.1.11.

We did not see a guide or experience report for this update. Any pointers to a 
document, or any hints about pitfalls to avoid? We were wondering if it is 
better practice to leave the zpools at the old level for a system that has 
seen its share of crashes in several years of operation, or if "zpool 
upgrade -a" is the better option prior to starting Lustre 2.15.

We are also thinking about using the most recent version of ZFS 2.1, which is 
2.1.15 rather than 2.1.11.

The content of our /etc/modprobe.d/zfs.conf is:
options zfs_arc_max=74959079424 zfs_vdev_scheduler=deadline 
metaslab_debug_unload=1 zfs_vdev_async_read_max_active=16 
zfs_vdev_aggregation_limit=1 zfs_vdev_async_write_active_min_dirty_percent=20
zfs_vdev_async_write_min_active=5 zfs_vdev_sync_read_min_active=16 
zfs_vdev_sync_read_max_active=16
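
Before carrying this file over we plan to check which of these tunables still 
exist under ZFS 2.1, since some (zfs_vdev_scheduler, for example) were 
deprecated or removed in the 0.8/2.x series; a hedged sketch:

for p in zfs_arc_max zfs_vdev_scheduler metaslab_debug_unload \
         zfs_vdev_async_read_max_active zfs_vdev_aggregation_limit \
         zfs_vdev_async_write_active_min_dirty_percent \
         zfs_vdev_async_write_min_active zfs_vdev_sync_read_min_active \
         zfs_vdev_sync_read_max_active; do
    if [ -e /sys/module/zfs/parameters/$p ]; then
        echo "$p = $(cat /sys/module/zfs/parameters/$p)"
    else
        echo "$p : not present in this zfs module"
    fi
done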

and /etc/modprobe.d/lustre.conf is:
options lnet networks="o2ib0(ib0),tcp0(eno1)"
options ptlrpc at_min=40 at_max=400 ldlm_enqueue_min=260
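
We also understand that 2.15 favours the YAML-based lnet configuration; a 
hedged sketch of verifying the nets and snapshotting them for the lnet 
service (the modprobe options above should still work as well):

modprobe lnet
lnetctl lnet configure --all      # bring up nets from the module options
lnetctl net show                  # verify o2ib0(ib0) and tcp0(eno1) are up
lnetctl export > /etc/lnet.conf   # save as YAML for "systemctl start lnet"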

Should we change/remove/add some parameters?

Best regards
Bernd Melchers

[1] https://wiki.whamcloud.com/display/PUB/Lustre+Support+Matrix
-- 
Archiv- und Backup-Service | fab-serv...@zedat.fu-berlin.de
Freie Universität Berlin   | Tel. +49-30-838-55905
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org