Re: [lustre-discuss] [EXTERNAL] Re: Help with recovery of data

2022-06-22 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
Thanks Andreas – I appreciate the info.

I am dd’ing the MDT block devices (both of them – more details below) to
separate storage now.

I’ve written this up on the ZFS mailing list.

https://zfsonlinux.topicbox.com/groups/zfs-discuss/Tcb8a3ef663db0031/need-help-with-data-recovery-if-possible

Actually, in the process of doing that, I think I see what is going on.  More 
details in the ZFS post but it looks like the block device names for the ZFS 
volumes got swapped on the crash and reboot.  So /dev/zd0 is a clone of the 
February snapshot and /dev/zd16 is actually our primary (current) MDT.  If I 
mount zd16 and poke around, I see lots of files newer than February.


[root@hpfs-fsl-mds1 ~]# mount -t ldiskfs -o ro /dev/zd16 /mnt/mdt_backup/
[root@hpfs-fsl-mds1 ~]# cd /mnt/mdt_backup/
[root@hpfs-fsl-mds1 mdt_backup]# ls -l PENDING/
total 0
-rw------- 1 ecdavis2 damocles 0 Jun 17 11:03 0x200021094:0x3b26:0x0
-rw------- 1 rharpold rharpold 0 Jun 14 12:27 0x200021096:0x1337:0x0
[root@hpfs-fsl-mds1 mdt_backup]#

So it looks like we have a shot at recovery.  I hope to get more guidance on
the ZFS list on how to properly swap zd0 and zd16 back.  I’m also tarring up
the contents of the read-only mount of zd16.  In all:

dd if=/dev/zd0 of=/internal/zd0.dd.2022.06.22 bs=1M
dd if=/dev/zd16 of=/internal/zd16.dd.2022.06.22 bs=1M
cd /mnt/mdt_backup ; tar cf /internal/zd16.tar --xattrs 
--xattrs-include="trusted.*" --sparse .

Please let me know if there is something else we should consider doing before 
attempting recovery.
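One way to double-check which dataset or clone each zd device currently backs is
via the zvol symlinks udev creates (a sanity check only, not a fix; pool and
volume names are taken from the zfs list output quoted later in this thread, and
the device numbers may differ):

# Map the zdN block devices back to their ZFS volumes/clones via the udev symlinks.
ls -l /dev/zvol/mds1-0/
# Cross-check against the datasets, their clone origins, and creation times.
zfs list -t all -o name,origin,creation -r mds1-0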

Actually, I’m 100% certain this is our current MDT.  I see files and 
directories in /mnt/mdt_backup/ROOT that were just created in the last couple 
weeks.  Happy day.

One other question.  We are seeing a ton of these in the MDS logs since the 
crash.

Jun 22 21:53:16 hpfs-fsl-mds1 kernel: LustreError: 
14346:0:(qmt_handler.c:699:qmt_dqacq0()) $$$ Release too much! 
uuid:scratch-MDT-lwp-OST000f_UUID release: 67108864 granted:0, total:0  
qmt:scratch-QMT pool:dt-0x0 id:5697 enforced:0 hard:0 soft:0 granted:0 
time:0 qunit: 0 edquot:0 may_rel:0 revoke:0 default:yes

I assume this is not unexpected with an MDT that got reverted?

From: Andreas Dilger 
Date: Wednesday, June 22, 2022 at 4:48 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" 

Cc: "lustre-discuss@lists.lustre.org" 
Subject: [EXTERNAL] Re: [lustre-discuss] Help with recovery of data

First thing, if you haven't already done so, would be to make a separate "dd" 
backup of the ldiskfs MDT(s) to some external storage before you do anything 
else.  That will give you a fallback in case whatever changes you make don't 
work out well.

I would also suggest contacting the ZFS mailing list to ask if they can help 
restore the "new version" of the MDT at the ZFS level.  You may also want to 
consider a separate ZFS-level backup because the core of the problem appears to 
be ZFS related.  Unfortunately, the opportunity to recover a newer version of 
the ldiskfs MDT at the ZFS level declines the more changes are made to the ZFS 
pool.

I don't think LFSCK will repair the missing files on the MDT, since the OSTs 
don't have enough information to regenerate the namespace.  At most LFSCK will 
create stub files on the MDT under .lustre/lost+found that connect the objects 
for the new files created after your MDT snapshot, but they won't have proper 
filenames.  At most they will have UID/GID/timestamps to identify the 
owners/age, and the users would need to identify the files by content.




___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Help with recovery of data

2022-06-22 Thread Andreas Dilger via lustre-discuss
First thing, if you haven't already done so, would be to make a separate "dd" 
backup of the ldiskfs MDT(s) to some external storage before you do anything 
else.  That will give you a fallback in case whatever changes you make don't 
work out well.

I would also suggest contacting the ZFS mailing list to ask if they can help 
restore the "new version" of the MDT at the ZFS level.  You may also want to 
consider a separate ZFS-level backup because the core of the problem appears to 
be ZFS related.  Unfortunately, the opportunity to recover a newer version of 
the ldiskfs MDT at the ZFS level declines the more changes are made to the ZFS 
pool.

I don't think LFSCK will repair the missing files on the MDT, since the OSTs 
don't have enough information to regenerate the namespace.  At most LFSCK will 
create stub files on the MDT under .lustre/lost+found that connect the objects 
for the new files created after your MDT snapshot, but they won't have proper 
filenames.  At most they will have UID/GID/timestamps to identify the 
owners/age, and the users would need to identify the files by content.
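If it does come to that, the stubs LFSCK creates end up under the filesystem's
.lustre/lost+found directory, and users could triage them by owner and age along
these lines (the client mount point /scratch and the MDT0000 subdirectory are
assumptions; adjust to the actual setup):

cd /scratch/.lustre/lost+found/MDT0000    # mount point and MDT index assumed
# list stubs with owner and mtime so users can pick out likely candidates
ls -la --time-style=long-iso | sort -k3,3 -k6,7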


On Jun 22, 2022, at 10:46, Vicker, Darby J. (JSC-EG111)[Jacobs Technology,
Inc.] via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:

A quick follow up.  I thought an lfsck would only clean up (i.e. remove 
orphaned MDT and OST objects) but it appears this might have a good shot at 
repairing the file system – specifically, recreating the MDT objects with the 
--create-mdtobj option.  We have started this command:

[root@hpfs-fsl-mds1 ~]# lctl lfsck_start -M scratch-MDT --dryrun on 
--create-mdtobj on

And after running for about an hour we are already seeing this from the query:

layout_repaired: 4645105

Can anyone confirm this will work for our situation – i.e. repair the metadata 
for the OST objects that were orphaned when our metadata got reverted?

From: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" 
mailto:darby.vicke...@nasa.gov>>
Date: Tuesday, June 21, 2022 at 5:27 PM
To: "lustre-discuss@lists.lustre.org" 
mailto:lustre-discuss@lists.lustre.org>>
Subject: Help with recovery of data

Hi everyone,

We ran into a problem with our lustre filesystem this weekend and could use a 
sanity check and/or advice on recovery.

We are running on CentOS 7.9, ZFS 2.1.4 and Lustre 2.14.  We are using ZFS
OSTs and an ldiskfs MDT (for better MDT performance).  For various reasons,
the ldiskfs is built on a zvol.  Every night we (intend to) back up the
metadata by snapshotting the zvol, mounting the MDT via ldiskfs, tarring up
the contents, then unmounting and removing the ZFS snapshot.  On Sunday (6/19 at about 4
pm), the metadata server crashed.  It came back up fine but users started 
reporting many missing files and directories today (6/21) – everything since 
about February 9th is gone.  After quite a bit of investigation, it looks like 
the MDT got rolled back to a snapshot of the metadata from February.

[root@hpfs-fsl-mds1 ~]# zfs list -t snap mds1-0/meta-scratch
NAME   USED  AVAIL REFER  MOUNTPOINT
mds1-0/meta-scratch@snap  52.3G  - 1.34T  -
[root@hpfs-fsl-mds1 ~]# zfs get all mds1-0/meta-scratch@snap | grep creation
mds1-0/meta-scratch@snap  creation  Thu Feb 10  3:35 2022  -
[root@hpfs-fsl-mds1 ~]#

We discovered that our MDT backups have been stalled since February: the
first step is to create mds1-0/meta-scratch@snap, and because that snapshot
already existed, the script was erroring out.  We have rebooted this MDS
several times (gracefully) since February with no issues but, apparently,
whatever happened in the server crash on Sunday caused the MDT to revert to
the February data.  So, in theory, the data on the OSTs is still there; we
are just missing the metadata due to the ZFS glitch.

So the first question - is anyone familiar with this failure mode of ZFS, or
with a way to recover from it?  I think it’s unlikely there are any direct ZFS
recovery options, but I wanted to ask.

Obviously, MDT backups would be our best recovery option but since this was all 
caused by the backup scripts stalling (and the subsequent rolling back to the 
last snapshot), our backups are the same age as the current data on the 
filesystem.

[root@hpfs-fsl-mds1 ~]# ls -lrt /internal/ldiskfs_backups/
total 629789909
-rw-r--r-- 1 root root 1657 Apr 30  2019 process.txt
-rw-r--r-- 1 root root 445317560320 Jan 25 15:36 
mds1-0_meta-scratch-2022_01_25.tar
-rw-r--r-- 1 root root 446230016000 Jan 26 15:31 
mds1-0_meta-scratch-2022_01_26.tar
-rw-r--r-- 1 root root 448093808640 Jan 27 15:46 
mds1-0_meta-scratch-2022_01_27.tar
-rw-r--r-- 1 root root 440368783360 Jan 28 16:56 
mds1-0_meta-scratch-2022_01_28.tar
-rw-r--r-- 1 root root 442342113280 Jan 29 14:45 
mds1-0_meta-scratch-2022_01_29.tar
-rw-r--r-- 1 root root 442922567680 Jan 30 15:03 

Re: [lustre-discuss] Installing 2.15 on rhel 8.5 fails

2022-06-22 Thread Jian Yu via lustre-discuss
Hi Thomas,

The issue is being fixed in https://jira.whamcloud.com/browse/LU-15962.
A workaround is to build Lustre with the "--with-o2ib=<path>" configure option,
where <path> is the directory in which the in-kernel Module.symvers is located.
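For example, something along these lines (the kernel source path is illustrative
and depends on where the kernel-devel tree with Module.symvers is installed):

# point Lustre's configure at the in-kernel source tree containing Module.symvers
./configure --with-linux=/usr/src/kernels/$(uname -r) \
            --with-o2ib=/usr/src/kernels/$(uname -r)
make rpms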

--
Best regards,
Jian Yu 
 

-Original Message-
From: lustre-discuss  on behalf of 
Thomas Roth via lustre-discuss 
Reply-To: Thomas Roth 
Date: Wednesday, June 22, 2022 at 10:32 AM
To: Andreas Dilger 
Cc: lustre-discuss 
Subject: Re: [lustre-discuss] Installing 2.15 on rhel 8.5 fails

Hmm, but we are using the in-kernel OFED, so this makes these messages all 
the more mysterious.
Regards,
Thomas

On 22/06/2022 19.12, Andreas Dilger wrote:
> On Jun 22, 2022, at 10:40, Thomas Roth via lustre-discuss
<lustre-discuss@lists.lustre.org> wrote:
> 
> my rhel8 system is actually an Alma Linux 8.5 installation; this is the
first time the compatibility with alleged rhel8.5 software fails...
> 
> 
> The system is running kernel '4.18.0-348.2.1.el8_5'
> This version string can also be found in the package names in
> 
https://downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
> - this is usually a good sign.
> 
> However, installation of kmod-lustre-2.15.0-1.el8 yields the well known 
"depmod: WARNINGs", like
>> 
/lib/modules/4.18.0-348.2.1.el8_lustre.x86_64/extra/lustre/net/ko2iblnd.ko 
needs unknown symbol __ib_alloc_pd
> 
> 
> The kernel from 
downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
identifies itself as "CentOS" and does not want to boot - so that is not an option either.
> 
> 
> Any hints how to proceed?
> 
> The ko2iblnd module is built against the in-kernel OFED, so if you are 
using MOFED you will need to rebuild the kernel modules themselves.  If you 
don't use IB at all you can ignore these depmod messages.
> 
> Cheers, Andreas
> --
> Andreas Dilger
> Lustre Principal Architect
> Whamcloud
> 

-- 

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Installing 2.15 on rhel 8.5 fails

2022-06-22 Thread Thomas Roth via lustre-discuss

Hmm, but we are using the in-kernel OFED, so this makes these messages all the 
more mysterious.
Regards,
Thomas

On 22/06/2022 19.12, Andreas Dilger wrote:

On Jun 22, 2022, at 10:40, Thomas Roth via lustre-discuss
<lustre-discuss@lists.lustre.org> wrote:

my rhel8 system is actually an Alma Linux 8.5 installation; this is the first
time the compatibility with alleged rhel8.5 software fails...


The system is running kernel '4.18.0-348.2.1.el8_5'
This version string can also be found in the package names in
https://downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
- this is usually a good sign.

However, installation of kmod-lustre-2.15.0-1.el8 yields the well known "depmod: 
WARNINGs", like

/lib/modules/4.18.0-348.2.1.el8_lustre.x86_64/extra/lustre/net/ko2iblnd.ko 
needs unknown symbol __ib_alloc_pd



The kernel from 
downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64 
identifies itself as "CentOS" and does not want to boot - so that is not an option either.


Any hints how to proceed?

The ko2iblnd module is built against the in-kernel OFED, so if you are using 
MOFED you will need to rebuild the kernel modules themselves.  If you don't use 
IB at all you can ignore these depmod messages.

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud

--

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Installing 2.15 on rhel 8.5 fails

2022-06-22 Thread Andreas Dilger via lustre-discuss
On Jun 22, 2022, at 10:40, Thomas Roth via lustre-discuss
<lustre-discuss@lists.lustre.org> wrote:

my rhel8 system is actually an Alma Linux 8.5 installation; this is the first
time the compatibility with alleged rhel8.5 software fails...


The system is running kernel '4.18.0-348.2.1.el8_5'
This version string can also be found in the package names in
https://downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
- this is usually a good sign.

However, installation of kmod-lustre-2.15.0-1.el8 yields the well known 
"depmod: WARNINGs", like
> /lib/modules/4.18.0-348.2.1.el8_lustre.x86_64/extra/lustre/net/ko2iblnd.ko 
> needs unknown symbol __ib_alloc_pd


The kernel from 
downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
identifies itself as "CentOS" and does not want to boot - so that is not an option either.


Any hints how to proceed?

The ko2iblnd module is built against the in-kernel OFED, so if you are using 
MOFED you will need to rebuild the kernel modules themselves.  If you don't use 
IB at all you can ignore these depmod messages.
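For MOFED, that rebuild typically amounts to pointing configure at the MOFED
kernel sources, roughly like this (the ofa_kernel path is the usual MOFED
install location; adjust to your installation):

# rebuild the Lustre kernel modules against MOFED rather than the in-kernel OFED
./configure --with-o2ib=/usr/src/ofa_kernel/default
make rpms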

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Help with recovery of data

2022-06-22 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
A quick follow up.  I thought an lfsck would only clean up (i.e. remove 
orphaned MDT and OST objects) but it appears this might have a good shot at 
repairing the file system – specifically, recreating the MDT objects with the 
--create-mdtobj option.  We have started this command:

[root@hpfs-fsl-mds1 ~]# lctl lfsck_start -M scratch-MDT --dryrun on 
--create-mdtobj on

And after running for about an hour we are already seeing this from the query:

layout_repaired: 4645105

Can anyone confirm this will work for our situation – i.e. repair the metadata 
for the OST objects that were orphaned when our metadata got reverted?
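For reference, the status we are watching comes from queries roughly like these
(the MDT index is assumed to be 0000; adjust to the actual target):

# overall state of the layout LFSCK on the MDT (index assumed)
lctl lfsck_query -M scratch-MDT0000 -t layout
# detailed per-target counters, including layout_repaired
lctl get_param mdd.scratch-MDT0000.lfsck_layout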

From: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" 

Date: Tuesday, June 21, 2022 at 5:27 PM
To: "lustre-discuss@lists.lustre.org" 
Subject: Help with recovery of data

Hi everyone,

We ran into a problem with our lustre filesystem this weekend and could use a 
sanity check and/or advice on recovery.

We are running on CentOS 7.9, ZFS 2.1.4 and Lustre 2.14.  We are using ZFS
OSTs and an ldiskfs MDT (for better MDT performance).  For various reasons,
the ldiskfs is built on a zvol.  Every night we (intend to) back up the
metadata by snapshotting the zvol, mounting the MDT via ldiskfs, tarring up
the contents, then unmounting and removing the ZFS snapshot.  On Sunday (6/19 at about 4
pm), the metadata server crashed.  It came back up fine but users started 
reporting many missing files and directories today (6/21) – everything since 
about February 9th is gone.  After quite a bit of investigation, it looks like 
the MDT got rolled back to a snapshot of the metadata from February.

[root@hpfs-fsl-mds1 ~]# zfs list -t snap mds1-0/meta-scratch
NAME   USED  AVAIL REFER  MOUNTPOINT
mds1-0/meta-scratch@snap  52.3G  - 1.34T  -
[root@hpfs-fsl-mds1 ~]# zfs get all mds1-0/meta-scratch@snap | grep creation
mds1-0/meta-scratch@snap  creation  Thu Feb 10  3:35 2022  -
[root@hpfs-fsl-mds1 ~]#

We discovered that our MDT backups have been stalled since February: the
first step is to create mds1-0/meta-scratch@snap, and because that snapshot
already existed, the script was erroring out.  We have rebooted this MDS
several times (gracefully) since February with no issues but, apparently,
whatever happened in the server crash on Sunday caused the MDT to revert to
the February data.  So, in theory, the data on the OSTs is still there; we
are just missing the metadata due to the ZFS glitch.
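For reference, the intended nightly flow looks roughly like the sketch below
(the dataset, mount point, and tar file names follow our setup; the clone name
and the explicit check for a leftover snapshot are additions for illustration):

#!/bin/bash
# Nightly MDT backup sketch: snapshot the zvol, clone and mount it via ldiskfs,
# tar up the contents, then clean up.  A leftover snapshot aborts the run.
set -e
SNAP=mds1-0/meta-scratch@snap
CLONE=mds1-0/meta-scratch-backup          # clone name is illustrative
if zfs list -t snapshot "$SNAP" >/dev/null 2>&1; then
    echo "ERROR: $SNAP already exists - previous run did not clean up" >&2
    exit 1
fi
zfs snapshot "$SNAP"
zfs clone "$SNAP" "$CLONE"
mount -t ldiskfs -o ro "/dev/zvol/$CLONE" /mnt/mdt_backup
tar cf "/internal/ldiskfs_backups/mds1-0_meta-scratch-$(date +%Y_%m_%d).tar" \
    --xattrs --xattrs-include="trusted.*" --sparse -C /mnt/mdt_backup .
umount /mnt/mdt_backup
zfs destroy "$CLONE"
zfs destroy "$SNAP"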

So the first question - is anyone familiar with this failure mode of ZFS, or
with a way to recover from it?  I think it’s unlikely there are any direct ZFS
recovery options, but I wanted to ask.

Obviously, MDT backups would be our best recovery option but since this was all 
caused by the backup scripts stalling (and the subsequent rolling back to the 
last snapshot), our backups are the same age as the current data on the 
filesystem.

[root@hpfs-fsl-mds1 ~]# ls -lrt /internal/ldiskfs_backups/
total 629789909
-rw-r--r-- 1 root root 1657 Apr 30  2019 process.txt
-rw-r--r-- 1 root root 445317560320 Jan 25 15:36 
mds1-0_meta-scratch-2022_01_25.tar
-rw-r--r-- 1 root root 446230016000 Jan 26 15:31 
mds1-0_meta-scratch-2022_01_26.tar
-rw-r--r-- 1 root root 448093808640 Jan 27 15:46 
mds1-0_meta-scratch-2022_01_27.tar
-rw-r--r-- 1 root root 440368783360 Jan 28 16:56 
mds1-0_meta-scratch-2022_01_28.tar
-rw-r--r-- 1 root root 442342113280 Jan 29 14:45 
mds1-0_meta-scratch-2022_01_29.tar
-rw-r--r-- 1 root root 442922567680 Jan 30 15:03 
mds1-0_meta-scratch-2022_01_30.tar
-rw-r--r-- 1 root root 443076515840 Jan 31 15:17 
mds1-0_meta-scratch-2022_01_31.tar
-rw-r--r-- 1 root root 444589025280 Feb  1 15:11 
mds1-0_meta-scratch-2022_02_01.tar
-rw-r--r-- 1 root root 443741409280 Feb  2 15:17 
mds1-0_meta-scratch-2022_02_02.tar
-rw-r--r-- 1 root root 448209367040 Feb  3 15:24 
mds1-0_meta-scratch-2022_02_03.tar
-rw-r--r-- 1 root root 453777090560 Feb  4 15:55 
mds1-0_meta-scratch-2022_02_04.tar
-rw-r--r-- 1 root root 454211307520 Feb  5 14:37 
mds1-0_meta-scratch-2022_02_05.tar
-rw-r--r-- 1 root root 454619084800 Feb  6 14:30 
mds1-0_meta-scratch-2022_02_06.tar
-rw-r--r-- 1 root root 455459276800 Feb  7 15:26 
mds1-0_meta-scratch-2022_02_07.tar
-rw-r--r-- 1 root root 457470945280 Feb  8 15:07 
mds1-0_meta-scratch-2022_02_08.tar
-rw-r--r-- 1 root root 460592517120 Feb  9 15:21 
mds1-0_meta-scratch-2022_02_09.tar
-rw-r--r-- 1 root root 332377712640 Feb 10 12:04 
mds1-0_meta-scratch-2022_02_10.tar
[root@hpfs-fsl-mds1 ~]#


Yes, I know, we will put in some monitoring for this in the future...

Fortunately, we also have a robinhood system syncing with this file system.  
The sync is fairly up to date – the logs say a few days ago and I’ve used 
rbh-find to find some files that were created in the last few days.  So I think 
we have a shot at recovery.  We have this command running now to see what it 
will do:

rbh-diff 

[lustre-discuss] Installing 2.15 on rhel 8.5 fails

2022-06-22 Thread Thomas Roth via lustre-discuss

Hi all,

my rhel8 system is actually an Alma Linux 8.5 installation; this is the first time the compatibility with
alleged rhel8.5 software fails...



The system is running kernel '4.18.0-348.2.1.el8_5'
This version string can also be found in the package names in
https://downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64
- this is usually a good sign.

However, installation of kmod-lustre-2.15.0-1.el8 yields the well known "depmod: 
WARNINGs", like
> /lib/modules/4.18.0-348.2.1.el8_lustre.x86_64/extra/lustre/net/ko2iblnd.ko needs unknown symbol 
__ib_alloc_pd



The kernel from downloads.whamcloud.com/public/lustre/lustre-2.15.0/el8.5.2111/server/RPMS/x86_64 
identifies itself as "CentOS" and does not want to boot - so that is not an option either.



Any hints how to proceed?

Regards,
Thomas


--

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Building 2.15 on rhel8 fails

2022-06-22 Thread Thomas Roth via lustre-discuss

Hi all,

I tried to install 'lustre-ldiskfs-dkms' on a rhel8.5 system, running kernel
It fails: /var/lib/dkms/lustre-ldiskfs/2.15.0/build/make.log says "No targets specified and no makefile 
found", and in the corresponding '/var/lib/dkms/lustre-ldiskfs/2.15.0/buildconfig.log' indeed the 
first real error seems to be


> scripts/Makefile.build:45: 
/var/lib/dkms/lustre-ldiskfs/2.15.0/build/build//var/lib/dkms/lustre-ldiskfs/2.15.0/build/build/Makefile: 
No such file or directory
> make[1]: *** No rule to make target 
'/var/lib/dkms/lustre-ldiskfs/2.15.0/build/build//var/lib/dkms/lustre-ldiskfs/2.15.0/build/build/Makefile'. 
 Stop.



This directory tree is a bit large :-)
> '/var/lib/dkms/lustre-ldiskfs/2.15.0/build/build/Makefile'
does exist, though.

Where could this doubling of the path come from?


Btw, how do I re-run dkms in case I edit some stuff there?
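For reference, the generic way to re-run a dkms build after editing the sources
is roughly the following (module name and version as packaged above):

# drop the failed build, then rebuild and reinstall against the running kernel
dkms remove lustre-ldiskfs/2.15.0 --all
dkms build lustre-ldiskfs/2.15.0 -k $(uname -r)
dkms install lustre-ldiskfs/2.15.0 -k $(uname -r)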

Regards
Thomas




--

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org