Re: [Lustre-discuss] Announce: Lustre 2.0.0 is available!
Hey,

> The team has spent extraordinary efforts over the last year preparing
> this release for GA. This release has had the most extensive
> pre-release testing of any previous Lustre release.

I've seen from the Lustre support matrix that only the RHEL5 kernel is
supported on the server side, and no longer any SLES kernel. Is it
planned to reintegrate a current SLES kernel on the server side, or will
Lustre focus on RHEL?

Greetings,
Patrick

--
Patrick Winnertz
Tel.: +49 (0)21 61 - 46 43-0
Fax: +49 (0)21 61 - 46 43-100
credativ GmbH, HRB Mönchengladbach 12080
Hohenzollernstr. 133, 41061 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
[Lustre-discuss] LBUG with 1.8.2 during rm
] ptlrpc_main+0x867/0x22e0 [ptlrpc]
[ 177.502362] [c0120493] default_wake_function+0x0/0x8
[ 177.502700] [e15bba10] ptlrpc_main+0x0/0x22e0 [ptlrpc]
[ 177.503003] [c01045e3] kernel_thread_helper+0x7/0x10
[ 177.503296] IRQ
[ 177.506256] LustreError: dumping log to /tmp/lustre-log.1272455822.2099

The lustre-log.1272455822.2099 file is attached to this mail. Does
anybody have a clue about what is going wrong here?

Greetings,
Patrick

lustre-log.1272455822.2099
Description: Binary data
[Lustre-discuss] fsfilt_ldiskfs.ko - undefined symbols
ldiskfs_mb_discard_inode_preallocations
[ 995.609895] LustreError: 2553:0:(fsfilt.c:124:fsfilt_get_ops()) Can't find fsfilt_ldiskfs interface
[ 995.610437] LustreError: 2553:0:(obd_config.c:372:class_setup()) setup MGS failed (-256)
[ 995.610835] LustreError: 2553:0:(obd_mount.c:481:lustre_start_simple()) MGS setup error -256
[ 995.611296] LustreError: 15e-a: Failed to start MGS 'MGS' (-256). Is the 'mgs' module loaded?
[ 995.613187] LDISKFS-fs: mballoc: 0 blocks 0 reqs (0 success)
[ 995.613476] LDISKFS-fs: mballoc: 0 extents scanned, 0 goal hits, 0 2^N hits, 0 breaks, 0 lost
[ 995.613886] LDISKFS-fs: mballoc: 0 generated and it took 0
[ 995.614161] LDISKFS-fs: mballoc: 0 preallocated, 0 discarded
[ 995.614822] Lustre: server umount MGS complete
[ 995.615060] LustreError: 2553:0:(obd_mount.c:2042:lustre_fill_super()) Unable to mount (-256)

During modpost of the modules, we see similar warnings:

Building modules, stage 2.
MODPOST 23 modules
WARNING: ldiskfs_mb_discard_inode_preallocations [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_map_inode_page [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_walk_space [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_bread [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_search_left [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_mb_new_blocks [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: __ldiskfs_journal_get_write_access [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_insert_extent [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_read_inode_bitmap [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_xattr_set_handle [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_calc_credits_for_insert [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_xattr_get [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_mark_inode_dirty [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_search_right [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_itable_unused_count [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_ext_store_pblock [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_force_commit [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: __ldiskfs_journal_dirty_metadata [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: __ldiskfs_journal_stop [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_journal_start_sb [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ext_pblock [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_get_group_desc [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_free_blocks [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!
WARNING: ldiskfs_iget [/usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko] undefined!

As this is a native SLES11 kernel without further modification (except
the Lustre patchset, of course), I really don't understand what is going
wrong here. Does anybody else have a clue?

Greetings,
Patrick
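[Editorial note, not part of the original thread: a quick way to confirm
which symbols the module leaves unresolved, and whether the loaded
ldiskfs actually exports them, is sketched below. The commands are
illustrative; the .ko path is the one from the warnings above, and the
sample symbol is one of the missing ones.]

```shell
# List the undefined symbols recorded in the module object
# (nm -u prints only undefined symbols; requires binutils).
nm -u /usr/src/modules/lustre/lustre/lvfs/fsfilt_ldiskfs.ko

# Load ldiskfs, then check whether the running kernel actually exports
# one of the missing symbols. An empty grep result suggests the
# installed ldiskfs was built from a different tree than the one
# fsfilt_ldiskfs was compiled against (a version mismatch).
modprobe ldiskfs
grep ldiskfs_mb_new_blocks /proc/kallsyms
```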
Re: [Lustre-discuss] b1_8 patchless client on 2.6.30
Hey,

> Up to 2.6.28 vanilla kernels, mounting lustre on a patchless client
> with b1_8 is fine. Why do I see errors about 'unknown type: mgc' on
> the client side with 2.6.30?

If I remember correctly, one module is not loaded automatically; try
loading the mgc module by hand first (modprobe mgc), then try again.

Greetings,
Winnie
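[Editorial note: a minimal sketch of the workaround suggested above. The
filesystem name, MGS address, and mount point are placeholders, not from
the thread.]

```shell
# Load the client modules, then make sure mgc is present before
# mounting, since on 2.6.30 patchless clients it may not be pulled in
# automatically (the 'unknown type: mgc' error above).
modprobe lustre
lsmod | grep -qw mgc || modprobe mgc

# Retry the client mount (mgsnode, fsname, and /mnt/lustre are examples).
mount -t lustre mgsnode@tcp0:/fsname /mnt/lustre
```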
[Lustre-discuss] Assertion failure in ldiskfs_get_blocks_handle
Hey!

After reinstalling a Lustre test cluster with 1.8.1, one OST crashed
with an assertion failure in inode.c (running a 2.6.18 kernel on amd64).
Is this a known issue, or has anybody else met this error before?

Greetings,
Winnie

Here the error message:

Lustre: spfs-OST0001: received MDS connection from 192.168@tcp
Assertion failure in ldiskfs_get_blocks_handle() at /usr/src/modules/lustre/ldiskfs/ldiskfs/inode.c:806: !(LDISKFS_I(inode)->i_flags & LDISKFS_EXTENTS_FL)
--- [cut here ] - [please bite here ] -
Kernel BUG at /usr/src/modules/lustre/ldiskfs/ldiskfs/inode.c:806
invalid opcode: [1] SMP
CPU 0
Modules linked in: obdfilter fsfilt_ldiskfs ost mgc ldiskfs crc16 lustre lov mdc lquota osc ksocklnd ptlrpc obdclass lnet lvfs libcfs ipv6 button ac battery dm_snapshot dm_mirror dm_mod sbp2 loop evdev sg serio_raw pcspkr psmouse eth1394 sr_mod cdrom ext3 jbd mbcache sd_mod sata_nv libata usb_storage scsi_mod ohci1394 e1000 ieee1394 generic amd74xx ide_core ehci_hcd ohci_hcd thermal processor fan
Pid: 2343, comm: ll_ost_io_01 Tainted: GF 2.6.18+lustre1.8.1+0.credativ.etch.1 #1
RIP: 0010:[885756d0] [885756d0] :ldiskfs:ldiskfs_get_blocks_handle+0x80/0xd10
RSP: 0018:81003a345490 EFLAGS: 00010286
RAX: 00a0 RBX: RCX: 80450868 RDX: 80450868
RSI: 0086 RDI: 80450860 RBP: 81003b2261d8 R08: 80450868
R09: 0020 R10: 0046 R11: R12: 81003a3456d0
R13: R14: 0001 R15: 81003b2261d8
FS: 2ba142b446d0() GS:80522000() knlGS:
CS: 0010 DS: ES: CR0: 8005003b
CR2: 2b598fc6a360 CR3: 37e66000 CR4: 06e0
Process ll_ost_io_01 (pid: 2343, threadinfo 81003a344000, task 81003a011830)
Stack: 8100c000 0086 89340001 81003d6cfed0
0001 0001 81003a3456d0 0001
81003c24c040 81003b226100
Call Trace:
[8022c31f] __wake_up+0x38/0x4f
[8840b0c0] :ksocklnd:ksocknal_queue_tx_locked+0x460/0x4a0
[8840b9cf] :ksocklnd:ksocknal_find_conn_locked+0xcf/0x1f0
[8840bfec] :ksocklnd:ksocknal_launch_packet+0x2ac/0x3a0
[8840db25] :ksocklnd:ksocknal_alloc_tx+0x205/0x2b0
[8857675e] :ldiskfs:ldiskfs_get_block+0xde/0x120
[88574710] :ldiskfs:ldiskfs_bmap+0x0/0xb0
[8023110e] generic_block_bmap+0x37/0x41
[88574710] :ldiskfs:ldiskfs_bmap+0x0/0xb0
[8860954d] :obdfilter:filter_commitrw_write+0x37d/0x2590
[80256f2e] cache_alloc_refill+0xde/0x1da
[8025c11e] thread_return+0x0/0xe7
[8025ca8a] schedule_timeout+0x92/0xad
[885c3968] :ost:ost_brw_write+0x1b88/0x2310
[8027c6b0] default_wake_function+0x0/0xe
[88388f28] :ptlrpc:lustre_msg_check_version_v2+0x8/0x20
[885c6f53] :ost:ost_handle+0x2e63/0x5a00
[802aa138] zone_statistics+0x3e/0x6d
[8020de5c] __alloc_pages+0x5c/0x2a9
[882e4838]
[Lustre-discuss] building lustre on debian unstable
Hello,

For several days I've had huge problems building Lustre on unstable; the
cause seems to be something related to the auto* tooling. configure
crashes with this error message:

checking whether to build kernel modules... no (linux-gnu)
../../configure: line 5542: syntax error near unexpected token `else'
../../configure: line 5542: `else'
make: *** [configure-stamp] Error 2

I used automake 1.10 and autoconf 2.64. On an older system (e.g. lenny
or etch) it builds without any problems (the configure script is
generated correctly). Has anybody else hit this problem?

Greetings,
Patrick
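[Editorial note: one way to narrow such a failure down is to pin the
autotools versions when regenerating configure. The version numbers and
the autogen.sh script name below are illustrative, roughly matching
older Debian releases, and are not from the thread.]

```shell
# Regenerate the build system with explicitly chosen autotools versions.
# Debian ships versioned binaries (e.g. automake-1.9, autoconf2.59)
# alongside the default ones; exact package names vary by release.
export AUTOCONF=autoconf2.59
export AUTOMAKE=automake-1.9
export ACLOCAL=aclocal-1.9
sh autogen.sh          # regeneration script in the Lustre source tree

# Parse-check the generated configure script without executing it; a
# stray 'else' like the one at line 5542 shows up here immediately.
bash -n configure && echo "configure parses cleanly"
```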
Re: [Lustre-discuss] building lustre on debian unstable
Hey,

> Can you please submit a bug with the above, and attach the generated
> configure and config.log files. Also posting the excerpt of the
> configure file around line 5542 here would possibly allow someone else
> to diagnose what is going wrong.

Done, see #20383. I'll add the requested files tomorrow morning when I'm
back in the office.

Greetings,
Patrick
Re: [Lustre-discuss] LustreErrors on mgs/mdt when accessing files
Hey,

> looks like something is wrong with the permissions:
>
> $ wget http://www.credativ.com/~pwi/lustre-debug-client
> --2009-03-17 07:51:24-- http://www.credativ.com/~pwi/lustre-debug-client
> Resolving www.credativ.com... 88.198.32.163
> Connecting to www.credativ.com|88.198.32.163|:80... connected.
> HTTP request sent, awaiting response... 302 Found
> Location: http://www.credativ.com/404.html [following]

Sorry for this; it is fixed now.

Greetings,
Patrick

--
Patrick Winnertz
Tel.: +49 (0) 2161 / 4643 - 0
credativ GmbH, HRB Mönchengladbach 12080
Hohenzollernstr. 133, 41061 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz
Re: [Lustre-discuss] LustreErrors on mgs/mdt when accessing files
Hey,

> Is this error replicated? can you replicate this with start debug
> daemon (lctl debug_daemon ) and set lnet.debug=-1 /
> lnet.subsystem_debug=-1 ?

As I was not sure where you wanted me to set this, I've uploaded two
debug logs (one from the client and one from the mgs/mdt server):

http://www.credativ.com/~pwi/lustre-debug-client  # from the client
http://www.credativ.com/~pwi/lustre-debug-mgs     # from the server

Please be patient when downloading the client logfile, it's quite huge
(~250 MB); the server logfile is complete.

> I have one similar report before - but not have debug logs for
> investigate.

I hope this helps to sort this out. If you need more information, please
ask.

Greetings,
Patrick
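[Editorial note: the debug-collection steps suggested in the quoted
question can be sketched as below. The trace file path and size are
placeholders, and on 1.6/1.8-era systems the masks could also be set via
/proc, so treat every line as illustrative.]

```shell
# Enable all debug masks and subsystems, as suggested
# (lnet.debug=-1 / lnet.subsystem_debug=-1).
sysctl -w lnet.debug=-1
sysctl -w lnet.subsystem_debug=-1

# Start the in-kernel debug daemon, which streams the trace buffer to a
# file while the problem is reproduced (size argument in MB).
lctl debug_daemon start /tmp/lustre-debug 1024

# ... reproduce the error on the affected node ...

lctl debug_daemon stop
# Decode the binary trace into readable text for the list/bug report.
lctl debug_file /tmp/lustre-debug /tmp/lustre-debug.txt
```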
Re: [Lustre-discuss] lustre patches for e2fsprogs version 1.41.0?
Hello,

> If you are very interested to start working on this, then you can get
> the lustre-e2fsprogs CVS module (put it in a directory called
> "patches" in the e2fsprogs tree) and then run "quilt push -a" to try
> and apply the patches, fixing each one as you go.

Where is this module located? I didn't find any hint about it in the
Lustre wiki, and therefore I have no idea where to check it out from.

> Any contribution is appreciated, even if you don't finish it, since it
> saves the developer from tackling the tricky parts of the integration.

I'll try this next week.

Greetings,
Patrick Winnertz
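[Editorial note: the quoted workflow, sketched as a shell session. The
CVS location of the lustre-e2fsprogs module was still unknown at this
point in the thread, so that step is deliberately left as a placeholder;
the source directory name is an example.]

```shell
cd e2fsprogs-1.41.0        # unpacked e2fsprogs source tree (example)

# Check out the lustre-e2fsprogs CVS module into ./patches
# (repository location unknown in this thread -- placeholder step):
#   cvs checkout ...  &&  mv lustre-e2fsprogs patches

quilt push -a              # apply patches until one fails to apply
# For each failing patch: fix the rejects in the source tree, then
quilt refresh              # record the corrected patch in place
quilt push -a              # and continue with the remaining patches
```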
[Lustre-discuss] lustre patches for e2fsprogs version 1.41.0?
Hello,

Are there any plans to support the latest e2fsprogs version with the
Lustre patches? If yes, is there a bug in the Bugzilla where the status
can be viewed?

Greetings,
Patrick Winnertz
Re: [Lustre-discuss] lustre + debian
Hello,

On Thursday 14 August 2008 18:57:52 Robert LeBlanc wrote:
> It's easy and then it's not. The Debian way is to download the Lustre
> kernel packages and the Lustre binary packages. You build a kernel
> using make-kpkg with the --with-patches Lustre option. We found a bug
> when trying to use the --append_to_kernel option, so unless that got
> fixed, don't use it.

Thanks for this information, I'll have a look at it.

> [...] them up. You are now ready to use Lustre Debian style. If
> deploying this on a cluster, just install your ready-made .debs for
> the kernel and modules. None of this was explained in the README in
> /usr/share/doc/Lustre so we had to figure it out ourselves. Hopefully
> the Debian folks have fixed that.

Could you please have a look at our README.Debian? I've attached the
current version of this README.

> Do not use a vanilla kernel from kernel.org, only use the kernel
> source from the Debian repository as the Lustre kernel packages are
> tested against them.

Vanilla kernels should be supported. We've only packaged the upstream
patches and modified them a bit for the Debian kernels, so vanilla
kernels should work as usual.

Greetings,
Patrick Winnertz
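[Editorial note: the Debian-style build Robert describes can be sketched
roughly as below. Package names, the kernel version, and the exact
make-kpkg patch option vary between releases (the thread itself only
names --with-patches), so treat every line as an example and check
make-kpkg(1) on your system.]

```shell
# Install the packaged Lustre sources and a Debian kernel source tree
# (versions are examples only).
apt-get install kernel-package lustre-source linux-source-2.6.26

cd /usr/src
tar xjf linux-source-2.6.26.tar.bz2
cd linux-source-2.6.26

# Build .deb kernel packages with the Lustre patches applied; the patch
# option spelling differs between kernel-package versions.
make-kpkg --added-patches=lustre --initrd kernel_image kernel_headers

# Install the resulting ready-made .debs on every cluster node.
dpkg -i ../linux-image-*.deb ../linux-headers-*.deb
```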
[Lustre-discuss] kernel panic with 1.6.5rc2 on mds
Hello,

As I wrote in #11742 [1], I experienced a kernel panic on the MDS after
doing heavy I/O on the 1.6.5rc2 cluster. Since nobody has answered this
bug so far (and I think in other cases the Lustre team is _really_ fast,
thanks for that :)), I fear that it was not noticed by anybody. This
kernel panic seems somehow related to the bug mentioned above (#11742),
as this bug number is mentioned in the dmesg output when it died.

Furthermore, right before it started to fail, there were several
messages like the following:

LustreError: 3342:0:(osc_request.c:678:osc_announce_cached()) dirty 81108992 dirty_max 33554432

This behaviour is described in #13344 [2]. Any ideas?

Greetings,
Patrick Winnertz

[1]: https://bugzilla.lustre.org/show_bug.cgi?id=11742
[2]: https://bugzilla.lustre.org/show_bug.cgi?id=13344
[Lustre-discuss] LBUG with 1.6.5~rc2 and lfs quotacheck
Hey,

I'm having some trouble with quota on 1.6.5 (yes, I know this is RC
software :)). Since I didn't find any bug report about this issue (and
this version is not yet released), I'm unsure what to do. Here is the
problem description:

After reformatting the filesystem and mounting it, I tried to execute
this command:

debian:/mnt# lfs quotacheck -ug /mnt/lustre_client/

Right after hitting enter I got this:

Message from [EMAIL PROTECTED] at Tue May 13 13:05:08 2008 ...
debian kernel: LustreError: 2921:0:(fsfilt-ldiskfs.c:2066:fsfilt_ldiskfs_quotainfo()) LBUG

Furthermore, a temporary log file was created; I've attached it to this
mail. Does anybody know what's going wrong here?

Greetings,
Winnie

lustre-log.1210676708.2921
Description: Binary data