Hi, thank you for the reply.
There is no firewall and no SELinux. Is there an ldiskfs module compatible with the Debian Xen kernel? Below is the requested output. (These machines are using Debian stable (Lenny).)

puma2:~# /sbin/iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
puma2:~#

puma2:~# dmesg | tail -n 20
[ 81.829983] NET: Registered protocol family 10
[ 81.829983] lo: Disabled Privacy Extensions
[ 81.829983] ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 84.839690] bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
[ 84.841266] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[ 85.603393] bnx2: eth0 NIC Copper Link is Down
[ 88.880264] bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
[ 92.755753] eth0: no IPv6 routers present
[ 94.016709] suspend: event channel 44
[ 95.150592] eth1: no IPv6 routers present
[ 96.144998] Bridge firewalling registered
[ 96.776796] Lustre: Added LNI 10.0....@tcp [8/256]
[ 96.776953] Lustre: Accept secure, port 988
[ 96.881533] Lustre: Lustre Client File System; http://www.lustre.org/
[140919.261749] LustreError: 20910:0:(obd_mount.c:1241:server_kernel_mount()) premount /dev/sda5:0x0 ldiskfs failed: -19, ldiskfs2 failed: -19. Is the ldiskfs module available?
[140919.261749] LustreError: 20910:0:(obd_mount.c:1560:server_fill_super()) Unable to mount device /dev/sda5: -19
[140919.261749] LustreError: 20910:0:(obd_mount.c:1951:lustre_fill_super()) Unable to mount (-19)
[190541.769779] LustreError: 27018:0:(obd_mount.c:1241:server_kernel_mount()) premount /dev/sda5:0x0 ldiskfs failed: -19, ldiskfs2 failed: -19. Is the ldiskfs module available?
[190541.769779] LustreError: 27018:0:(obd_mount.c:1560:server_fill_super()) Unable to mount device /dev/sda5: -19
[190541.769779] LustreError: 27018:0:(obd_mount.c:1951:lustre_fill_super()) Unable to mount (-19)
puma2:~#

puma30:~# dmesg | tail -n 10
[140767.809478] LustreError: 15c-8: mgc10.0....@tcp: The configuration from log 'puma2-client' failed (-108). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
[140767.809636] LustreError: 21147:0:(llite_lib.c:1061:ll_fill_super()) Unable to process log: -108
[140767.809955] Lustre: client ffff8803e7693c00 umount complete
[140767.809998] LustreError: 21147:0:(obd_mount.c:1951:lustre_fill_super()) Unable to mount (-108)
[189735.081445] Lustre: Request x16 sent from mgc10.0....@tcp to NID 10.0....@tcp 5s ago has timed out (limit 5s).
[189735.081531] LustreError: 27174:0:(client.c:716:ptlrpc_import_delay_req()) @@@ IMP_INVALID r...@ffff8803e5e5f600 x18/t0 o501->[email protected]@tcp_0:26/25 lens 136/248 e 0 to 100 dl 0 ref 1 fl Rpc:/0/0 rc 0/0
[189735.081669] LustreError: 15c-8: mgc10.0....@tcp: The configuration from log 'puma2-client' failed (-108). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
[189735.081794] LustreError: 27174:0:(llite_lib.c:1061:ll_fill_super()) Unable to process log: -108
[189735.082065] Lustre: client ffff8803e7694800 umount complete
[189735.082106] LustreError: 27174:0:(obd_mount.c:1951:lustre_fill_super()) Unable to mount (-108)
puma30:~#

puma2:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:22:19:55:28:59
          inet addr:10.0.2.2  Bcast:10.0.7.255  Mask:255.255.248.0
          inet6 addr: fe80::222:19ff:fe55:2859/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3570658 errors:0 dropped:0 overruns:0 frame:0
          TX packets:128400 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:229098082 (218.4 MiB)  TX bytes:8463859 (8.0 MiB)
          Interrupt:16 Memory:f8000000-f8012100

puma30:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:22:19:58:65:20
          inet addr:10.0.2.30  Bcast:10.0.7.255  Mask:255.255.248.0
          inet6 addr: fe80::222:19ff:fe58:6520/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3572109 errors:0 dropped:0 overruns:0 frame:0
          TX packets:128662 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:229242725 (218.6 MiB)  TX bytes:8539988 (8.1 MiB)
          Interrupt:16 Memory:f8000000-f8012100
puma30:~#

(This machine below is running Debian unstable.)

puma58:~# mkfs.lustre --mgs --mdt --fsname puma58 /dev/sda5

   Permanent disk data:
Target:     puma58-MDTffff
Index:      unassigned
Lustre FS:  puma58
Mount type: ldiskfs
Flags:      0x75
              (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mdt.group_upcall=/usr/sbin/l_getgroups

checking for existing Lustre data: not found
device size = 264437MB
2 6 26
formatting backing filesystem ldiskfs on /dev/sda5
        target name  puma58-MDTffff
        4k blocks    0
        options      -J size=400 -i 4096 -I 512 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L puma58-MDTffff -J size=400 -i 4096 -I 512 -q -O dir_index,uninit_groups -F /dev/sda5
mkfs.lustre: Unable to mount /dev/sda5: No such device
Is the ldiskfs module available?

mkfs.lustre FATAL: failed to write local files
mkfs.lustre: exiting with 19 (No such device)

puma58:~# dmesg | tail -n 25
[ 81.643875] bnx2: eth1 NIC Copper Link is Down
[ 84.827523] bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
[ 84.948040] eth1: no IPv6 routers present
[ 87.247905] eth0: no IPv6 routers present
[ 91.968742] Bridge firewalling registered
[ 91.965490] tmpbridge: Dropping NETIF_F_UFO since no NETIF_F_HW_CSUM feature.
[ 102.362278] no ownder
[ 102.362383] map irq failed
[ 102.530116] ADDRCONF(NETDEV_UP): peth1: link is not ready
[ 105.012554] bnx2: peth1 NIC Copper Link is Up, 1000 Mbps full duplex
[ 105.013251] ADDRCONF(NETDEV_CHANGE): peth1: link becomes ready
[ 105.574356] device peth1 entered promiscuous mode
[ 105.606375] eth1: port 1(peth1) entering learning state
[ 105.606615] eth1: topology change detected, propagating
[ 105.607468] eth1: port 1(peth1) entering forwarding state
[ 115.864955] peth1: no IPv6 routers present
[ 116.344377] eth1: no IPv6 routers present
[ 138.949463] suspend: event channel 46
[ 142.314790] Lustre: OBD class driver, http://www.lustre.org/
[ 142.314900] Lustre: Lustre Version: 1.6.7
[ 142.315815] Lustre: Build Version: 1.6.7-19691231210000-PRISTINE-.lib.modules.2.6.26-2-xen-amd64.build-2.6.26-2-xen-amd64
[ 142.446536] Lustre: Added LNI 10.0.8...@tcp [8/256]
[ 142.446536] Lustre: Accept secure, port 988
[ 142.446536] LustreError: 3721:0:(router_proc.c:1020:lnet_proc_init()) couldn't create proc entry sys/lnet/stats
[ 142.680752] Lustre: Lustre Client File System; http://www.lustre.org/
puma58:~#

puma58:~# cat /var/log/messages | tail -n 35
May 20 22:43:21 puma58 kernel: [ 36.549284] EXT3 FS on sda3, internal journal
May 20 22:43:21 puma58 kernel: [ 37.473899] loop: module loaded
May 20 22:43:21 puma58 kernel: [ 37.921432] kjournald starting. Commit interval 5 seconds
May 20 22:43:21 puma58 kernel: [ 37.925504] EXT3 FS on sda1, internal journal
May 20 22:43:21 puma58 kernel: [ 37.925920] EXT3-fs: mounted filesystem with ordered data mode.
May 20 22:43:21 puma58 kernel: [ 38.442582] no ownder
May 20 22:43:21 puma58 kernel: [ 38.442682] map irq failed
May 20 22:43:21 puma58 kernel: [ 39.028686] no ownder
May 20 22:43:21 puma58 kernel: [ 39.028785] map irq failed
May 20 22:43:21 puma58 kernel: [ 41.689422] bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
May 20 22:43:21 puma58 kernel: [ 42.340784] bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
May 20 22:43:21 puma58 kernel: [ 73.968015] NET: Registered protocol family 10
May 20 22:43:21 puma58 kernel: [ 73.968694] lo: Disabled Privacy Extensions
May 20 22:43:21 puma58 kernel: [ 73.969094] ADDRCONF(NETDEV_UP): eth0: link is not ready
May 20 22:43:21 puma58 kernel: [ 76.917142] bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
May 20 22:43:21 puma58 kernel: [ 76.917743] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 20 22:43:21 puma58 kernel: [ 84.827523] bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
May 20 22:43:21 puma58 rsyslogd: [origin software="rsyslogd" swVersion="3.22.0" x-pid="3147" x-info="http://www.rsyslog.com"] restart
May 20 22:43:28 puma58 kernel: [ 91.968742] Bridge firewalling registered
May 20 22:43:38 puma58 kernel: [ 102.362278] no ownder
May 20 22:43:38 puma58 kernel: [ 102.362383] map irq failed
May 20 22:43:38 puma58 kernel: [ 102.530116] ADDRCONF(NETDEV_UP): peth1: link is not ready
May 20 22:43:41 puma58 kernel: [ 105.012554] bnx2: peth1 NIC Copper Link is Up, 1000 Mbps full duplex
May 20 22:43:41 puma58 kernel: [ 105.013251] ADDRCONF(NETDEV_CHANGE): peth1: link becomes ready
May 20 22:43:41 puma58 kernel: [ 105.574356] device peth1 entered promiscuous mode
May 20 22:43:42 puma58 kernel: [ 105.606375] eth1: port 1(peth1) entering learning state
May 20 22:43:42 puma58 kernel: [ 105.606615] eth1: topology change detected, propagating
May 20 22:43:42 puma58 kernel: [ 105.607468] eth1: port 1(peth1) entering forwarding state
May 20 22:44:15 puma58 kernel: [ 138.949463] suspend: event channel 46
May 20 22:44:18 puma58 kernel: [ 142.314790] Lustre: OBD class driver, http://www.lustre.org/
May 20 22:44:18 puma58 kernel: [ 142.314900] Lustre: Lustre Version: 1.6.7
May 20 22:44:18 puma58 kernel: [ 142.315815] Lustre: Build Version: 1.6.7-19691231210000-PRISTINE-.lib.modules.2.6.26-2-xen-amd64.build-2.6.26-2-xen-amd64
May 20 22:44:18 puma58 kernel: [ 142.446536] Lustre: Added LNI 10.0.2...@tcp [8/256]
May 20 22:44:18 puma58 kernel: [ 142.446536] Lustre: Accept secure, port 988
May 20 22:44:18 puma58 kernel: [ 142.680752] Lustre: Lustre Client File System; http://www.lustre.org/

Thanks.

--
Ettore Enrico Delfino Ligorio
[email protected]
55-11-9145-6151

On Wed, May 20, 2009 at 5:02 PM, Oleg Drokin <[email protected]> wrote:
> Hello!
>
> On May 20, 2009, at 10:42 AM, Ettore Enrico Delfino Ligorio wrote:
>>
>> Anyone had success using Lustre (from debian package) with xen in
>> debian (lenny or unstable)? What i must to make this work?
>>
>
> Show us what is in the dmesg after the failed attempt.
> Do you happen to have SELinux enabled? (if you do, you have to disable
> it on the Lustre server nodes, SELinux would prevent the mds/ost mounting
> because they do not advertise xattr support).
>
> Bye,
> Oleg

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
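For reference, the -19 in the failures above is a kernel errno (ENODEV, "No such device"): mount and mkfs.lustre return it when the running kernel has no ldiskfs filesystem driver, which matches the Build Version line showing Lustre built for 2.6.26-2-xen-amd64 but no ldiskfs module loading. The -108 on puma30 is a consequence: the client cannot fetch its configuration because the MGS/MDT never mounted. A minimal sketch for checking whether any ldiskfs module exists for the running kernel (standard Debian module paths assumed; the printed messages are illustrative, not Lustre output):

```shell
#!/bin/sh
# Sketch: look for an ldiskfs kernel module matching the running kernel.
# errno 19 (ENODEV) from mount/mkfs.lustre usually means this search is empty.
kver=$(uname -r)
found=$(find "/lib/modules/$kver" -name 'ldiskfs*.ko*' 2>/dev/null)
if [ -n "$found" ]; then
    echo "ldiskfs module present for $kver:"
    printf '%s\n' "$found"
else
    echo "no ldiskfs module for $kver; server mounts will fail with -19 (ENODEV)"
fi
```

If the search comes up empty, the ldiskfs module has to be built against the exact Xen kernel in use (e.g. via the Lustre source/module packages) before mkfs.lustre or a server mount can succeed.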
