Hi all,

Here is the situation:
I have two nodes, mds1 and mds2 (10.0.0.22 and 10.0.0.23), that I want to use as a failover MGS pair and as active/active MDTs on ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has two SAS ports, one connected to a SAS HBA on each node), and I am running Lustre 2.4 on CentOS 6.4 x86_64.

I have created 3 zfs pools:
1. mgs:
# zpool create -f -o ashift=12 -O canmount=off lustre-mgs mirror /dev/disk/by-id/wwn-0x50000c0f012306fc /dev/disk/by-id/wwn-0x50000c0f01233aec
# mkfs.lustre --mgs --servicenode=mds1@tcp0 --servicenode=mds2@tcp0 --param sys.timeout=5000 --backfstype=zfs lustre-mgs/mgs

   Permanent disk data:
Target:     MGS
Index:      unassigned
Lustre FS: 
Mount type: zfs
Flags:      0x1064
              (MGS first_time update no_primnode )
Persistent mount opts:
Parameters: failover.node=10.0.0.22@tcp failover.node=10.0.0.23@tcp sys.timeout=5000

2. mdt0:
# zpool create -f -o ashift=12 -O canmount=off lustre-mdt0 mirror /dev/disk/by-id/wwn-0x50000c0f01d07a34 /dev/disk/by-id/wwn-0x50000c0f01d110c8
# mkfs.lustre --mdt --fsname=fs0 --servicenode=mds1@tcp0 --servicenode=mds2@tcp0 --param sys.timeout=5000 --backfstype=zfs --mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0  lustre-mdt0/mdt0
warning: lustre-mdt0/mdt0: for Lustre 2.4 and later, the target index must be specified with --index

   Permanent disk data:
Target:     fs0:MDT0000
Index:      0
Lustre FS:  fs0
Mount type: zfs
Flags:      0x1061
              (MDT first_time update no_primnode )
Persistent mount opts:
Parameters: failover.node=10.0.0.22@tcp failover.node=10.0.0.23@tcp sys.timeout=5000 mgsnode=10.0.0.22@tcp mgsnode=10.0.0.23@tcp

checking for existing Lustre data: not found
mkfs_cmd = zfs create -o canmount=off -o xattr=sa lustre-mdt0/mdt0
Writing lustre-mdt0/mdt0 properties
  lustre:version=1
  lustre:flags=4193
  lustre:index=0
  lustre:fsname=fs0
  lustre:svname=fs0:MDT0000
  lustre:failover.node=10.0.0.22@tcp
  lustre:failover.node=10.0.0.23@tcp
  lustre:sys.timeout=5000
  lustre:mgsnode=10.0.0.22@tcp
  lustre:mgsnode=10.0.0.23@tcp
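
Regarding the --index warning above: the index can be given explicitly, which is what I did for mdt1 below. For reference, a sketch of the mdt0 command with the index added (this is not what I actually ran; I am assuming no other change is needed):

```shell
# Same mkfs.lustre invocation, but with --index=0 given explicitly
# to silence the "target index must be specified" warning.
mkfs.lustre --mdt --fsname=fs0 --index=0 \
    --servicenode=mds1@tcp0 --servicenode=mds2@tcp0 \
    --param sys.timeout=5000 --backfstype=zfs \
    --mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0 lustre-mdt0/mdt0
```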

3. mdt1:
# zpool create -f -o ashift=12 -O canmount=off lustre-mdt1 mirror /dev/disk/by-id/wwn-0x50000c0f01d113e0 /dev/disk/by-id/wwn-0x50000c0f01d116fc
# mkfs.lustre --mdt --fsname=fs0 --servicenode=mds2@tcp0 --servicenode=mds1@tcp0 --param sys.timeout=5000 --backfstype=zfs --index=1 --mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0  lustre-mdt1/mdt1

   Permanent disk data:
Target:     fs0:MDT0001
Index:      1
Lustre FS:  fs0
Mount type: zfs
Flags:      0x1061
              (MDT first_time update no_primnode )
Persistent mount opts:
Parameters: failover.node=10.0.0.23@tcp failover.node=10.0.0.22@tcp sys.timeout=5000 mgsnode=10.0.0.22@tcp mgsnode=10.0.0.23@tcp

checking for existing Lustre data: not found
mkfs_cmd = zfs create -o canmount=off -o xattr=sa lustre-mdt1/mdt1
Writing lustre-mdt1/mdt1 properties
  lustre:version=1
  lustre:flags=4193
  lustre:index=1
  lustre:fsname=fs0
  lustre:svname=fs0:MDT0001
  lustre:failover.node=10.0.0.23@tcp
  lustre:failover.node=10.0.0.22@tcp
  lustre:sys.timeout=5000
  lustre:mgsnode=10.0.0.22@tcp
  lustre:mgsnode=10.0.0.23@tcp

A few basic sanity checks:
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
lustre-mdt0        824K  3.57T   136K  /lustre-mdt0
lustre-mdt0/mdt0   136K  3.57T   136K  /lustre-mdt0/mdt0
lustre-mdt1        716K  3.57T   136K  /lustre-mdt1
lustre-mdt1/mdt1   136K  3.57T   136K  /lustre-mdt1/mdt1
lustre-mgs        4.78M  3.57T   136K  /lustre-mgs
lustre-mgs/mgs    4.18M  3.57T  4.18M  /lustre-mgs/mgs

# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
lustre-mdt0  3.62T  1.00M  3.62T     0%  1.00x  ONLINE  -
lustre-mdt1  3.62T   800K  3.62T     0%  1.00x  ONLINE  -
lustre-mgs   3.62T  4.86M  3.62T     0%  1.00x  ONLINE  -

# zpool status
  pool: lustre-mdt0
 state: ONLINE
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    lustre-mdt0                 ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50000c0f01d07a34  ONLINE       0     0     0
        wwn-0x50000c0f01d110c8  ONLINE       0     0     0

errors: No known data errors

  pool: lustre-mdt1
 state: ONLINE
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    lustre-mdt1                 ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50000c0f01d113e0  ONLINE       0     0     0
        wwn-0x50000c0f01d116fc  ONLINE       0     0     0

errors: No known data errors

  pool: lustre-mgs
 state: ONLINE
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    lustre-mgs                  ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50000c0f012306fc  ONLINE       0     0     0
        wwn-0x50000c0f01233aec  ONLINE       0     0     0

errors: No known data errors
# zfs get lustre:svname lustre-mgs/mgs
NAME            PROPERTY       VALUE          SOURCE
lustre-mgs/mgs  lustre:svname  MGS            local
# zfs get lustre:svname lustre-mdt0/mdt0
NAME              PROPERTY       VALUE          SOURCE
lustre-mdt0/mdt0  lustre:svname  fs0:MDT0000    local
# zfs get lustre:svname lustre-mdt1/mdt1
NAME              PROPERTY       VALUE          SOURCE
lustre-mdt1/mdt1  lustre:svname  fs0:MDT0001    local

So far, so good.
My /etc/ldev.conf:
mds1 mds2 MGS zfs:lustre-mgs/mgs
mds1 mds2 fs0-MDT0000 zfs:lustre-mdt0/mdt0
mds2 mds1 fs0-MDT0001 zfs:lustre-mdt1/mdt1
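
In case my syntax is wrong, here is my understanding of the ldev.conf column layout (from ldev.conf(5); corrections welcome):

```shell
# Column layout:
#   <local host> <foreign host> <label> <device>
# "service lustre start local"   mounts rows where <local host> is this node
# "service lustre start foreign" mounts rows where <foreign host> is this node
#   (which is why the MGS lands under /mnt/lustre/foreign/MGS on mds2)
```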

My /etc/modprobe.d/lustre.conf:
# options lnet networks=tcp0(em1)
options lnet ip2nets="tcp0 10.0.0.[22,23]; tcp0 10.0.0.*;"
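
Basic LNet checks I run on each node before mounting anything (assuming the lnet module is loaded):

```shell
modprobe lnet && lctl network up   # bring LNet up
lctl list_nids                     # should show 10.0.0.22@tcp (resp. 10.0.0.23@tcp)
lctl ping 10.0.0.23@tcp            # from mds1, verify the peer NID answers
```

Both nodes pass these.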
-----------------------------------------------------------------------------

Now, when starting the services, I get strange errors:
# service lustre start local
Mounting lustre-mgs/mgs on /mnt/lustre/local/MGS
Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
mount.lustre: mount lustre-mdt0/mdt0 at /mnt/lustre/local/fs0-MDT0000 failed: Input/output error
Is the MGS running?
# service lustre status local
running

Attached: lctl-dk.local01
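
The attached logs were captured with the usual lctl debug-kernel sequence, roughly:

```shell
lctl clear                      # drop old debug buffer contents
service lustre start local      # reproduce the failure
lctl dk > lctl-dk.local01       # dump the kernel debug log
```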

If I run the same command again, I get a different error:

# service lustre start local
Mounting lustre-mgs/mgs on /mnt/lustre/local/MGS
mount.lustre: according to /etc/mtab lustre-mgs/mgs is already mounted on /mnt/lustre/local/MGS
Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
mount.lustre: mount lustre-mdt0/mdt0 at /mnt/lustre/local/fs0-MDT0000 failed: File exists

Attached: lctl-dk.local02
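
Between attempts I reset state roughly like this, on the assumption that the second run is tripping over the half-mounted targets from the first (and that nothing else is using Lustre on the node):

```shell
service lustre stop local                          # unmount whatever did mount
umount /mnt/lustre/local/fs0-MDT0000 2>/dev/null   # in case the init script missed it
lustre_rmmod                                       # unload Lustre/LNet modules to clear stale state
```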

What am I doing wrong?
I have also tested with LNet self-test, using the following script:
# cat lnet-selftest.sh
#!/bin/bash
export LST_SESSION=$$
lst new_session read/write
lst add_group servers 10.0.0.[22,23]@tcp
lst add_group readers 10.0.0.[22,23]@tcp
lst add_group writers 10.0.0.[22,23]@tcp
lst add_batch bulk_rw
lst add_test --batch bulk_rw --from readers --to servers \
brw read check=simple size=1M
lst add_test --batch bulk_rw --from writers --to servers \
brw write check=full size=4K
# start running
lst run bulk_rw
# display server stats for 30 seconds
lst stat servers & sleep 30; kill $!
# tear down
lst end_session

and it seemed OK:
# modprobe lnet-selftest && ssh mds2 modprobe lnet-selftest
# ./lnet-selftest.sh
SESSION: read/write FEATURES: 0 TIMEOUT: 300 FORCE: No
10.0.0.[22,23]@tcp are added to session
10.0.0.[22,23]@tcp are added to session
10.0.0.[22,23]@tcp are added to session
Test was added successfully
Test was added successfully
bulk_rw is running now
[LNet Rates of servers]
[R] Avg: 19486    RPC/s Min: 19234    RPC/s Max: 19739    RPC/s
[W] Avg: 19486    RPC/s Min: 19234    RPC/s Max: 19738    RPC/s
[LNet Bandwidth of servers]
[R] Avg: 1737.60  MB/s  Min: 1680.70  MB/s  Max: 1794.51  MB/s
[W] Avg: 1737.60  MB/s  Min: 1680.70  MB/s  Max: 1794.51  MB/s
[LNet Rates of servers]
[R] Avg: 19510    RPC/s Min: 19182    RPC/s Max: 19838    RPC/s
[W] Avg: 19510    RPC/s Min: 19182    RPC/s Max: 19838    RPC/s
[LNet Bandwidth of servers]
[R] Avg: 1741.67  MB/s  Min: 1679.51  MB/s  Max: 1803.83  MB/s
[W] Avg: 1741.67  MB/s  Min: 1679.51  MB/s  Max: 1803.83  MB/s
[LNet Rates of servers]
[R] Avg: 19458    RPC/s Min: 19237    RPC/s Max: 19679    RPC/s
[W] Avg: 19458    RPC/s Min: 19237    RPC/s Max: 19679    RPC/s
[LNet Bandwidth of servers]
[R] Avg: 1738.87  MB/s  Min: 1687.28  MB/s  Max: 1790.45  MB/s
[W] Avg: 1738.87  MB/s  Min: 1687.28  MB/s  Max: 1790.45  MB/s
[LNet Rates of servers]
[R] Avg: 19587    RPC/s Min: 19293    RPC/s Max: 19880    RPC/s
[W] Avg: 19586    RPC/s Min: 19293    RPC/s Max: 19880    RPC/s
[LNet Bandwidth of servers]
[R] Avg: 1752.62  MB/s  Min: 1695.38  MB/s  Max: 1809.85  MB/s
[W] Avg: 1752.62  MB/s  Min: 1695.38  MB/s  Max: 1809.85  MB/s
[LNet Rates of servers]
[R] Avg: 19528    RPC/s Min: 19232    RPC/s Max: 19823    RPC/s
[W] Avg: 19528    RPC/s Min: 19232    RPC/s Max: 19824    RPC/s
[LNet Bandwidth of servers]
[R] Avg: 1741.63  MB/s  Min: 1682.29  MB/s  Max: 1800.98  MB/s
[W] Avg: 1741.63  MB/s  Min: 1682.29  MB/s  Max: 1800.98  MB/s
session is ended
./lnet-selftest.sh: line 17:  8835 Terminated              lst stat servers


Addendum: I can start the MGS service on the second node and then start the mdt0 service on the local node:
# ssh mds2 service lustre start MGS
Mounting lustre-mgs/mgs on /mnt/lustre/foreign/MGS
# service lustre start fs0-MDT0000
Mounting lustre-mdt0/mdt0 on /mnt/lustre/local/fs0-MDT0000
# service lustre status
unhealthy
# service lustre status local
running
-----------------------------------------------------------------------------
Debug log excerpt:

00000020:01200004:13.0F:1387286251.596922:0:8291:0:(obd_mount.c:1203:lustre_fill_super())
 VFS Op: sb ffff88200a3fa000
00000020:01000004:13.0:1387286251.596933:0:8291:0:(obd_mount.c:789:lmd_print()) 
  mount data:
00000020:01000004:13.0:1387286251.596934:0:8291:0:(obd_mount.c:792:lmd_print()) 
device:  lustre-mgs/mgs
00000020:01000004:13.0:1387286251.596935:0:8291:0:(obd_mount.c:793:lmd_print()) 
flags:   a00
00000020:01000004:13.0:1387286251.596936:0:8291:0:(obd_mount.c:1248:lustre_fill_super())
 Mounting server from lustre-mgs/mgs
00000020:01000004:13.0:1387286251.596938:0:8291:0:(obd_mount_server.c:1604:osd_start())
 Attempting to start MGS, type=osd-zfs, lsifl=201004, mountfl=0
00000020:01000004:13.0:1387286251.596985:0:8291:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MGS-osd (typ=osd-zfs)
00000020:00000080:13.0:1387286251.596987:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286251.596989:0:8291:0:(obd_config.c:366:class_attach())
 attach type osd-zfs name: MGS-osd uuid: MGS-osd_UUID
00000020:00000080:13.0:1387286251.597042:0:8291:0:(genops.c:357:class_newdev()) 
Adding new device MGS-osd (ffff88200e5e02f8)
00000020:00000080:13.0:1387286251.597044:0:8291:0:(obd_config.c:442:class_attach())
 OBD: dev 0 attached type osd-zfs with refcount 1
00000020:00000080:13.0:1387286251.597046:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00000020:00000080:13.0:1387286251.851292:0:8291:0:(obd_config.c:550:class_setup())
 finished setup of obd MGS-osd (uuid MGS-osd_UUID)
00080000:01000000:13.0:1387286251.851298:0:8291:0:(osd_handler.c:788:osd_obd_connect())
 connect #0
00000020:00000080:13.0:1387286251.851303:0:8291:0:(genops.c:1147:class_connect())
 connect: client MGS-osd_UUID, cookie 0x82b123e5e3c1eaf8
00000020:01000004:13.0:1387286251.851305:0:8291:0:(obd_mount_server.c:1671:server_fill_super())
 Found service MGS on device lustre-mgs/mgs
00000020:01000000:13.0:1387286251.851351:0:8291:0:(obd_mount_server.c:253:server_start_mgs())
 Start MGS service MGS
00000020:01000004:13.0:1387286251.851354:0:8291:0:(obd_mount_server.c:120:server_register_mount())
 reg_mnt (null) from MGS
00000020:01000004:13.0:1387286251.851356:0:8291:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MGS (typ=mgs)
00000020:00000080:13.0:1387286251.851358:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286251.851360:0:8291:0:(obd_config.c:366:class_attach())
 attach type mgs name: MGS uuid: MGS
00000020:00000080:13.0:1387286251.851411:0:8291:0:(genops.c:357:class_newdev()) 
Adding new device MGS (ffff88201e100338)
00000020:00000080:13.0:1387286251.851413:0:8291:0:(obd_config.c:442:class_attach())
 OBD: dev 1 attached type mgs with refcount 1
00000020:00000080:13.0:1387286251.851415:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00000020:01000004:13.0:1387286251.851432:0:8291:0:(obd_mount_server.c:170:server_get_mount())
 get_mnt (null) from MGS, refs=2
00080000:01000000:13.0:1387286251.851434:0:8291:0:(osd_handler.c:788:osd_obd_connect())
 connect #1
00000020:00000080:13.0:1387286251.851435:0:8291:0:(genops.c:1147:class_connect())
 connect: client MGS-osd_UUID, cookie 0x82b123e5e3c1eb06
00000040:01000000:1.0F:1387286251.958734:0:8291:0:(llog_obd.c:212:llog_setup()) 
obd MGS ctxt 0 is initialized
00000100:00080000:19.0F:1387286251.959230:0:8320:0:(pinger.c:683:ping_evictor_main())
 Starting Ping Evictor
00000020:00000080:1.0:1387286251.959269:0:8291:0:(obd_config.c:550:class_setup())
 finished setup of obd MGS (uuid MGS)
00000020:01000004:1.0:1387286251.959320:0:8291:0:(obd_mount.c:333:lustre_start_mgc())
 Start MGC 'MGC10.0.0.22@tcp'
00000020:00000080:1.0:1387286251.959322:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf005
00000020:00000080:1.0:1387286251.959324:0:8291:0:(obd_config.c:1079:class_process_config())
 adding mapping from uuid MGC10.0.0.22@tcp_0 to nid 0x9000000000000 (0@lo)
00000020:00000080:1.0:1387286251.959326:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf005
00000020:00000080:1.0:1387286251.959327:0:8291:0:(obd_config.c:1079:class_process_config())
 adding mapping from uuid MGC10.0.0.22@tcp_0 to nid 0x200000a000016 
(10.0.0.22@tcp)
00000020:01000004:1.0:1387286251.959338:0:8291:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MGC10.0.0.22@tcp (typ=mgc)
00000020:00000080:1.0:1387286251.959339:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:1.0:1387286251.959340:0:8291:0:(obd_config.c:366:class_attach())
 attach type mgc name: MGC10.0.0.22@tcp uuid: 
5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be
00000020:00000080:1.0:1387286251.959391:0:8291:0:(genops.c:357:class_newdev()) 
Adding new device MGC10.0.0.22@tcp (ffff8820177f0378)
00000020:00000080:1.0:1387286251.959393:0:8291:0:(obd_config.c:442:class_attach())
 OBD: dev 2 attached type mgc with refcount 1
00000020:00000080:1.0:1387286251.959394:0:8291:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00010000:00080000:1.0:1387286251.959412:0:8291:0:(ldlm_lib.c:115:import_set_conn())
 imp ffff88205045c000@MGC10.0.0.22@tcp: add connection MGC10.0.0.22@tcp_0 at head
00000040:01000000:1.0:1387286251.959440:0:8291:0:(llog_obd.c:212:llog_setup()) 
obd MGC10.0.0.22@tcp ctxt 1 is initialized
00000020:00000080:1.0:1387286251.959465:0:8291:0:(obd_config.c:550:class_setup())
 finished setup of obd MGC10.0.0.22@tcp (uuid 
5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be)
10000000:01000000:1.0:1387286251.959470:0:8291:0:(mgc_request.c:1029:mgc_set_info_async())
 InitRecov MGC10.0.0.22@tcp 1/d0:i0:r0:or0:NEW
10000000:01000000:19.0:1387286251.959471:0:8321:0:(mgc_request.c:489:mgc_requeue_thread())
 Starting requeue thread
00000020:00000080:1.0:1387286251.959473:0:8291:0:(genops.c:1147:class_connect())
 connect: client 5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be, cookie 0x82b123e5e3c1eb1b
00000100:00080000:1.0:1387286251.959476:0:8291:0:(import.c:625:ptlrpc_connect_import())
 ffff88205045c000 MGS: changing import state from NEW to CONNECTING
00000100:00080000:1.0:1387286251.959478:0:8291:0:(import.c:482:import_select_connection())
 MGC10.0.0.22@tcp: connect to NID 0@lo last attempt 0
00000100:00080000:1.0:1387286251.959479:0:8291:0:(import.c:560:import_select_connection())
 MGC10.0.0.22@tcp: import ffff88205045c000 using connection 
MGC10.0.0.22@tcp_0/0@lo
00000100:00080000:1.0:1387286251.959500:0:8291:0:(pinger.c:483:ptlrpc_pinger_add_import())
 adding pingable import 5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be->MGS
00000020:01000004:1.0:1387286251.959504:0:8291:0:(obd_mount_server.c:1564:server_fill_super_common())
 Server sb, dev=19
20000000:01000000:19.0:1387286251.959585:0:8319:0:(mgs_handler.c:655:mgs_handle())
 @@@ connect  req@ffff8820177e4050 x1454670640841432/t0(0) o250-><?>@<?>:0/0 
lens 400/0 e 0 to 0 dl 1387286351 ref 1 fl Interpret:/0/ffffffff rc 0/-1
00010000:00080000:19.0:1387286251.959606:0:8319:0:(ldlm_lib.c:1014:target_handle_connect())
 MGS: connection from 5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be@0@lo t0 exp (null) 
cur 1387286251 last 0
00000020:00000080:19.0:1387286251.959618:0:8319:0:(genops.c:1147:class_connect())
 connect: client 5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be, cookie 0x82b123e5e3c1eb22
00000020:01000000:19.0:1387286251.959621:0:8319:0:(lprocfs_status.c:1945:lprocfs_exp_setup())
 using hash ffff882010363180
20000000:01000000:19.0:1387286251.959652:0:8319:0:(mgs_llog.c:445:mgs_find_or_make_fsdb())
 Creating new db
00000100:00080000:0.0F:1387286251.968985:0:3973:0:(import.c:816:ptlrpc_connect_interpret())
 MGC10.0.0.22@tcp: connect to target with instance 0
10000000:01000000:0.0:1387286251.968992:0:3973:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808005
00000100:00080000:0.0:1387286251.968994:0:3973:0:(import.c:871:ptlrpc_connect_interpret())
 ffff88205045c000 MGS: changing import state from CONNECTING to FULL
10000000:01000000:0.0:1387286251.968995:0:3973:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808004
00000100:00080000:0.0:1387286251.968999:0:3973:0:(pinger.c:239:ptlrpc_pinger_ir_up())
 IR up
00000100:00080000:0.0:1387286251.969002:0:3973:0:(import.c:1106:ptlrpc_connect_interpret())
 MGC10.0.0.22@tcp: Resetting ns_connect_flags to server flags: 0x11005000020
00000020:01200004:13.0:1387286253.630534:0:8332:0:(obd_mount.c:1203:lustre_fill_super())
 VFS Op: sb ffff88200e874800
00000020:01000004:13.0:1387286253.630548:0:8332:0:(obd_mount.c:789:lmd_print()) 
  mount data:
00000020:01000004:13.0:1387286253.630549:0:8332:0:(obd_mount.c:792:lmd_print()) 
device:  lustre-mdt0/mdt0
00000020:01000004:13.0:1387286253.630549:0:8332:0:(obd_mount.c:793:lmd_print()) 
flags:   1800
00000020:01000004:13.0:1387286253.630550:0:8332:0:(obd_mount.c:1248:lustre_fill_super())
 Mounting server from lustre-mdt0/mdt0
00000020:01000004:13.0:1387286253.630552:0:8332:0:(obd_mount_server.c:1604:osd_start())
 Attempting to start fs0-MDT0000, type=osd-zfs, lsifl=201021, mountfl=0
00000020:01000004:13.0:1387286253.630599:0:8332:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd fs0-MDT0000-osd (typ=osd-zfs)
00000020:00000080:13.0:1387286253.630601:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286253.630602:0:8332:0:(obd_config.c:366:class_attach())
 attach type osd-zfs name: fs0-MDT0000-osd uuid: fs0-MDT0000-osd_UUID
00000020:00000080:13.0:1387286253.630654:0:8332:0:(genops.c:357:class_newdev()) 
Adding new device fs0-MDT0000-osd (ffff882009d6c3b8)
00000020:00000080:13.0:1387286253.630656:0:8332:0:(obd_config.c:442:class_attach())
 OBD: dev 3 attached type osd-zfs with refcount 1
00000020:00000080:13.0:1387286253.630658:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00000020:00000080:13.0:1387286253.865332:0:8332:0:(obd_config.c:550:class_setup())
 finished setup of obd fs0-MDT0000-osd (uuid fs0-MDT0000-osd_UUID)
00080000:01000000:13.0:1387286253.865338:0:8332:0:(osd_handler.c:788:osd_obd_connect())
 connect #0
00000020:00000080:13.0:1387286253.865342:0:8332:0:(genops.c:1147:class_connect())
 connect: client fs0-MDT0000-osd_UUID, cookie 0x82b123e5e3c1eb37
00000020:01000004:13.0:1387286253.865345:0:8332:0:(obd_mount_server.c:1671:server_fill_super())
 Found service fs0-MDT0000 on device lustre-mdt0/mdt0
00000020:01000004:13.0:1387286253.865441:0:8332:0:(obd_mount.c:333:lustre_start_mgc())
 Start MGC 'MGC10.0.0.23@tcp'
00000020:00000080:13.0:1387286253.865444:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf005
00000020:00000080:13.0:1387286253.865446:0:8332:0:(obd_config.c:1079:class_process_config())
 adding mapping from uuid MGC10.0.0.23@tcp_0 to nid 0x200000a000017 
(10.0.0.23@tcp)
00000020:01000004:13.0:1387286253.865459:0:8332:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MGC10.0.0.23@tcp (typ=mgc)
00000020:00000080:13.0:1387286253.865460:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286253.865462:0:8332:0:(obd_config.c:366:class_attach())
 attach type mgc name: MGC10.0.0.23@tcp uuid: 
87412aae-cf15-a3f9-356c-6432e8c53a77
00000020:00000080:13.0:1387286253.865513:0:8332:0:(genops.c:357:class_newdev()) 
Adding new device MGC10.0.0.23@tcp (ffff881ff991a3f8)
00000020:00000080:13.0:1387286253.865515:0:8332:0:(obd_config.c:442:class_attach())
 OBD: dev 4 attached type mgc with refcount 1
00000020:00000080:13.0:1387286253.865517:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00010000:00080000:13.0:1387286253.865539:0:8332:0:(ldlm_lib.c:115:import_set_conn())
 imp ffff88201462a800@MGC10.0.0.23@tcp: add connection MGC10.0.0.23@tcp_0 at head
00000040:01000000:13.0:1387286253.865570:0:8332:0:(llog_obd.c:212:llog_setup()) 
obd MGC10.0.0.23@tcp ctxt 1 is initialized
00000020:00000080:13.0:1387286253.865581:0:8332:0:(obd_config.c:550:class_setup())
 finished setup of obd MGC10.0.0.23@tcp (uuid 
87412aae-cf15-a3f9-356c-6432e8c53a77)
10000000:01000000:13.0:1387286253.865586:0:8332:0:(mgc_request.c:1029:mgc_set_info_async())
 InitRecov MGC10.0.0.23@tcp 1/d0:i0:r0:or0:NEW
00000020:00000080:13.0:1387286253.865589:0:8332:0:(genops.c:1147:class_connect())
 connect: client 87412aae-cf15-a3f9-356c-6432e8c53a77, cookie 0x82b123e5e3c1eb4c
00000100:00080000:13.0:1387286253.865591:0:8332:0:(import.c:625:ptlrpc_connect_import())
 ffff88201462a800 MGS: changing import state from NEW to CONNECTING
00000100:00080000:13.0:1387286253.865593:0:8332:0:(import.c:482:import_select_connection())
 MGC10.0.0.23@tcp: connect to NID 10.0.0.23@tcp last attempt 0
00000100:00080000:13.0:1387286253.865595:0:8332:0:(import.c:560:import_select_connection())
 MGC10.0.0.23@tcp: import ffff88201462a800 using connection 
MGC10.0.0.23@tcp_0/10.0.0.23@tcp
00000100:00080000:13.0:1387286253.865614:0:8332:0:(pinger.c:483:ptlrpc_pinger_add_import())
 adding pingable import 87412aae-cf15-a3f9-356c-6432e8c53a77->MGS
00000020:01000004:13.0:1387286253.865619:0:8332:0:(obd_mount_server.c:1193:server_start_targets())
 starting target fs0-MDT0000
00000020:01000004:13.0:1387286253.865663:0:8332:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MDS (typ=mds)
00000020:00000080:13.0:1387286253.865664:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286253.865665:0:8332:0:(obd_config.c:366:class_attach())
 attach type mds name: MDS uuid: MDS_uuid
00000020:00000080:13.0:1387286253.865716:0:8332:0:(genops.c:357:class_newdev()) 
Adding new device MDS (ffff881ff993c438)
00000020:00000080:13.0:1387286253.865717:0:8332:0:(obd_config.c:442:class_attach())
 OBD: dev 5 attached type mds with refcount 1
00000020:00000080:13.0:1387286253.865718:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00000020:00000080:13.0:1387286253.890541:0:8332:0:(obd_config.c:550:class_setup())
 finished setup of obd MDS (uuid MDS_uuid)
00000020:01000004:13.0:1387286253.890554:0:8332:0:(obd_mount_server.c:1098:server_register_target())
 Registration fs0-MDT0000, fs=fs0, 0@<0:0>, index=0000, flags=0x1061
10000000:01000000:13.0:1387286253.890558:0:8332:0:(mgc_request.c:1043:mgc_set_info_async())
 register_target fs0-MDT0000 0x10001061
10000000:01000000:13.0:1387286253.890571:0:8332:0:(mgc_request.c:994:mgc_target_register())
 register fs0-MDT0000
00000100:00080000:13.0:1387286253.890578:0:8332:0:(client.c:1403:ptlrpc_send_new_req())
 @@@ req from PID 8332 waiting for recovery: (FULL != CONNECTING)  
req@ffff88201a97dc00 x1454670640841440/t0(0) 
o253->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 
fl Rpc:W/0/ffffffff rc 0/-1
00000100:00000400:0.0:1387286258.865247:0:3973:0:(client.c:1868:ptlrpc_expire_one_request())
 @@@ Request sent has timed out for slow reply: [sent 1387286253/real 
1387286253]  req@ffff88201684f000 x1454670640841436/t0(0) 
o250->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 400/544 e 0 to 1 dl 1387286258 
ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
00000100:00080000:0.0:1387286258.865268:0:3973:0:(import.c:1141:ptlrpc_connect_interpret())
 ffff88201462a800 MGS: changing import state from CONNECTING to DISCONN
00000100:00080000:0.0:1387286258.865270:0:3973:0:(import.c:1187:ptlrpc_connect_interpret())
 recovery of MGS on MGC10.0.0.23@tcp_0 failed (-110)
00000100:00020000:13.0:1387286264.890246:0:8332:0:(client.c:1052:ptlrpc_import_delay_req())
 @@@ send limit expired   req@ffff88201a97dc00 x1454670640841440/t0(0) 
o253->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 
fl Rpc:W/0/ffffffff rc 0/-1
00000020:00020000:13.0:1387286264.890520:0:8332:0:(obd_mount_server.c:1123:server_register_target())
 fs0-MDT0000: error registering with the MGS: rc = -5 (not fatal)
00000020:01000004:13.0:1387286264.890696:0:8332:0:(obd_mount_server.c:120:server_register_mount())
 reg_mnt (null) from fs0-MDT0000
10000000:01000000:13.0:1387286264.890700:0:8332:0:(mgc_request.c:1932:mgc_process_config())
 parse_log fs0-MDT0000 from 0
10000000:01000000:13.0:1387286264.890701:0:8332:0:(mgc_request.c:305:config_log_add())
 adding config log fs0-MDT0000:(null)
10000000:01000000:13.0:1387286264.890703:0:8332:0:(mgc_request.c:208:do_config_log_add())
 do adding config log fs0-sptlrpc:(null)
10000000:01000000:13.0:1387286264.890706:0:8332:0:(mgc_request.c:93:mgc_name2resid())
 log fs0-sptlrpc to resid 0x307366/0x0 (fs0)
10000000:01000000:13.0:1387286264.890708:0:8332:0:(mgc_request.c:1840:mgc_process_log())
 Process log fs0-sptlrpc:(null) from 1
10000000:01000000:13.0:1387286264.890710:0:8332:0:(mgc_request.c:921:mgc_enqueue())
 Enqueue for fs0-sptlrpc (res 0x307366)
00000100:00080000:13.0:1387286264.890727:0:8332:0:(client.c:1403:ptlrpc_send_new_req())
 @@@ req from PID 8332 waiting for recovery: (FULL != DISCONN)  
req@ffff88201a97dc00 x1454670640841444/t0(0) 
o101->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl 
Rpc:W/0/ffffffff rc 0/-1
00000100:00020000:13.0:1387286270.890294:0:8332:0:(client.c:1052:ptlrpc_import_delay_req())
 @@@ send limit expired   req@ffff88201a97dc00 x1454670640841444/t0(0) 
o101->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl 
Rpc:W/0/ffffffff rc 0/-1
10000000:01000000:13.0:1387286270.890305:0:8332:0:(mgc_request.c:826:mgc_blocking_ast())
 Lock res 0x307366 (fs0)
10000000:01000000:13.0:1387286270.890313:0:8332:0:(mgc_request.c:1852:mgc_process_log())
 Can't get cfg lock: -5
10000000:01000000:13.0:1387286270.890315:0:8332:0:(mgc_request.c:1871:mgc_process_log())
 MGC10.0.0.23@tcp: configuration from log 'fs0-sptlrpc' succeeded (0).
10000000:01000000:13.0:1387286270.890316:0:8332:0:(mgc_request.c:208:do_config_log_add())
 do adding config log fs0-MDT0000:(null)
10000000:01000000:13.0:1387286270.890318:0:8332:0:(mgc_request.c:93:mgc_name2resid())
 log fs0-MDT0000 to resid 0x307366/0x0 (fs0)
10000000:01000000:13.0:1387286270.890320:0:8332:0:(mgc_request.c:208:do_config_log_add())
 do adding config log fs0-mdtir:ffff88200e874800
10000000:01000000:13.0:1387286270.890321:0:8332:0:(mgc_request.c:93:mgc_name2resid())
 log fs0-mdtir to resid 0x307366/0x2 (fs0)
10000000:01000000:13.0:1387286270.890322:0:8332:0:(mgc_request.c:1840:mgc_process_log())
 Process log fs0-MDT0000:(null) from 1
10000000:01000000:13.0:1387286270.890323:0:8332:0:(mgc_request.c:921:mgc_enqueue())
 Enqueue for fs0-MDT0000 (res 0x307366)
00000100:00080000:13.0:1387286270.890332:0:8332:0:(client.c:1403:ptlrpc_send_new_req())
 @@@ req from PID 8332 waiting for recovery: (FULL != DISCONN)  
req@ffff88201a97dc00 x1454670640841448/t0(0) 
o101->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl 
Rpc:W/0/ffffffff rc 0/-1
00000100:00020000:13.0:1387286276.890246:0:8332:0:(client.c:1052:ptlrpc_import_delay_req())
 @@@ send limit expired   req@ffff88201a97dc00 x1454670640841448/t0(0) 
o101->MGC10.0.0.23@tcp@10.0.0.23@tcp:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl 
Rpc:W/0/ffffffff rc 0/-1
10000000:01000000:13.0:1387286276.890257:0:8332:0:(mgc_request.c:826:mgc_blocking_ast())
 Lock res 0x307366 (fs0)
10000000:01000000:13.0:1387286276.890263:0:8332:0:(mgc_request.c:1852:mgc_process_log())
 Can't get cfg lock: -5
10000000:01000000:13.0:1387286276.890266:0:8332:0:(mgc_request.c:1871:mgc_process_log())
 MGC10.0.0.23@tcp: configuration from log 'fs0-MDT0000' failed (-5).
00000020:02020000:13.0:1387286276.890268:0:8332:0:(obd_mount.c:119:lustre_process_log())
 15c-8: MGC10.0.0.23@tcp: The configuration from log 'fs0-MDT0000' failed (-5). 
This may be the result of communication errors between this node and the MGS, a 
bad configuration, or other errors. See the syslog for more information.
00000020:00020000:13.0:1387286276.890606:0:8332:0:(obd_mount_server.c:1257:server_start_targets())
 failed to start server fs0-MDT0000: -5
00000020:01000000:13.0:1387286276.890776:0:8332:0:(obd_config.c:1724:class_manual_cleanup())
 Manual cleanup of MDS (flags='F')
00000020:00000080:13.0:1387286276.890779:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf004
00000020:00000080:13.0:1387286276.908667:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf002
00000020:00000080:13.0:1387286276.908672:0:8332:0:(obd_config.c:599:class_detach())
 detach on obd MDS (uuid MDS_uuid)
00000020:00000080:13.0:1387286276.908675:0:8332:0:(genops.c:825:class_export_put())
 final put ffff88201684f400/MDS_uuid
00000020:00020000:13.0:1387286276.908681:0:8332:0:(obd_mount_server.c:1699:server_fill_super())
 Unable to start targets: -5
00000020:00000080:1.0:1387286276.908686:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88201684f400/MDS_uuid for MDS
00000020:01000000:1.0:1387286276.908689:0:3960:0:(obd_config.c:750:class_decref())
 finishing cleanup of obd MDS (MDS_uuid)
00000020:01000004:13.0:1387286276.908855:0:8332:0:(obd_mount_server.c:1415:server_put_super())
 server put_super fs0-MDT0000
00000020:00020000:13.0:1387286276.908908:0:8332:0:(obd_mount_server.c:844:lustre_disconnect_lwp())
 fs0-MDT0000-lwp-MDT0000: Can't end config log fs0-client.
00000020:00020000:13.0:1387286276.909080:0:8332:0:(obd_mount_server.c:1426:server_put_super())
 fs0-MDT0000: failed to disconnect lwp. (rc=-2)
10000000:01000000:13.0:1387286276.909258:0:8332:0:(mgc_request.c:147:config_log_put())
 dropping config log fs0-mdtir
10000000:01000000:13.0:1387286276.909260:0:8332:0:(mgc_request.c:412:config_log_end())
 end config log fs0-MDT0000 (0)
00000020:00020000:13.0:1387286276.909309:0:8332:0:(obd_mount_server.c:1456:server_put_super())
 no obd fs0-MDT0000
00000020:01000004:13.0:1387286276.909474:0:8332:0:(obd_mount_server.c:139:server_deregister_mount())
 dereg_mnt (null) from fs0-MDT0000
00000020:01000000:13.0:1387286276.909521:0:8332:0:(obd_mount_server.c:896:lustre_stop_lwp())
 fs0-MDT0000: lwp wasn't started.
00000020:01000004:13.0:1387286276.909522:0:8332:0:(obd_mount.c:765:lustre_common_put_super())
 dropping sb ffff88200e874800
00000100:00080000:13.0:1387286276.909525:0:8332:0:(pinger.c:507:ptlrpc_pinger_del_import())
 removing pingable import 87412aae-cf15-a3f9-356c-6432e8c53a77->MGS
00000100:00080000:13.0:1387286276.909530:0:8332:0:(import.c:1495:ptlrpc_disconnect_import())
 ffff88201462a800 MGS: changing import state from DISCONN to CLOSED
00000100:00080000:13.0:1387286276.909532:0:8332:0:(import.c:204:ptlrpc_deactivate_and_unlock_import())
 setting import MGS INVALID
10000000:01000000:13.0:1387286276.909534:0:8332:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808002
10000000:01000000:13.0:1387286276.909536:0:8332:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808003
00000020:00000080:13.0:1387286276.909538:0:8332:0:(genops.c:1225:class_disconnect())
 disconnect: cookie 0x82b123e5e3c1eb4c
00000020:00000080:13.0:1387286276.909540:0:8332:0:(genops.c:825:class_export_put())
 final put ffff88200a83d400/87412aae-cf15-a3f9-356c-6432e8c53a77
00000020:01000000:13.0:1387286276.909543:0:8332:0:(obd_config.c:1724:class_manual_cleanup())
 Manual cleanup of MGC10.0.0.23@tcp (flags='')
00000020:00000080:1.0:1387286276.909545:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88200a83d400/87412aae-cf15-a3f9-356c-6432e8c53a77 for MGC10.0.0.23@tcp
00000020:00000080:13.0:1387286276.909546:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf004
00000020:00000080:13.0:1387286276.909548:0:8332:0:(obd_config.c:674:class_cleanup())
 MGC10.0.0.23@tcp: forcing exports to disconnect: 1
00000020:00080000:13.0:1387286276.909550:0:8332:0:(genops.c:1541:print_export_data())
 MGC10.0.0.23@tcp: ACTIVE ffff882015bcc400 87412aae-cf15-a3f9-356c-6432e8c53a77 (no nid) 4 (0 0 0) 0 0 0 0: (null)  0
00000020:00080000:13.0:1387286276.909558:0:8332:0:(genops.c:1314:class_disconnect_exports())
 OBD device 4 (ffff881ff991a3f8) has exports, disconnecting them
00000020:00080000:13.0:1387286276.909559:0:8332:0:(genops.c:1277:class_disconnect_export_list())
 exp ffff882015bcc400 export uuid == obd uuid, don't discon
10000000:01000000:13.0:1387286276.909561:0:8332:0:(obd_class.h:673:obd_cleanup_client_import())
 MGC10.0.0.23@tcp: client import never connected
00000100:00080000:13.0:1387286276.909562:0:8332:0:(import.c:204:ptlrpc_deactivate_and_unlock_import())
 setting import MGS INVALID
10000000:01000000:13.0:1387286276.909563:0:8332:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808002
10000000:01000000:13.0:1387286276.909563:0:8332:0:(mgc_request.c:1148:mgc_import_event())
 import event 0x808003
00000020:00000080:1.0:1387286276.909570:0:3960:0:(genops.c:955:class_import_destroy())
 destroying import ffff88201462a800 for MGC10.0.0.23@tcp
00000020:00000080:13.0:1387286276.909577:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf002
00000020:00000080:13.0:1387286276.909578:0:8332:0:(obd_config.c:599:class_detach())
 detach on obd MGC10.0.0.23@tcp (uuid 87412aae-cf15-a3f9-356c-6432e8c53a77)
00000020:00000080:13.0:1387286276.909580:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf006
00000020:00000080:13.0:1387286276.909581:0:8332:0:(obd_config.c:1087:class_process_config())
 removing mappings for uuid MGC10.0.0.23@tcp_0
00000020:01000004:13.0:1387286276.909583:0:8332:0:(obd_mount.c:640:lustre_put_lsi())
 put ffff88200e874800 1
00000020:00000080:13.0:1387286276.909585:0:8332:0:(genops.c:1225:class_disconnect())
 disconnect: cookie 0x82b123e5e3c1eb37
00000020:00000080:13.0:1387286276.909586:0:8332:0:(genops.c:825:class_export_put())
 final put ffff88201a2f3000/fs0-MDT0000-osd_UUID
00000020:01000000:13.0:1387286276.909588:0:8332:0:(obd_config.c:1724:class_manual_cleanup())
 Manual cleanup of fs0-MDT0000-osd (flags='')
00000020:00000080:13.0:1387286276.909590:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf004
00000020:00000080:1.0:1387286276.909590:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88201a2f3000/fs0-MDT0000-osd_UUID for fs0-MDT0000-osd
00000020:00000080:13.0:1387286276.909591:0:8332:0:(obd_config.c:674:class_cleanup())
 fs0-MDT0000-osd: forcing exports to disconnect: 1
00000020:00080000:13.0:1387286276.909593:0:8332:0:(genops.c:1541:print_export_data())
 fs0-MDT0000-osd: ACTIVE ffff88200ed3a000 fs0-MDT0000-osd_UUID (no nid) 1 (0 0 0) 0 0 0 0: (null)  0
00000020:00080000:13.0:1387286276.909596:0:8332:0:(genops.c:1314:class_disconnect_exports())
 OBD device 3 (ffff882009d6c3b8) has exports, disconnecting them
00000020:00080000:13.0:1387286276.909597:0:8332:0:(genops.c:1277:class_disconnect_export_list())
 exp ffff88200ed3a000 export uuid == obd uuid, don't discon
00080000:00080000:13.0:1387286276.909617:0:8332:0:(osd_handler.c:375:osd_sync())
 syncing OSD osd-zfs
00000020:00000080:13.0:1387286278.455434:0:8332:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf002
00000020:00000080:13.0:1387286278.455436:0:8332:0:(obd_config.c:599:class_detach())
 detach on obd fs0-MDT0000-osd (uuid fs0-MDT0000-osd_UUID)
00000020:00000080:13.0:1387286278.455438:0:8332:0:(genops.c:825:class_export_put())
 final put ffff88200ed3a000/fs0-MDT0000-osd_UUID
00000020:00000080:1.0:1387286278.455447:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88200ed3a000/fs0-MDT0000-osd_UUID for fs0-MDT0000-osd
00000020:01000000:1.0:1387286278.455449:0:3960:0:(obd_config.c:750:class_decref())
 finishing cleanup of obd fs0-MDT0000-osd (fs0-MDT0000-osd_UUID)
00000020:01000004:13.0:1387286278.638723:0:8332:0:(obd_mount.c:590:lustre_free_lsi())
 Freeing lsi ffff88200ed3a800
00000020:02000400:13.0:1387286278.638779:0:8332:0:(obd_mount_server.c:1498:server_put_super())
 server umount fs0-MDT0000 complete
00000020:00020000:13.0:1387286278.638781:0:8332:0:(obd_mount.c:1267:lustre_fill_super())
 Unable to mount  (-5)
00000100:00080000:19.0:1387286278.865291:0:8319:0:(service.c:1079:ptlrpc_update_export_timer())
 updating export 5b07d52f-0e1d-ed7b-15e0-1b2cd6c6c0be at 1387286278 exp ffff88404fb8bc00

00000020:01200004:13.0F:1387286292.791620:0:8463:0:(obd_mount.c:1203:lustre_fill_super())
 VFS Op: sb ffff88200be9f000
00000020:01000004:13.0:1387286292.791635:0:8463:0:(obd_mount.c:789:lmd_print()) 
  mount data:
00000020:01000004:13.0:1387286292.791636:0:8463:0:(obd_mount.c:792:lmd_print()) 
device:  lustre-mdt0/mdt0
00000020:01000004:13.0:1387286292.791636:0:8463:0:(obd_mount.c:793:lmd_print()) 
flags:   1800
00000020:01000004:13.0:1387286292.791638:0:8463:0:(obd_mount.c:1248:lustre_fill_super())
 Mounting server from lustre-mdt0/mdt0
00000020:01000004:13.0:1387286292.791640:0:8463:0:(obd_mount_server.c:1604:osd_start())
 Attempting to start fs0-MDT0000, type=osd-zfs, lsifl=201021, mountfl=0
00000020:01000004:13.0:1387286292.791687:0:8463:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd fs0-MDT0000-osd (typ=osd-zfs)
00000020:00000080:13.0:1387286292.791689:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:13.0:1387286292.791691:0:8463:0:(obd_config.c:366:class_attach())
 attach type osd-zfs name: fs0-MDT0000-osd uuid: fs0-MDT0000-osd_UUID
00000020:00000080:13.0:1387286292.791744:0:8463:0:(genops.c:357:class_newdev()) 
Adding new device fs0-MDT0000-osd (ffff8820177f6478)
00000020:00000080:13.0:1387286292.791746:0:8463:0:(obd_config.c:442:class_attach())
 OBD: dev 3 attached type osd-zfs with refcount 1
00000020:00000080:13.0:1387286292.791748:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf003
00000020:00000080:1.0F:1387286293.057300:0:8463:0:(obd_config.c:550:class_setup())
 finished setup of obd fs0-MDT0000-osd (uuid fs0-MDT0000-osd_UUID)
00080000:01000000:1.0:1387286293.057308:0:8463:0:(osd_handler.c:788:osd_obd_connect())
 connect #0
00000020:00000080:1.0:1387286293.057313:0:8463:0:(genops.c:1147:class_connect())
 connect: client fs0-MDT0000-osd_UUID, cookie 0x82b123e5e3c1eb6f
00000020:01000004:1.0:1387286293.057316:0:8463:0:(obd_mount_server.c:1671:server_fill_super())
 Found service fs0-MDT0000 on device lustre-mdt0/mdt0
00000020:01000004:1.0:1387286293.057368:0:8463:0:(obd_mount.c:333:lustre_start_mgc())
 Start MGC 'MGC10.0.0.23@tcp'
00000020:00000080:1.0:1387286293.057371:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf005
00000020:00000080:1.0:1387286293.057373:0:8463:0:(obd_config.c:1079:class_process_config())
 adding mapping from uuid MGC10.0.0.23@tcp_0 to nid 0x200000a000017 (10.0.0.23@tcp)
00000020:01000004:1.0:1387286293.057386:0:8463:0:(obd_mount.c:192:lustre_start_simple())
 Starting obd MGC10.0.0.23@tcp (typ=mgc)
00000020:00000080:1.0:1387286293.057387:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf001
00000020:00000080:1.0:1387286293.057389:0:8463:0:(obd_config.c:366:class_attach())
 attach type mgc name: MGC10.0.0.23@tcp uuid: a5931a94-2934-fd69-c8ad-76c8918ddf86
00000020:00020000:1.0:1387286293.057394:0:8463:0:(genops.c:320:class_newdev()) 
Device MGC10.0.0.23@tcp already exists at 4, won't add
00000020:00020000:1.0:1387286293.057567:0:8463:0:(obd_config.c:374:class_attach())
 Cannot create device MGC10.0.0.23@tcp of type mgc : -17
00000020:00020000:1.0:1387286293.057734:0:8463:0:(obd_mount.c:196:lustre_start_simple())
 MGC10.0.0.23@tcp attach error -17
00000020:01000004:1.0:1387286293.057900:0:8463:0:(obd_mount_server.c:1415:server_put_super())
 server put_super fs0-MDT0000
00000020:00020000:1.0:1387286293.057947:0:8463:0:(obd_mount_server.c:844:lustre_disconnect_lwp())
 fs0-MDT0000-lwp-MDT0000: Can't end config log fs0-client.
00000020:00020000:1.0:1387286293.058119:0:8463:0:(obd_mount_server.c:1426:server_put_super())
 fs0-MDT0000: failed to disconnect lwp. (rc=-2)
00000020:00020000:1.0:1387286293.058343:0:8463:0:(obd_mount_server.c:1456:server_put_super())
 no obd fs0-MDT0000
00000020:00020000:1.0:1387286293.058509:0:8463:0:(obd_mount_server.c:135:server_deregister_mount())
 fs0-MDT0000 not registered
00000020:01000000:1.0:1387286293.058719:0:8463:0:(obd_mount_server.c:896:lustre_stop_lwp())
 fs0-MDT0000: lwp wasn't started.
00000020:01000004:1.0:1387286293.058720:0:8463:0:(obd_mount.c:765:lustre_common_put_super())
 dropping sb ffff88200be9f000
00000020:01000004:1.0:1387286293.058722:0:8463:0:(obd_mount.c:640:lustre_put_lsi())
 put ffff88200be9f000 1
00000020:00000080:1.0:1387286293.058723:0:8463:0:(genops.c:1225:class_disconnect())
 disconnect: cookie 0x82b123e5e3c1eb6f
00000020:00000080:1.0:1387286293.058725:0:8463:0:(genops.c:825:class_export_put())
 final put ffff88201f81e400/fs0-MDT0000-osd_UUID
00000020:01000000:1.0:1387286293.058729:0:8463:0:(obd_config.c:1724:class_manual_cleanup())
 Manual cleanup of fs0-MDT0000-osd (flags='')
00000020:00000080:1.0:1387286293.058731:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf004
00000020:00000080:13.0:1387286293.058733:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88201f81e400/fs0-MDT0000-osd_UUID for fs0-MDT0000-osd
00000020:00000080:1.0:1387286293.058734:0:8463:0:(obd_config.c:674:class_cleanup())
 fs0-MDT0000-osd: forcing exports to disconnect: 1
00000020:00080000:1.0:1387286293.058736:0:8463:0:(genops.c:1541:print_export_data())
 fs0-MDT0000-osd: ACTIVE ffff88202716b400 fs0-MDT0000-osd_UUID (no nid) 1 (0 0 0) 0 0 0 0: (null)  0
00000020:00080000:1.0:1387286293.058740:0:8463:0:(genops.c:1314:class_disconnect_exports())
 OBD device 3 (ffff8820177f6478) has exports, disconnecting them
00000020:00080000:1.0:1387286293.058741:0:8463:0:(genops.c:1277:class_disconnect_export_list())
 exp ffff88202716b400 export uuid == obd uuid, don't discon
00080000:00080000:1.0:1387286293.058766:0:8463:0:(osd_handler.c:375:osd_sync()) 
syncing OSD osd-zfs
00000020:00000080:1.0:1387286294.579629:0:8463:0:(obd_config.c:1068:class_process_config())
 processing cmd: cf002
00000020:00000080:1.0:1387286294.579632:0:8463:0:(obd_config.c:599:class_detach())
 detach on obd fs0-MDT0000-osd (uuid fs0-MDT0000-osd_UUID)
00000020:00000080:1.0:1387286294.579634:0:8463:0:(genops.c:825:class_export_put())
 final put ffff88202716b400/fs0-MDT0000-osd_UUID
00000020:00000080:13.0:1387286294.579644:0:3960:0:(genops.c:779:class_export_destroy())
 destroying export ffff88202716b400/fs0-MDT0000-osd_UUID for fs0-MDT0000-osd
00000020:01000000:13.0:1387286294.579647:0:3960:0:(obd_config.c:750:class_decref())
 finishing cleanup of obd fs0-MDT0000-osd (fs0-MDT0000-osd_UUID)
00000020:01000004:1.0:1387286294.763014:0:8463:0:(obd_mount.c:590:lustre_free_lsi())
 Freeing lsi ffff88200a8d9c00
00000020:02000400:1.0:1387286294.763070:0:8463:0:(obd_mount_server.c:1498:server_put_super())
 server umount fs0-MDT0000 complete
00000020:00020000:1.0:1387286294.763078:0:8463:0:(obd_mount.c:1267:lustre_fill_super())
 Unable to mount  (-17)
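
Reading the trace above: the first mount attempt fails with -5, and on the retry the log shows "Device MGC10.0.0.23@tcp already exists at 4, won't add", so the MGC left registered by the first failed mount makes the second attempt fail with -17 instead. As a quick sanity check (just a sketch, nothing Lustre-specific), the two return codes decode to standard kernel errno values:

```python
import errno
import os

# Lustre mount return codes from the log above; kernel errors are
# reported as negative errno values.
for rc in (-5, -17):
    code = -rc
    # errno.errorcode maps the number to its symbolic name,
    # os.strerror gives the human-readable description.
    print(f"rc={rc}: {errno.errorcode[code]} ({os.strerror(code)})")
```

-5 is EIO (the first mount really failed to start the target), while -17 is EEXIST, i.e. the retry is tripping over the stale MGC device from the earlier attempt rather than hitting the original problem again. If that reading is right, clearing the leftover devices (e.g. `lctl dl` to list them, or `lustre_rmmod` to unload the modules) before remounting should at least get you back to the underlying -5 error.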

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
