Re: Q: md-RAID1 creation with I/O error (attempt to access beyond end of device)

2010-10-22 Thread Rahsaan Page
Hi Ulrich,

I don't think that's going to work. Aren't you supposed to create the RAID
from the disks first, and then create LVM on top of the RAID?
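
Something like this ordering is what I have in mind (a rough sketch only;
the device, VG and LV names below are just placeholders):

# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 \
    /dev/mapper/diskA /dev/mapper/diskB
# pvcreate /dev/md0
# vgcreate vg_example /dev/md0
# lvcreate -L 10G -n lv_example vg_example

i.e. build the mirror from the raw (multipath) devices first, so that LVM
only ever sees the md device.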

On Thu, Oct 21, 2010 at 7:16 AM, Ulrich Windl 
ulrich.wi...@rz.uni-regensburg.de wrote:

 Hi,

 not an iSCSI question, but maybe here's the right audience: I had
 successfully created a RAID1 using mdadm, multipath, iSCSI and some SAN
 storage on SLES10 SP3+Updates on x86_64. When I tried to do the same thing
 on a comparable machine, the RAID could not be created, because something
 tried to access the disk at a bad position. I repeated the attempt with
 another set of disks of the same size, and it also failed.

 Details:
 # mdadm --verbose --create /dev/md5 --raid-devices=2 --level=raid1
 --bitmap=internal --assume-clean /dev/mapper/EVA1_L232_host05
 /dev/mapper/EVA2_L232_host05
 mdadm: /dev/mapper/EVA1_L232_host05 appears to be part of a raid array:
level=raid1 devices=2 ctime=Thu Oct 21 12:26:18 2010
 mdadm: /dev/mapper/EVA2_L232_host05 appears to be part of a raid array:
level=raid1 devices=2 ctime=Thu Oct 21 12:26:18 2010
 mdadm: size set to 31457216K
 Continue creating array? y
 mdadm: RUN_ARRAY failed: Input/output error
 mdadm: stopped /dev/md5
 hostname:~ # tail -24 /var/log/messages
 Oct 21 12:28:05 hostname kernel: md: md5 stopped.
 Oct 21 12:28:07 hostname kernel: md: md5 stopped.
 Oct 21 12:29:46 hostname kernel: md: bind<dm-21>
 Oct 21 12:29:46 hostname kernel: md: bind<dm-14>
 Oct 21 12:29:46 hostname kernel: raid1: raid set md5 active with 2 out of 2
 mirrors
 Oct 21 12:29:46 hostname kernel: md5: bitmap file is out of date (0 < 1) --
 forcing full recovery
 Oct 21 12:29:46 hostname kernel: md5: bitmap file is out of date, doing
 full recovery
 Oct 21 12:29:46 hostname kernel: attempt to access beyond end of device
 Oct 21 12:29:46 hostname kernel: dm-14: rw=8, want=62914568, limit=62914560
 Oct 21 12:29:46 hostname kernel: attempt to access beyond end of device
 Oct 21 12:29:46 hostname kernel: dm-21: rw=8, want=62914568, limit=62914560
 Oct 21 12:29:46 hostname kernel: md5: bitmap initialized from disk: read
 15/16 pages, set 489472 bits, status: -5
 Oct 21 12:29:46 hostname kernel: md5: failed to create bitmap (-5)
 Oct 21 12:29:46 hostname kernel: md: pers->run() failed ...
 Oct 21 12:29:46 hostname kernel: md: md5 stopped.
 Oct 21 12:29:46 hostname kernel: md: unbind<dm-14>
 Oct 21 12:29:46 hostname kernel: md: export_rdev(dm-14)
 Oct 21 12:29:46 hostname kernel: md: unbind<dm-21>
 Oct 21 12:29:46 hostname kernel: md: export_rdev(dm-21)

 hostname:~ # mdadm --query -X /dev/mapper/EVA2_L232_host05
 mdadm: WARNING: bitmap file is not large enough for array size 62914432!

         Filename : /dev/mapper/EVA2_L232_host05
            Magic : 6d746962
          Version : 4
             UUID : c522ab5a:24f292a0:0b3d7356:64ade12a
           Events : 0
   Events Cleared : 0
            State : Out of date
        Chunksize : 64 KB
           Daemon : 5s flush period
       Write Mode : Normal
        Sync Size : 31457216 (30.00 GiB 32.21 GB)
           Bitmap : 489472 bits (chunks), 489472 dirty (100.0%)
 hostname:~ # mdadm --query -X /dev/mapper/EVA1_L232_host05
 mdadm: WARNING: bitmap file is not large enough for array size 62914432!

         Filename : /dev/mapper/EVA1_L232_host05
            Magic : 6d746962
          Version : 4
             UUID : c522ab5a:24f292a0:0b3d7356:64ade12a
           Events : 0
   Events Cleared : 0
            State : Out of date
        Chunksize : 64 KB
           Daemon : 5s flush period
       Write Mode : Normal
        Sync Size : 31457216 (30.00 GiB 32.21 GB)
           Bitmap : 489472 bits (chunks), 489472 dirty (100.0%)

 blockdev --getsize reports 62914560 for each LUN being used.
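
 For what it's worth, the arithmetic on those numbers (my own
 back-of-the-envelope check, not tool output):

   device size : 62914560 sectors * 512 bytes = 32212254720 bytes (30 GiB)
   md data size: 31457216 KiB = 62914432 sectors
   failing read: want=62914568, limit=62914560
                 -> 8 sectors (4 KiB) beyond the end of the device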

 Regards,
 Ulrich


-- 
Rahsaan D. Page




Re: change session portal

2010-10-22 Thread thomas wouters
On Oct 19, 1:55 am, Mike Christie micha...@cs.wisc.edu wrote:
 On 10/18/2010 04:15 AM, thomas wouters wrote:

  Hi,

   We have an HP Lefthand cluster that currently has two storage nodes.
   This weekend we did maintenance on one of the nodes and noticed that
   all running iSCSI sessions switched to the other node. This was
   expected, but now we are trying to switch half of the sessions back
   to the other node.
   We connect to one IP for target discovery (let's say 192.168.0.1) and
   each cluster node has its own IP (192.168.0.2 and 192.168.0.3).
   If I check the Current Portal for all connected sessions, they are
   all 192.168.0.2, even after a restart of iscsid.
   I tried removing everything in /etc/iscsi/nodes and /etc/iscsi/
   send_targets, but as soon as I do a discovery and log in to all
   targets, they are all connected to 192.168.0.2. In the past these
   connections were distributed across the two nodes in a round-robin
   fashion, so one session would connect to 192.168.0.2 and another to
   192.168.0.3.
   Is there anything we can do about this? We would like to share the
   load between these two nodes instead of doing everything on one node.
   Don't hesitate to ask if you need more details on the setup; I'm just
   not that familiar with open-iscsi yet.

 Do you know if the Lefthand box does login redirect? If so, the target
 might be the one sending us to the wrong portal.

 After you have done discovery, if you run

 iscsiadm -m node -P 1

 And then login and do

 iscsiadm -m session -P 1

 Are the Current Portal and the Persistent Portal the same value? And are
 they the same as the one printed by the iscsiadm -m node -P 1 command?

It's not the same:

# iscsiadm -m node -P 1
Target: iqn.2003-10.com.lefthandnetworks:lefthand0:64:testserver
Portal: 192.168.8.200:3260,1
Iface Name: default


Target: iqn.2003-10.com.lefthandnetworks:lefthand0:64:testserver
Current Portal: 192.168.8.201:3260,1
Persistent Portal: 192.168.8.200:3260,1
**
Interface:
...

I think that the Lefthand cluster decides which portal to use after a
login and uses the same box for every LUN. I think this because if I
disconnect a LUN on all servers, so that there are no connections using
this LUN at all, and then reconnect from any of the servers afterwards,
I can see that it's using a different portal.
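
For reference, this is the kind of thing I would try in order to pin a
session to a specific portal (just a sketch; if the target does a login
redirect it may well move the session again anyway):

# iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:lefthand0:64:testserver \
    -p 192.168.8.200:3260 --logout
# iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:lefthand0:64:testserver \
    -p 192.168.8.200:3260 --login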
