Re: Persistent connections across initiator reboot

2009-04-30 Thread HIMANSHU

Exactly, Ulrich, there could be a better approach.
I am now on iscsiadm version 2.0-870, but I still have a
problem in the case of 2 targets having different NODE authentication.

My scenario

Suppose 2 targets (having different NODE AUTHENTICATION, but assuming the same
DISCOVERY AUTHENTICATION) from the same target machine (say 30.12) are
logged in. I set node.startup=automatic in both cases.

My observation

If I restart the initiator machine, re-login is possible only for the
last target that was logged in, not for both of them.

Steps performed

1. Discover the machine using -o new -o delete; enter the node credentials
for the 1st target from the list shown; run discovery again, since
iscsid.conf now holds target1's node credentials; log in to target1; set
node.startup=automatic.

2. Discover the machine using -o new -o delete; enter the different node
credentials for the 2nd target from the list shown; run discovery with
-o update, since iscsid.conf now holds target2's node credentials; log in
to target2; set node.startup=automatic.
(Using -o update overwrites my node.startup setting for target1, so I have
to set target1's node.startup back to automatic. Without -o update it won't
let me log in, because I need to change iscsid.conf to target2's
credentials. A rough command sketch of these two steps follows below.)

My conclusion
Basically, all of this might be because there is a single iscsid.conf
file. It should be separate for every logged-in target; only then could I
keep all the previous credentials stored.

On Apr 30, 12:31 pm, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de wrote:
 On 29 Apr 2009 at 11:25, Mike Christie wrote:





  Ulrich Windl wrote:
   On 28 Apr 2009 at 7:10, HIMANSHU wrote:

   One more question analogous to this.

   Suppose I log in to the 1st target from machine 30.12; it has node
   authentication, so I saved its credentials in iscsid.conf and then
   fired the discovery command followed by the login command. It was
   successful, and those credentials also got stored in nodes and
   send_targets.

   Then if I want to log in to the 2nd target, which also has node
   authentication, from the same machine, I am overwriting the same
   iscsid.conf file, so I am losing my previous credentials from
   iscsid.conf. Also, after discovery, I am losing the previous target
   information from nodes and send_targets.

   Hi,

   I'm no expert, but I think the credentials are stored per node/target in the
   iSCSI database (like /etc/iscsi/send_targets/* and /etc/iscsi/nodes/*/*).

  Yeah, that is correct. When you run the discovery command or manual
  addition command, iscsiadm will read iscsid.conf and use those for the
  initial defaults for what gets created in those dirs. You can then
  change what is in those dirs using iscsiadm -m node -o update

   /etc/iscsi/iscsid.conf just has the defaults. Probably it would be better to
   never touch iscsid.conf, but provide the auth information when discovering
   targets or logging in to nodes/targets. However, the secrets would then be on
   the command line (and in the process list, etc.).

  I was thinking he has an issue where one target needs one set of CHAP
  values for its discovery session, and another discovery session to another
  target needs a different set of CHAP values. For this type of setup, you
  have to edit iscsid.conf, run iscsiadm -m discovery ..., then edit
  iscsid.conf again and run iscsiadm -m discovery ... to the other target.

 Hi,

 That sounds like a workaround for some design deficit. Why not have a more
 flexible approach like ~/.netrc (a file that stores authentication information
 for several systems, keeping secrets away from the command line and the
 process list)? I mean an option for discovery like
 --credentials-file=~/iscsi-credentials-for-
 You get the idea?

 Regards,
 Ulrich



Re: multipath iSCSI installs

2009-04-30 Thread Hannes Reinecke
mala...@us.ibm.com wrote:
 Hi all,
 
   I am trying to install RHEL5.3 on an iSCSI disk with two paths.
 I booted with the mpath option, but the installer picked up only a single
 path. Is this the expected behavior when I use iBFT?
 
 The install went fine on a single path. I was trying to convert the
 single path to multi-path by running mkinitrd. RHEL was unable to boot
 (panics) with the new initrd image.  The only difference between the
 old initrd image and the new one is that the old initrd image was using
 iBFT method and the new image was trying to use the values from the
 existing session(s) at initrd creation time. For some reason the
 latter method doesn't work. Is this a known bug?
 
 I also tried installing SLES10 and SLES11. I believe they recommend
 installing on a single path and then converting to multipath. I have
 found that SLES11's initrd image can only find one path even after
 including multipath support in the initrd image. It creates a
 dm-multipath device with a single path and only later converts it to a
 dm-multipath device with two paths when it runs the scripts in
 /etc/init.d. SLES10's behavior might be the same, but I didn't analyse it.
 Does anyone know if SLES11's initrd image can find more than one iSCSI path?
 
Yes, I do :-)

I have patched the SUSE initrd to activate all network interfaces which
are configured via ifup in the running system.

You'd need the attached patch to /lib/mkinitrd to achieve this.
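
For illustration, this is the kind of interface configuration the patched
initrd picks up, and how the extra feature could be requested when rebuilding
the initrd; the interface name, the addressing, and the use of mkinitrd -f
are examples/assumptions, so adapt them to your setup:

# /etc/sysconfig/network/ifcfg-eth1 (example values only)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.100.2/24'

# rebuild the initrd with the additional 'ifup' feature enabled
# (assuming the feature list can be passed via mkinitrd -f)
mkinitrd -f ifup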

Cheers,

Hannes
-- 
Dr. Hannes Reinecke   zSeries  Storage
h...@suse.de  +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)


diff --git a/scripts/boot-network.sh b/scripts/boot-network.sh
index ac771e0..5c1bfad 100644
--- a/scripts/boot-network.sh
+++ b/scripts/boot-network.sh
@@ -4,7 +4,7 @@
 # dhcpcd reqires the af_packet module
 #%modules: af_packet $bonding_module
 #%udevmodules: $drvlink
-#%if: $interface -o $dhcp -o $ip -o $nfsaddrs
+#%if: $interface -o $dhcp -o $ip -o $nfsaddrs -o $drvlink
 #
 # network initialization
 ##
@@ -148,5 +148,12 @@ elif [ $nettype = static ]; then
 INTERFACE=${ip%%:*}
   fi
  echo 'hosts: files dns' > /etc/nsswitch.conf
+elif [ $nettype = ifup ] ; then
+for i in /etc/sysconfig/network/ifcfg-* ; do
+	interface=${i##*/ifcfg-}
+	[ -d /sys/class/net/$interface/device ] || continue
+	[ $interface = lo ] && continue
+	ifup $interface
+done
 fi
 
diff --git a/scripts/setup-network.sh b/scripts/setup-network.sh
index 031774c..b212f2b 100644
--- a/scripts/setup-network.sh
+++ b/scripts/setup-network.sh
@@ -159,13 +159,17 @@ get_network_module()
 echo $drvlink
 }
 
-if [ -z $interface ] ; then
-for addfeature in $ADDITIONAL_FEATURES; do
-if [ $addfeature = network ]; then
+for addfeature in $ADDITIONAL_FEATURES; do
+if [ $addfeature = network ]; then
+	if [ -z $interface ] ; then
 interface=default
 fi
-done
-fi
+fi
+if [ $addfeature = ifup ] ; then
+	nettype=ifup
+	interface=
+fi
+done
 
 ip=
 interface=${interface#/dev/}
@@ -215,6 +219,21 @@ if [ -n $interface ] ; then
 fi
 fi
 
+# Copy ifcfg settings
+if [ $nettype = ifup ] ; then
+mkdir -p $tmp_mnt/etc/sysconfig
+cp -rp /etc/sysconfig/network $tmp_mnt/etc/sysconfig
+for i in /etc/sysconfig/network/ifcfg-*; do
+	interface=${i##*/ifcfg-}
+	if [ -d /sys/class/net/$interface/device ] ; then
+	mod=$(get_network_module $interface)
+	drvlink="$drvlink $mod"
+	verbose "[NETWORK]\t$interface ($nettype)"
+	fi
+done
+interface=
+fi
+
 # Copy the /etc/resolv.conf when the IP is static
 if [ $interface -a $nettype = static -a -f /etc/resolv.conf ] ; then
 verbose [NETWORK]\tUsing /etc/resolv.conf from the system in the initrd
@@ -236,6 +255,14 @@ mkdir -p $tmp_mnt/var/run
 cp_bin /lib/mkinitrd/bin/ipconfig.sh $tmp_mnt/bin/ipconfig
 if [ -f /etc/udev/rules.d/70-persistent-net.rules ] ; then
 cp /etc/udev/rules.d/70-persistent-net.rules $tmp_mnt/etc/udev/rules.d
+if [ $nettype = ifup ] ; then
+	cp /etc/udev/rules.d/77-network.rules $tmp_mnt/etc/udev/rules.d
+	cp_bin /sbin/ifup $tmp_mnt/sbin/ifup
+	cp_bin /bin/awk $tmp_mnt/bin/awk
+	cp_bin /bin/grep $tmp_mnt/bin/grep
+	cp_bin /bin/logger $tmp_mnt/bin/logger
+	cp_bin /bin/touch $tmp_mnt/bin/touch
+fi
 fi
 
 [ "$interface" ] && verbose "[NETWORK]\t$interface ($nettype)"
diff --git a/scripts/setup-prepare.sh b/scripts/setup-prepare.sh
index a4886e2..c6cfe0e 100644
--- a/scripts/setup-prepare.sh
+++ b/scripts/setup-prepare.sh
@@ -1,7 

open-iscsi does not detect logical volume

2009-04-30 Thread sundar mahadevan

Hi,
I have iscsitarget (0.4.15-89.10) installed on system 1 (openSUSE 11.1),
with the firewall allowing port 3260, and open-iscsi (2.0.870-21.1)
installed on system 2 (openSUSE 11.1).

Here is my command from client:
sunny1:/etc/iscsi/ifaces # iscsiadm -m node -T
iqn.2009-09.com.ezhome:ocfs -p 10.1.1.1 -l
Logging in to [iface: default, target: iqn.2009-09.com.ezhome:ocfs,
portal: 10.1.1.1,3260]
Login to [iface: default, target: iqn.2009-09.com.ezhome:ocfs, portal:
10.1.1.1,3260]: successful

But I don't see the logical volume detected when I issue fdisk -l.
Also, there are no error messages in /var/log/messages:
cat /var/log/messages | tail -n 20
...
...
Apr 30 12:13:23 sunny1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Apr 30 12:13:24 sunny1 iscsid: connection3:0 is operational now

Newbie. Please help. Thanks in advance.




Re: Persistent connections across initiator reboot

2009-04-30 Thread Mike Christie

HIMANSHU wrote:
 Exactly Ulrich,there can be a better approach.
 I am now having iscsiadm version 2.0-870,but still I am having a
 problem in case of 2 targets having different NODE authentication.
 
 My scenario
 
 Suppose 2 targets (having different NODE AUTHENTICATION,assuming same
 DISCOVERY AUTHENTICATION) from same target machine(say 30.12) are
 logged in.I set up node.startup=automatic in both cases.
 
 My observation
 
 If I restart initiator machine,re-login can be possible in case of
 last logged in target only,not both of them.
 
 Steps performed
 
 1. Discover machine using -o new -o delete-Enter Node credentials for
 1st target from the list shown-Discovery again as iscsid.conf is now
 having node credentials of target1-login to target1-making
 node.startup=automatic
 
 2. Discover machine using -o new -o delete-Enter different Node
 credentials for 2nd target from the list shown-Discovery using -o
 update as iscsid.conf is now having node credentials of target2-login
 to target2-making node.startup=automatic
 (using -o update overwrites my node.startup setting of target1,so I
 have to again make node.startup of target1 to automatic.without -o
 update,it wont allow me to login as I need to change iscsid.conf to
 target2's credentials.)

If you do this:

1. Set DISCOVERY AUTHENTICATION in iscsid.conf.
2. Run discovery:
iscsiadm -m discovery -t st -p ip
3. Set NODE AUTHENTICATION for the target1 that was found:
iscsiadm -m node -T target1 -p ip:port -o update -n node.session.auth.username -v user
iscsiadm -m node -T target1 -p ip:port -o update -n node.session.auth.password -v pass

4.A If another target or portal (or both) is found through the same
discovery address and was just found, then all you have to do is run
those iscsiadm -m node commands for the second target (you do not need
to run discovery again):

iscsiadm -m node -T target2 -p ip2:port -o update -n node.session.auth.username -v user
iscsiadm -m node -T target2 -p ip2:port -o update -n node.session.auth.password -v pass


4.B If the other target or portal (or both) is found through a different
discovery address but using the same DISCOVERY AUTHENTICATION, then run
discovery to the other address:
iscsiadm -m discovery -t st -p ip2
If for some reason this discovery address also gives you the targets and
portals from the first discovery address (ip), then pass -o new -o delete
to the discovery command.

Then run the iscsiadm -m node -o update commands to set the NODE
AUTHENTICATION for those targets.

4.C If the other target or portal (or both) is found through a different
discovery address and using a different DISCOVERY AUTHENTICATION, then
you have to edit the discovery CHAP settings in iscsid.conf and then run
the discovery command:
iscsiadm -m discovery -t st -p ip2

If for some reason this discovery address also gives you the targets and
portals from the first discovery address (ip), then pass -o new -o delete
to the discovery command.

Then just run the iscsiadm -m node -o update commands to set the CHAP
settings for the targets found there.


5. If later on you want to discover new targets, then run
iscsiadm -m discovery -t st -p ip -o new -o delete
and then run the iscsiadm -m node -o update commands to set the CHAP
settings for what was found.
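
Putting that together, a condensed sketch of the common case (one discovery
address, per-target node CHAP); the IQNs, portal address and credentials are
placeholders:

# discovery CHAP goes into /etc/iscsi/iscsid.conf, e.g.:
#   discovery.sendtargets.auth.authmethod = CHAP
#   discovery.sendtargets.auth.username = discuser
#   discovery.sendtargets.auth.password = discpass
iscsiadm -m discovery -t st -p 192.168.0.12

# per-target node CHAP lives in the node DB, so iscsid.conf is not touched
# again; repeat this block for target2 with its own username/password
T=iqn.2009-04.com.example:target1
P=192.168.0.12:3260
iscsiadm -m node -T $T -p $P -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T $T -p $P -o update -n node.session.auth.username -v user1
iscsiadm -m node -T $T -p $P -o update -n node.session.auth.password -v pass1
iscsiadm -m node -T $T -p $P -o update -n node.startup -v automatic
iscsiadm -m node -T $T -p $P --login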




Re: open-iscsi does not detect logical volume

2009-04-30 Thread Mike Christie

sundar mahadevan wrote:
 Hi,
 I have iscsitarget(0.4.15-89.10) installed on system 1(opensuse 11.1)
 with firewall allowing port 3260 and open-iscsi(2.0.870-21.1)
 installed on system 2(opensuse 11.1).
 
 Here is my command from client:
 sunny1:/etc/iscsi/ifaces # iscsiadm -m node -T
 iqn.2009-09.com.ezhome:ocfs -p 10.1.1.1 -l
 Logging in to [iface: default, target: iqn.2009-09.com.ezhome:ocfs,
 portal: 10.1.1.1,3260]
 Login to [iface: default, target: iqn.2009-09.com.ezhome:ocfs, portal:
 10.1.1.1,3260]: successful
 
 But i don't see the logical volume detected when i issue fdisk -l
 Also there are no error messages in /var/log/messages
 cat /var/log/messages | tail -n 20
 ...
 ...
 Apr 30 12:13:23 sunny1 kernel: scsi4 : iSCSI Initiator over TCP/IP
 Apr 30 12:13:24 sunny1 iscsid: connection3:0 is operational now
 
 Newbie. Please help. Thanks in advance.
 

Could you send your ietd.conf?




Re: open-iscsi does not detect logical volume

2009-04-30 Thread Konrad Rzeszutek

On Thu, Apr 30, 2009 at 12:16:16PM -0400, sundar mahadevan wrote:
 
 Hi,
 I have iscsitarget(0.4.15-89.10) installed on system 1(opensuse 11.1)
 with firewall allowing port 3260 and open-iscsi(2.0.870-21.1)
 installed on system 2(opensuse 11.1).
 
 Here is my command from client:
 sunny1:/etc/iscsi/ifaces # iscsiadm -m node -T
 iqn.2009-09.com.ezhome:ocfs -p 10.1.1.1 -l
 Logging in to [iface: default, target: iqn.2009-09.com.ezhome:ocfs,
 portal: 10.1.1.1,3260]
 Login to [iface: default, target: iqn.2009-09.com.ezhome:ocfs, portal:
 10.1.1.1,3260]: successful
 
 But i don't see the logical volume detected when i issue fdisk -l

You won't see it that way. The LVM metadata is not in the MBR but in the
first 384 sectors, in its own format. You need to run 'pvscan' to see your
logical volumes, and after the scan also run 'vgchange -a ly'.
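
A minimal sketch of that sequence on the initiator (the volume group name
below is only an example):

pvscan               # scan block devices for LVM physical volumes
vgchange -a ly vg    # activate the logical volumes of VG 'vg' locally
lvs                  # list the LVs; they should now show up under /dev/mapper/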

 Also there are no error messages in /var/log/messages
 cat /var/log/messages | tail -n 20
 ...
 ...
 Apr 30 12:13:23 sunny1 kernel: scsi4 : iSCSI Initiator over TCP/IP
 Apr 30 12:13:24 sunny1 iscsid: connection3:0 is operational now
 
 Newbie. Please help. Thanks in advance.
 
 




Re: open-iscsi does not detect logical volume

2009-04-30 Thread sundar mahadevan

As I said before, there is no real file for /dev/vg/ocfs,
but
lvdisplay shows /dev/vg/ocfs as my logical volume
and
ls -ail /dev/vg/ocfs
35413 lrwxrwxrwx 1 root root 19 Apr 30 14:33 /dev/vg/ocfs -> /dev/mapper/vg-ocfs

Now I'm really confused.
Mike: As I said before, there are only 2 uncommented lines in /etc/ietd.conf:

Target iqn.2009-09.com.ezhome:ocfs
Lun 0 Path=/dev/vg/ocfs,Type=fileio

Please help

On Thu, Apr 30, 2009 at 1:53 PM, sundar mahadevan
sundarmahadeva...@gmail.com wrote:
 I realise what my problem is. Earlier i used ubuntu and In ubuntu when
 a new logical volume is created, it is listed under /dev/vg/ocfs where
 vg is the volume group and ocfs is the logical volume. In opensuse, I
 'm unable to find out what/where my logical volume is:

 I had mentioned Path=/dev/vg/ocfs in /etc/ietd.conf which does not exist.

 Target iqn.2009-09.com.ezhome:ocfs
        Lun 0 Path=/dev/vg/ocfs,Type=fileio

 Thanks to Mike for guiding me in the right direction. I belive this is
 the problem. Experts, your inputs on how the logical volumes are
 mapped in opensuse please...

 On Thu, Apr 30, 2009 at 1:23 PM, Konrad Rzeszutek
 kon...@virtualiron.com wrote:

 On Thu, Apr 30, 2009 at 12:16:16PM -0400, sundar mahadevan wrote:

 Hi,
 I have iscsitarget(0.4.15-89.10) installed on system 1(opensuse 11.1)
 with firewall allowing port 3260 and open-iscsi(2.0.870-21.1)
 installed on system 2(opensuse 11.1).

 Here is my command from client:
 sunny1:/etc/iscsi/ifaces # iscsiadm -m node -T
 iqn.2009-09.com.ezhome:ocfs -p 10.1.1.1 -l
 Logging in to [iface: default, target: iqn.2009-09.com.ezhome:ocfs,
 portal: 10.1.1.1,3260]
 Login to [iface: default, target: iqn.2009-09.com.ezhome:ocfs, portal:
 10.1.1.1,3260]: successful

 But i don't see the logical volume detected when i issue fdisk -l

 You won't see it that way. The LVM metadata is not in the MBR, but
 in the first 384 sectors, in its own format. You need to do 'pvscan' to see 
 your logical
 volumes. And after as well do 'vgchange -a ly' after the scan.

 Also there are no error messages in /var/log/messages
 cat /var/log/messages | tail -n 20
 ...
 ...
 Apr 30 12:13:23 sunny1 kernel: scsi4 : iSCSI Initiator over TCP/IP
 Apr 30 12:13:24 sunny1 iscsid: connection3:0 is operational now

 Newbie. Please help. Thanks in advance.



 






Re: open-iscsi does not detect logical volume

2009-04-30 Thread Mike Christie

sundar mahadevan wrote:
 As i said before, there is no real file for /dev/vg/ocfs
 but
 lvdisplay shows /dev/vg/ocfs as my logical volume
 and
 ls -ail /dev/vg/ocfs
 35413 lrwxrwxrwx 1 root root 19 Apr 30 14:33 /dev/vg/ocfs -> /dev/mapper/vg-ocfs
 
 Now i'm really confused.
 Mike: As i said before there are only 2 uncommented lines in /etc/ietd.conf
 
 Target iqn.2009-09.com.ezhome:ocfs
 Lun 0 Path=/dev/vg/ocfs,Type=fileio

When you run service iscsi-target start to start IET, do you see any 
errors in /var/log/messages?


 
 Please help
 
 On Thu, Apr 30, 2009 at 1:53 PM, sundar mahadevan
 sundarmahadeva...@gmail.com wrote:
 I realise what my problem is. Earlier i used ubuntu and In ubuntu when
 a new logical volume is created, it is listed under /dev/vg/ocfs where
 vg is the volume group and ocfs is the logical volume. In opensuse, I
 'm unable to find out what/where my logical volume is:

 I had mentioned Path=/dev/vg/ocfs in /etc/ietd.conf which does not exist.

 Target iqn.2009-09.com.ezhome:ocfs
Lun 0 Path=/dev/vg/ocfs,Type=fileio

 Thanks to Mike for guiding me in the right direction. I belive this is
 the problem. Experts, your inputs on how the logical volumes are
 mapped in opensuse please...

 On Thu, Apr 30, 2009 at 1:23 PM, Konrad Rzeszutek
 kon...@virtualiron.com wrote:
 On Thu, Apr 30, 2009 at 12:16:16PM -0400, sundar mahadevan wrote:
 Hi,
 I have iscsitarget(0.4.15-89.10) installed on system 1(opensuse 11.1)
 with firewall allowing port 3260 and open-iscsi(2.0.870-21.1)
 installed on system 2(opensuse 11.1).

 Here is my command from client:
 sunny1:/etc/iscsi/ifaces # iscsiadm -m node -T
 iqn.2009-09.com.ezhome:ocfs -p 10.1.1.1 -l
 Logging in to [iface: default, target: iqn.2009-09.com.ezhome:ocfs,
 portal: 10.1.1.1,3260]
 Login to [iface: default, target: iqn.2009-09.com.ezhome:ocfs, portal:
 10.1.1.1,3260]: successful

 But i don't see the logical volume detected when i issue fdisk -l
 You won't see it that way. The LVM metadata is not in the MBR, but
 in the first 384 sectors, in its own format. You need to do 'pvscan' to see 
 your logical
 volumes. And after as well do 'vgchange -a ly' after the scan.

 Also there are no error messages in /var/log/messages
 cat /var/log/messages | tail -n 20
 ...
 ...
 Apr 30 12:13:23 sunny1 kernel: scsi4 : iSCSI Initiator over TCP/IP
 Apr 30 12:13:24 sunny1 iscsid: connection3:0 is operational now

 Newbie. Please help. Thanks in advance.


 
  





Re: open-iscsi does not detect logical volume

2009-04-30 Thread Mike Christie

Mike Christie wrote:
 sundar mahadevan wrote:
 As i said before, there is no real file for /dev/vg/ocfs
 but
 lvdisplay shows /dev/vg/ocfs as my logical volume
 and
 ls -ail /dev/vg/ocfs
 35413 lrwxrwxrwx 1 root root 19 Apr 30 14:33 /dev/vg/ocfs -> /dev/mapper/vg-ocfs

 Now i'm really confused.
 Mike: As i said before there are only 2 uncommented lines in /etc/ietd.conf

 Target iqn.2009-09.com.ezhome:ocfs
 Lun 0 Path=/dev/vg/ocfs,Type=fileio
 
 When you run service iscsi-target start to start IET, do you see any 
 errors in /var/log/messages?
 

You might just want to try something simple first.

- dd if=/dev/zero of=file bs=1G count=1
- open ietd.conf and do

Target iqn.2009-09.com.ezhome:ocfs
Lun 0 Path=/where_you_put_file/file,Type=fileio


service iscsi-target stop
service iscsi-target start

Then on the initiator try to log in. Do you see a disk getting added in
/var/log/messages? If you run iscsiadm -m session -P 3, do you see your disk?
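
For reference, a rough sketch of what to check on the initiator after the
login; the sdb device name is only an example of what might appear:

iscsiadm -m session -P 3 | grep -i "attached scsi"   # should list the new disk
dmesg | tail -n 20                                   # the new sd device should be logged here
fdisk -l /dev/sdb                                    # substitute whatever device showed up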
