Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Darragh O'Reilly


It seems the IPs for the targets are set at the time the volumes were created:

$ mysql
mysql> use cinder;
mysql> select provider_location from volumes;

Try creating a new volume - does it get the new iscsi_ip_address ?
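If new volumes do pick up the right address, the stale rows could in principle be repaired in place. The sketch below is only a guess at the usual Grizzly layout of provider_location ("IP:PORT,TID IQN"); back up the cinder database and make sure no attach is in progress before touching anything:

```sql
-- Sketch only: assumes provider_location has the form "IP:PORT,TID IQN"
-- and that the stale portal is the API address seen in the trace.
USE cinder;
SELECT id, provider_location FROM volumes WHERE deleted = 0;
UPDATE volumes
   SET provider_location = REPLACE(provider_location,
                                   '192.168.202.103', '192.168.203.103')
 WHERE deleted = 0;
```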

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

Yes, it was already set.



On 27/04/2013 14:08, Darragh O'Reilly wrote:

is nova configured to use cinder?

in nova.conf
volume_api_class=nova.volume.cinder.API






Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Darragh O'Reilly
is nova configured to use cinder?

in nova.conf
volume_api_class=nova.volume.cinder.API
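A quick way to check is to grep for the key. The sketch below runs against a throwaway copy of the file so it is self-contained; on a real node you would point it at /etc/nova/nova.conf instead:

```shell
# Sketch: confirm nova routes volume calls through Cinder.
# Uses a throwaway copy so the snippet is self-contained; on a real
# node, grep /etc/nova/nova.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
volume_api_class=nova.volume.cinder.API
EOF
if grep -q '^volume_api_class=nova.volume.cinder.API' "$conf"; then
    msg="nova -> cinder: OK"
else
    msg="nova -> cinder: MISSING"
fi
echo "$msg"
rm -f "$conf"
```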



Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

I set that key in nova.conf on both the controller and the compute node.

Still no luck.



On 26/04/2013 18:26, Darragh O'Reilly wrote:

It's not really obvious, but I believe the iscsi_ip_address needs to be set in
the nova.conf on the **controller** - just want to check you did it there.








Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

No, I'm running just one iSCSI target service:

root@leonard:~# netstat -antp | grep 3260
tcp    0      0 0.0.0.0:3260    0.0.0.0:*    LISTEN   8927/tgtd
tcp6   0      0 :::3260         :::*         LISTEN   8927/tgtd



Here is my cinder.conf:
root@leonard:~# cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:grizzly@leonard/cinder
rabbit_password = grizzly
iscsi_ip_address=192.168.203.103

So how can the compute node try to reach the public interface of the 
controller?  How can it possibly even know this IP?
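For what it's worth, the compute node doesn't guess: Cinder returns the portal in the connection info, and that portal is read back from the volume's provider_location, recorded when the volume was created. A toy sketch of the lookup (the "IP:PORT,TID IQN" layout is my assumption from the Grizzly tgt driver):

```shell
# Toy sketch: the portal IP nova-compute dials comes from the volume's
# provider_location, assumed here to look like "IP:PORT,TID IQN".
portal_ip() {
    # take the first field, then the part before the first ':'
    echo "$1" | cut -d' ' -f1 | cut -d: -f1
}

loc='192.168.202.103:3260,1 iqn.2010-10.org.openstack:volume-d9424219-33f6-40c8-88d9-ecba4c8aa6be'
portal_ip "$loc"   # prints 192.168.202.103 - the stale API-subnet address
```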




Michaël Van de Borne
R&D Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi

On 26/04/2013 20:22, K, Shanthakumar wrote:


I believe you are using both iscsitarget and tgt in your
configuration; that's why you are getting this issue.


Stop one of the two targets and set the remaining one in cinder.conf
via iscsi_helper=

Example: iscsi_helper=tgtadm

Then restart all the services; hopefully this will work.

Thanks

Shanthakumar K

From: Michaël Van de Borne
Sent: Friday, April 26, 2013 8:11 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target



Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-26 Thread Darragh O'Reilly
It's not really obvious, but I believe the iscsi_ip_address needs to be set in
the nova.conf on the **controller** - just want to check you did it there.




[Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-26 Thread Michaël Van de Borne

Hi all,

I installed three nodes like this topology:
http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html

Here are my subnets:
management: 192.168.203.X/24
data: 192.168.201.X/24
external & API: 192.168.202.X/24

I'm running ubuntu 12.04.

When I try to attach a volume to a VM, I get the following error in 
nova-compute.log:
2013-04-26 16:27:59.439 WARNING nova.virt.libvirt.volume 
[req-f5b4e121-a3ac-456d-b5cb-ba389c7fb409 
6d72da42f39648c48d3dfb4cd190107d 93a48de7ef674f07a96e169383c34399] ISCSI 
volume not yet found at: vdr. Will rescan & retry.  Try number: 0
2013-04-26 16:27:59.490 ERROR nova.compute.manager 
[req-f5b4e121-a3ac-456d-b5cb-ba389c7fb409 
6d72da42f39648c48d3dfb4cd190107d 93a48de7ef674f07a96e169383c34399] 
[instance: 05141f81-04cc-4493-86da-d2c05fd8a2f9] Failed to attach volume 
d9424219-33f6-40c8-88d9-ecba4c8aa6be at /dev/vdr
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Traceback (most recent call last):
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2859, 
in _attach_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] mountpoint)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
957, in attach_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] disk_info)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
943, in volume_driver_method
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] return method(connection_info, 
*args, **kwargs)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 242, in inner
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] retval = f(*args, **kwargs)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 
245, in connect_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] 
self._run_iscsiadm(iscsi_properties, ("--rescan",))
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 
179, in _run_iscsiadm
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] check_exit_code=check_exit_code)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 239, in execute
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] cmd=' '.join(cmd))
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] ProcessExecutionError: Unexpected 
error while running command.
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m node -T 
iqn.2010-10.org.openstack:volume-d9424219-33f6-40c8-88d9-ecba4c8aa6be -p 
192.168.202.103:3260 --rescan
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Exit code: 255
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Stdout: ''
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Stderr: 'iscsiadm: No portal found.\n'
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]
2013-04-26 16:27:59.849 ERROR nova.openstack.common.rpc.amqp 
[req-f5b4e121-a3ac-456d-b5cb-ba389c7fb409 
6d72da42f39648c48d3dfb4cd190107d 93a48de7ef674f07a96e169383c34399] 
Exception during message handling



As we can see, the compute node tries to reach the API interface 
(192.168.202.103) of the controller node. Of course, it cannot, since 
the compute node only knows the data and the management subnets.
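That mismatch can be sketched numerically: the portal in the trace sits on the 192.168.202.0/24 API subnet, while the compute node only has legs on 192.168.201.0/24 and 192.168.203.0/24 (subnets as listed in this thread; same_24 is just a toy /24 comparison):

```shell
# Toy check: is the portal on a /24 the compute node has an interface on?
# The subnets are the ones listed in this thread; same_24 is a toy helper.
same_24() {
    [ "$(echo "$1" | cut -d. -f1-3)" = "$(echo "$2" | cut -d. -f1-3)" ]
}

portal=192.168.202.103
reachable=no
for net in 192.168.201.0 192.168.203.0; do   # data and management /24s
    if same_24 "$portal" "$net"; then
        reachable=yes
    fi
done
echo "portal $portal reachable: $reachable"   # prints "... reachable: no"
```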


Is this the default behaviour and I'm missing a parameter somewhere?
I set this key in nova.conf:
iscsi_ip_address=192.168.203.103

but still no luck

any clue?

yours,

michaël

