Re: [openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-16 Thread Fabrice Grelaud
On 16/09/2016 at 12:18, Jesse Pretorius wrote:
>>I found on Google a bug about using open-iscsi inside an LXC container
>>(https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug that
>>Kevin Carter (openstack-ansible core team) commented on as a "blocking
>>issue" (in May 2015).
>>Is that bug still relevant?
> Yes, unfortunately it is still relevant. We implemented patches in Newton
> to clarify this in the documentation:
> https://github.com/openstack/openstack-ansible/commit/a06d93daa9c0228abd46b1af462fb00651942b7e
> https://github.com/openstack/openstack-ansible-os_cinder/commit/d8daff7691de60ffc6bcc4faa851d9a90712d556
>
> The note at the top of this page now makes the limitation clear:
> http://docs.openstack.org/developer/openstack-ansible-os_cinder/configure-cinder.html
OK. It would be great to backport this to the Mitaka documentation
(http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html).
>>Do I need to deploy my cinder-volume on the compute hosts (on metal)
>>instead to solve my problem?
> Yes, that is a known good configuration that is very stable.
Indeed. I redeployed cinder-volume on my compute hosts and everything is
functional.
Thanks again.


-- 
Fabrice Grelaud
Secteur Infrastructure et Production
DI - Univ. Bordeaux 1
05 40 00 - 65 92
message...@u-bordeaux1.fr


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-16 Thread Jesse Pretorius
>I found on Google a bug about using open-iscsi inside an LXC container
>(https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug that
>Kevin Carter (openstack-ansible core team) commented on as a "blocking
>issue" (in May 2015).
>Is that bug still relevant?

Yes, unfortunately it is still relevant. We implemented patches in Newton
to clarify this in the documentation:
https://github.com/openstack/openstack-ansible/commit/a06d93daa9c0228abd46b1af462fb00651942b7e
https://github.com/openstack/openstack-ansible-os_cinder/commit/d8daff7691de60ffc6bcc4faa851d9a90712d556

The note at the top of this page now makes the limitation clear:
http://docs.openstack.org/developer/openstack-ansible-os_cinder/configure-cinder.html

>Do I need to deploy my cinder-volume on the compute hosts (on metal)
>instead to solve my problem?

Yes, that is a known good configuration that is very stable.
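
For reference, the stock layout keeps cinder-volume on metal via an
"is_metal" flag in env.d/cinder.yml. Restoring it looks roughly like this
(a sketch: the exact skeleton key names may vary between releases, so
verify against your release's env.d/cinder.yml before applying):

```
# Sketch of the env.d/cinder.yml setting that keeps the cinder-volume
# service on the host rather than in an LXC container. Key names are
# assumptions based on the Mitaka-era layout.
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true
```

After changing env.d, the containers need to be removed and the cinder
playbooks re-run for the service to move onto the host.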




Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.


Re: [openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-15 Thread Michał Jastrzębski
In Kolla we managed to put both iscsi and tgtd into containers. It
did require quite a few shares from the host; I'm not sure how feasible
that is with LXC.

https://github.com/openstack/kolla/blob/master/ansible/roles/iscsi/tasks/start.yml

Look at the volumes: that is what we share from the host. You can try to mimic this behaviour.
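
If you want to experiment, those host shares could be approximated with
LXC bind mounts along these lines (a rough sketch only: the paths are
assumptions drawn from Kolla's iscsid container, not a tested
openstack-ansible configuration; see the start.yml linked above for the
authoritative list):

```
# Hypothetical LXC container config entries approximating Kolla's host
# shares for its iscsid container. Untested with openstack-ansible; the
# shared paths are assumptions, not a verified list.
lxc.mount.entry = /dev dev none bind,create=dir 0 0
lxc.mount.entry = /run run none bind,create=dir 0 0
lxc.mount.entry = /sys/kernel/config sys/kernel/config none bind,create=dir 0 0
lxc.mount.entry = /lib/modules lib/modules none bind,ro,create=dir 0 0
```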

On 15 September 2016 at 13:58, Sean McGinnis  wrote:
> On Wed, Sep 14, 2016 at 05:08:51PM +0200, Fabrice Grelaud wrote:
>> Hi,
>>
>> I need recommendations to set up block storage with the Dell Storage
>> Center iSCSI driver.
>>
>> As seen in the doc
>> (http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html),
>> an iSCSI block storage back end does not need a separate host.
>> So I modified env.d/cinder.yml to remove "is_metal: true" and configured
>> openstack_user_config.yml with:
>> (http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/dell-storagecenter-driver.html)
>>
>> storage_hosts:
>>   p-osinfra01:
>> ip: 172.29.236.11
>> container_vars:
>>   cinder_storage_availability_zone: Dell_SC
>>   cinder_default_availability_zone: Dell_SC
>>   cinder_default_volume_type: delliscsi
>>   cinder_backends:
>> limit_container_types: cinder_volume
>> delliscsi:
>>   volume_driver:
>> cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
>>   volume_backend_name: dell_iscsi
>>   san_ip: 172.x.y.z
>>   san_login: admin
>>   san_password: 
>>   iscsi_ip_address: 10.a.b.c
>>   dell_sc_ssn: 46247
>>   dell_sc_api_port: 3033
>>   dell_sc_server_folder: Openstack
>>   dell_sc_volume_folder: Openstack
>>   iscsi_port: 3260
>>
>> Same for p-osinfra02 and p-osinfra03.
>>
>> I launched the os-cinder-install.yml playbook and now have three
>> cinder-volume containers, one on each of my infra hosts.
>> Everything is OK.
>>
>> In Horizon, I can create a volume (visible on the Storage Center) and
>> attach it to an instance. Perfect!
>>
>> But now, if I launch an instance with "Boot from image (create a new
>> volume)", I get an error from nova: "Block Device Mapping is Invalid".
>> I checked my cinder-volume.log and see:
>> ERROR cinder.volume.flows.manager.create_volume
>> FailedISCSITargetPortalLogin: Could not login to any iSCSI portal
>> ERROR cinder.volume.manager ImageCopyFailure: Failed to copy image to
>> volume: Could not login to any iSCSI portal.
>>
>> I tested the iSCSI connection from one container:
>> root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m
>> discovery -t sendtargets -p 10.a.b.c
>> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a724
>> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a728
>> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a723
>> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a727
>>
>> But when logging in, I got:
>> root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m node
>> -T iqn.2002-03.com.compellent:5000d31000b4a724 --login
>> Logging in to [iface: default, target:
>> iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260]
>> (multiple)
>> iscsiadm: got read error (0/0), daemon died?
>> iscsiadm: Could not login to [iface: default, target:
>> iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260].
>> iscsiadm: initiator reported error (18 - could not communicate to iscsid)
>> iscsiadm: Could not log into all portals
>>
>> I found on Google a bug about using open-iscsi inside an LXC container
>> (https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug that
>> Kevin Carter (openstack-ansible core team) commented on as a "blocking
>> issue" (in May 2015).
>>
>> Is that bug still relevant?
>> Do I need to deploy my cinder-volume on the compute hosts (on metal)
>> instead to solve my problem?
>> Or do you have other suggestions?
>>
>> Thanks.
>> Regards,
>>
>> --
>> Fabrice Grelaud
>> Université de Bordeaux
>
> Everything else looks ok at first glance, so my guess would be that that
> lxc bug is still an issue. You should be able to log in to the iSCSI
> targets if you are able to connect and do the sendtargets to get the
> info in the first place. That "could not communicate to iscsid" looks
> very suspect.
>
> Sean (smcginnis)
>


Re: [openstack-dev] [openstack-ansible] cinder volume lxc and iscsi

2016-09-15 Thread Sean McGinnis
On Wed, Sep 14, 2016 at 05:08:51PM +0200, Fabrice Grelaud wrote:
> Hi,
> 
> I need recommendations to set up block storage with the Dell Storage
> Center iSCSI driver.
> 
> As seen in the doc
> (http://docs.openstack.org/developer/openstack-ansible/mitaka/install-guide/configure-cinder.html),
> an iSCSI block storage back end does not need a separate host.
> So I modified env.d/cinder.yml to remove "is_metal: true" and configured
> openstack_user_config.yml with:
> (http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/dell-storagecenter-driver.html)
> 
> storage_hosts:
>   p-osinfra01:
> ip: 172.29.236.11
> container_vars:
>   cinder_storage_availability_zone: Dell_SC
>   cinder_default_availability_zone: Dell_SC
>   cinder_default_volume_type: delliscsi
>   cinder_backends:
> limit_container_types: cinder_volume
> delliscsi:
>   volume_driver:
> cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
>   volume_backend_name: dell_iscsi
>   san_ip: 172.x.y.z
>   san_login: admin
>   san_password: 
>   iscsi_ip_address: 10.a.b.c
>   dell_sc_ssn: 46247
>   dell_sc_api_port: 3033
>   dell_sc_server_folder: Openstack
>   dell_sc_volume_folder: Openstack
>   iscsi_port: 3260
> 
> Same for p-osinfra02 and p-osinfra03.
> 
> I launched the os-cinder-install.yml playbook and now have three
> cinder-volume containers, one on each of my infra hosts.
> Everything is OK.
> 
> In Horizon, I can create a volume (visible on the Storage Center) and
> attach it to an instance. Perfect!
> 
> But now, if I launch an instance with "Boot from image (create a new
> volume)", I get an error from nova: "Block Device Mapping is Invalid".
> I checked my cinder-volume.log and see:
> ERROR cinder.volume.flows.manager.create_volume
> FailedISCSITargetPortalLogin: Could not login to any iSCSI portal
> ERROR cinder.volume.manager ImageCopyFailure: Failed to copy image to
> volume: Could not login to any iSCSI portal.
> 
> I tested the iSCSI connection from one container:
> root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m
> discovery -t sendtargets -p 10.a.b.c
> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a724
> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a728
> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a723
> 10.a.b.c:3260,0 iqn.2002-03.com.compellent:5000d31000b4a727
> 
> But when logging in, I got:
> root@p-osinfra03-cinder-volumes-container-2408e151:~# iscsiadm -m node
> -T iqn.2002-03.com.compellent:5000d31000b4a724 --login
> Logging in to [iface: default, target:
> iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260]
> (multiple)
> iscsiadm: got read error (0/0), daemon died?
> iscsiadm: Could not login to [iface: default, target:
> iqn.2002-03.com.compellent:5000d31000b4a724, portal: 10.a.b.c,3260].
> iscsiadm: initiator reported error (18 - could not communicate to iscsid)
> iscsiadm: Could not log into all portals
> 
> I found on Google a bug about using open-iscsi inside an LXC container
> (https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), a bug that
> Kevin Carter (openstack-ansible core team) commented on as a "blocking
> issue" (in May 2015).
> 
> Is that bug still relevant?
> Do I need to deploy my cinder-volume on the compute hosts (on metal)
> instead to solve my problem?
> Or do you have other suggestions?
> 
> Thanks.
> Regards,
> 
> -- 
> Fabrice Grelaud
> Université de Bordeaux

Everything else looks ok at first glance, so my guess would be that that
lxc bug is still an issue. You should be able to log in to the iSCSI
targets if you are able to connect and do the sendtargets to get the
info in the first place. That "could not communicate to iscsid" looks
very suspect.
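
As a side note, the error number in the log above is the telling part. A
tiny sketch (the function name is mine, purely illustrative) separating
the error this thread hit from ordinary login failures:

```shell
# iscsiadm error 18 ("could not communicate to iscsid") means iscsiadm
# could not reach the iscsid daemon at all -- consistent with the LXC bug
# above, not with a network or auth problem on the array side.
# classify_iscsiadm_err is a hypothetical helper, shown for illustration.
classify_iscsiadm_err() {
  case "$1" in
    0)  echo "success" ;;
    18) echo "cannot communicate with iscsid (daemon missing or unreachable)" ;;
    *)  echo "other iscsiadm error ($1)" ;;
  esac
}

classify_iscsiadm_err 18
```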

Sean (smcginnis)

