[Openstack] [devstack] [neutron] How to make br-ex work in devstack

2014-07-28 Thread Deepak Shetty
Hi,
  I have a devstack setup inside a VM (all-in-one setup), let's say
devstack-vm.
I have another VM with glusterfs server running, lets say glusterfs-vm

I am trying to enable connectivity between the nova instance (VM) created
on devstack-vm and my glusterfs server on glusterfs-vm.

I associated a public IP with my nova VM, but it doesn't work. I think the
br-ex bridge on my devstack-vm isn't wired to eth0 on devstack-vm.

Is it documented anywhere how to enable this, so that my nova
VMs can connect to my external glusterfs server?

I plan to try to do this manually by replacing ifcfg-eth0 with ifcfg-br-ex
and adding eth0 as a device inside br-ex, but before I do that, I wanted to
check whether there is a better (or neutron-CLI-based) method to do this
in a saner way.
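For reference, a rough sketch of the manual wiring described above, assuming Open vSwitch and example values (eth0 as the uplink, 192.168.1.10/24 as the host IP, 192.168.1.1 as the gateway — substitute your own):

```shell
# Move the host's uplink NIC into br-ex so external traffic can reach nova VMs.
# WARNING: do this from a console, not over ssh -- connectivity drops mid-way.
# All addresses and interface names below are example values.
sudo ovs-vsctl add-port br-ex eth0          # enslave eth0 to the external bridge
sudo ip addr del 192.168.1.10/24 dev eth0   # remove the IP from eth0...
sudo ip addr add 192.168.1.10/24 dev br-ex  # ...and put it on br-ex instead
sudo ip link set br-ex up
sudo ip route add default via 192.168.1.1 dev br-ex   # restore the default route
```

The ifcfg-eth0/ifcfg-br-ex swap mentioned above is the persistent-across-reboots version of the same change.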

Appreciate a response.

thanks,
deepak
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Tenant List

2014-06-20 Thread Deepak Shetty
Georgios,

   Why do you think it's present? Can you justify your claim, please?


This is what I see...

[stack@devstack-large-vm ~]$ [admin] nova -h | grep tenant
[--os-tenant-name <auth-tenant-name>]
[--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
flavor-access-add   Add flavor access for the given tenant.
Remove flavor access for the given tenant.
floating-ip-create  Allocate a floating IP for the current tenant.
quota-defaults  List the default quotas for a tenant.
quota-deleteDelete quota for a tenant/user so their quota will
quota-show  List the quotas for a tenant/user.
quota-updateUpdate the quotas for a tenant/user.
secgroup-list   List security groups for the current tenant.
usage   Show usage data for a single tenant.
usage-list  List usage data for all tenants.
x509-create-certCreate x509 cert for a user in tenant.
  --os-tenant-name <auth-tenant-name>
  --os-tenant-id <auth-tenant-id>

As you can see above, there is no --tenant option available.
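Given the output above, one possible workaround (a sketch, not verified on Icehouse) is to authenticate scoped to the target tenant rather than filtering, which works when the admin user also holds a role in that tenant:

```shell
# Hypothetical workaround: scope the request to the "demo" tenant (taken from
# the example tenant list in this thread) via the auth options, instead of
# using the apparently broken --tenant filter.
nova --os-tenant-name demo list

# equivalently, via the environment:
export OS_TENANT_NAME=demo
nova list
```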


On Fri, Jun 20, 2014 at 3:23 PM, Georgios Dimitrakakis  wrote:

> Indeed it is available, and in principle it should work as an
> admin... but it doesn't.
>
>
> Best,
>
> G.
>
>
>
> On Fri, 20 Jun 2014 12:30:31 +0300, Mārtiņš Jakubovičs wrote:
>
>> Look at the command "nova help list"; there is such an option, --tenant, and
>> I can confirm it didn't work for me either.
>>
>> On 2014.06.20. 12:16, Deepak Shetty wrote:
>>
>>> I think --tenant is not a supported option for nova.
>>> I don't see it under nova -h.
>>>
>>> What I see is this...
>>> --os-tenant-name <auth-tenant-name>
>>>
>>> So maybe --tenant is just being ignored, and hence the command reduces to
>>> `nova list`, which shows only the admin instances since you are logged in
>>> as admin.
>>>
>>>
>>> On Fri, Jun 13, 2014 at 6:40 PM, Georgios Dimitrakakis <
>>> gior...@acmac.uoc.gr
>>>
>>>> wrote:
>>>>
>>>
>>>> I am in Icehouse and, as an ADMIN, I am trying to list the instances of a
>>>> specific tenant.
>>>>
>>>> I have the following tenants:
>>>>
>>>> # keystone tenant-list
>>>> +----------------------------------+---------+---------+
>>>> |                id                |   name  | enabled |
>>>> +----------------------------------+---------+---------+
>>>> | 4746936e9e4b49e382ad41d0b98f3644 |  admin  |   True  |
>>>> | 965bd35fbef24133951a6bee429cd5be |   demo  |   True  |
>>>> | ad1473de92c1496fa4dea3fb93101b8b | service |   True  |
>>>> +----------------------------------+---------+---------+
>>>>
>>>> Both the commands
>>>>
>>>> # nova list --tenant demo
>>>> # nova list --tenant 965bd35fbef24133951a6bee429cd5be
>>>>
>>>>
>>>> produce the wrong output since they list the instances for the admin
>>>> tenant.
>>>>
>>>> Using
>>>>
>>>> # nova list --all-tenants
>>>>
>>>> shows everything, but why can the specific tenants not be shown?
>>>>
>>>> Has anyone else seen this?
>>>>
>>>>
>>>> Best,
>>>>
>>>>
>>>> G.
>>>> --
>>>>


Re: [Openstack] Tenant List

2014-06-20 Thread Deepak Shetty
Sorry! My bad, I was wrong :)
nova help list does show the option. Maybe you can raise a bug in LP if
it's not working. My 2 cents.


On Fri, Jun 20, 2014 at 4:44 PM, Deepak Shetty  wrote:

> Georgios,
>
>    Why do you think it's present? Can you justify your claim, please?
>
>
> This is what I see...
>
> [stack@devstack-large-vm ~]$ [admin] nova -h | grep tenant
> [--os-tenant-name <auth-tenant-name>]
> [--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
> flavor-access-add   Add flavor access for the given tenant.
> Remove flavor access for the given tenant.
> floating-ip-create  Allocate a floating IP for the current tenant.
> quota-defaults  List the default quotas for a tenant.
> quota-deleteDelete quota for a tenant/user so their quota will
> quota-show  List the quotas for a tenant/user.
> quota-updateUpdate the quotas for a tenant/user.
> secgroup-list   List security groups for the current tenant.
> usage   Show usage data for a single tenant.
> usage-list  List usage data for all tenants.
> x509-create-certCreate x509 cert for a user in tenant.
>   --os-tenant-name <auth-tenant-name>
>   --os-tenant-id <auth-tenant-id>
>
> As you can see above, there is no --tenant option available.
>
>
> On Fri, Jun 20, 2014 at 3:23 PM, Georgios Dimitrakakis <
> gior...@acmac.uoc.gr> wrote:
>
>> Indeed it is available, and in principle it should work as an
>> admin... but it doesn't.
>>
>>
>> Best,
>>
>> G.
>>
>>
>>
>> On Fri, 20 Jun 2014 12:30:31 +0300, Mārtiņš Jakubovičs wrote:
>>
>>> Look at the command "nova help list"; there is such an option, --tenant,
>>> and I can confirm it didn't work for me either.
>>>
>>> On 2014.06.20. 12:16, Deepak Shetty wrote:
>>>
>>>> I think --tenant is not a supported option for nova.
>>>> I don't see it under nova -h.
>>>>
>>>> What I see is this...
>>>> --os-tenant-name <auth-tenant-name>
>>>>
>>>> So maybe --tenant is just being ignored, and hence the command reduces to
>>>> `nova list`, which shows only the admin instances since you are logged in
>>>> as admin.
>>>>
>>>>
>>>> On Fri, Jun 13, 2014 at 6:40 PM, Georgios Dimitrakakis <
>>>> gior...@acmac.uoc.gr
>>>>
>>>>> wrote:
>>>>>
>>>>
>>>>> I am in Icehouse and, as an ADMIN, I am trying to list the instances of a
>>>>> specific tenant.
>>>>>
>>>>> I have the following tenants:
>>>>>
>>>>> # keystone tenant-list
>>>>> +----------------------------------+---------+---------+
>>>>> |                id                |   name  | enabled |
>>>>> +----------------------------------+---------+---------+
>>>>> | 4746936e9e4b49e382ad41d0b98f3644 |  admin  |   True  |
>>>>> | 965bd35fbef24133951a6bee429cd5be |   demo  |   True  |
>>>>> | ad1473de92c1496fa4dea3fb93101b8b | service |   True  |
>>>>> +----------------------------------+---------+---------+
>>>>>
>>>>> Both the commands
>>>>>
>>>>> # nova list --tenant demo
>>>>> # nova list --tenant 965bd35fbef24133951a6bee429cd5be
>>>>>
>>>>>
>>>>> produce the wrong output since they list the instances for the admin
>>>>> tenant.
>>>>>
>>>>> Using
>>>>>
>>>>> # nova list --all-tenants
>>>>>
>>>>> shows everything, but why can the specific tenants not be shown?
>>>>>
>>>>> Has anyone else seen this?
>>>>>
>>>>>
>>>>> Best,
>>>>>
>>>>>
>>>>> G.
>>>>> --
>>>>>


Re: [Openstack] Tenant List

2014-06-20 Thread Deepak Shetty
I think --tenant is not a supported option for nova.
I don't see it under nova -h.

What I see is this...
--os-tenant-name <auth-tenant-name>

So maybe --tenant is just being ignored, and hence the command reduces to `nova
list`, which shows only the admin instances since you are logged in as admin.


On Fri, Jun 13, 2014 at 6:40 PM, Georgios Dimitrakakis  wrote:

> I am in Icehouse and, as an ADMIN, I am trying to list the instances of a
> specific tenant.
>
> I have the following tenants:
>
> # keystone tenant-list
> +----------------------------------+---------+---------+
> |                id                |   name  | enabled |
> +----------------------------------+---------+---------+
> | 4746936e9e4b49e382ad41d0b98f3644 |  admin  |   True  |
> | 965bd35fbef24133951a6bee429cd5be |   demo  |   True  |
> | ad1473de92c1496fa4dea3fb93101b8b | service |   True  |
> +----------------------------------+---------+---------+
>
> Both the commands
>
> # nova list --tenant demo
> # nova list --tenant 965bd35fbef24133951a6bee429cd5be
>
>
> produce the wrong output since they list the instances for the admin
> tenant.
>
> Using
>
> # nova list --all-tenants
>
> shows everything, but why can the specific tenants not be shown?
>
> Has anyone else seen this?
>
>
> Best,
>
>
> G.
> --
>


Re: [Openstack] [Manila] [solved] GenericDriver cinder volume error during manila create

2014-06-17 Thread Deepak Shetty
This got resolved after I started tgtd.service.

For some reason, devstack on F20 didn't have it started by default.
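For anyone hitting the same thing, a sketch of the fix on Fedora 20 with systemd (service and tool names as used in this thread):

```shell
# Start the tgt iSCSI target daemon now, and enable it across reboots.
sudo systemctl start tgtd.service
sudo systemctl enable tgtd.service

# Sanity check: this should list targets instead of failing with
# "tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected"
sudo tgtadm --lld iscsi --op show --mode target
```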


On Mon, Jun 16, 2014 at 11:08 PM, Deepak Shetty  wrote:

> (Sending to the users list, as I realised this may not be -dev related.)
>
> ---
>
> I am trying devstack on F20 setup with Manila sources.
>
> When I am trying to do
> manila create --name cinder_vol_share_using_nfs2 --share-network-id
> 36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2
>
> I see the below error in c-vol, due to which, even though my service VM is
> started, manila create errors out as the cinder volume is not getting
> exported over iSCSI:
>
> 2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a
> 6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created as
> raw with specification: {'status': u'creating', 'volume_size': 2,
> 'volume_name': u'volume-8bfd
> 424d-9877-4c20-a9d1-058c06b9bdda'}
> 2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a6d5c
> 795095] Running cmd (subprocess): sudo cinder-rootwrap
> /etc/cinder/rootwrap.conf lvcreate -n
> volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from (pid=4
> 623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
> 2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
> [req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
> b65a066f32df4aca80fa9a
> 6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> (8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
> 2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped when
> creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2', 'tenant':
> u'b65a066f32df4aca80
> fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095 - - -'}
> 2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Volume
> 8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623)
> initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
> 2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Creating iscsi_target for:
> volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> 2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095] Created volume path
> /opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
> content:
> <target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda>
>     backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
>     lld iscsi
>     IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
> </target>
> from (pid=4623) create_iscsi_target
> /opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
> 2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c
> 795095] Running cmd (subprocess): sudo cinder-rootwrap
> /etc/cinder/rootwrap.conf tgt-admin --update
> iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
> a from (pid=4623) execute
> /opt/stack/cinder/cinder/openstack/common/processutils.py:142
> 2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c
> 795095] Result was 107 from (pid=4623) execute
> /opt/stack/cinder/cinder/openstack/common/processutils.py:167
> 2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
> [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
> b65a066f32df4aca80fa9a6d5c795095]
> Failed to create iscsi target for volume
> id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
> running command.
> Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
> iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
> Exit code: 107
> Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid
> 1 -T
> iq

[Openstack] [Manila] GenericDriver cinder volume error during manila create

2014-06-16 Thread Deepak Shetty
(Sending to the users list, as I realised this may not be -dev related.)
---
I am trying devstack on F20 setup with Manila sources.

When I am trying to do
manila create --name cinder_vol_share_using_nfs2 --share-network-id
36ec5a17-cef6-44a8-a518-457a6f36faa0 NFS 2

I see the below error in c-vol, due to which, even though my service VM is
started, manila create errors out as the cinder volume is not getting
exported over iSCSI:

2014-06-16 16:39:36.151 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume 8bfd424d-9877-4c20-a9d1-058c06b9bdda: being created as
raw with specification: {'status': u'creating', 'volume_size': 2,
'volume_name': u'volume-8bfd
424d-9877-4c20-a9d1-058c06b9bdda'}
2014-06-16 16:39:36.151 DEBUG cinder.openstack.common.processutils
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf lvcreate -n
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda stack-volumes -L 2g from (pid=4
623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:36.828 INFO cinder.volume.flows.manager.create_volume
[req-15d0b435-f6ce-41cd-ae4a-3851b07cf774 1a7816e5f0144c539192360cdc9672d5
b65a066f32df4aca80fa9a
6d5c795095] Volume volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
(8bfd424d-9877-4c20-a9d1-058c06b9bdda): created successfully
2014-06-16 16:39:38.404 WARNING cinder.context [-] Arguments dropped when
creating context: {'user': u'd9bb59a6a2394483902b382a991ffea2', 'tenant':
u'b65a066f32df4aca80
fa9a6d5c795095', 'user_identity': u'd9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095 - - -'}
2014-06-16 16:39:38.426 DEBUG cinder.volume.manager
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Volume
8bfd424d-9877-4c20-a9d1-058c06b9bdda: creating export from (pid=4623)
initialize_connection /opt/stack/cinder/cinder/volume/manager.py:781
2014-06-16 16:39:38.428 INFO cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Creating iscsi_target for:
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
2014-06-16 16:39:38.440 DEBUG cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Created volume path
/opt/stack/data/cinder/volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda,
content:
<target iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda>
    backing-store /dev/stack-volumes/volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
    lld iscsi
    IncomingUser kZQ6rqqT7W6KGQvMZ7Lr k4qcE3G9g5z7mDWh2woe
</target>
from (pid=4623) create_iscsi_target
/opt/stack/cinder/cinder/brick/iscsi/iscsi.py:183
2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Running cmd (subprocess): sudo cinder-rootwrap
/etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
a from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c
795095] Result was 107 from (pid=4623) execute
/opt/stack/cinder/cinder/openstack/common/processutils.py:167
2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095]
Failed to create iscsi target for volume
id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while
running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
Exit code: 107
Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1
-T
iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited
with code: 107.\n'
Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport
endpoint is not connected\ntgtadm: failed to send request hdr to tgt
daemon, Transport endpoint is not connected\ntgtadm: failed to send request
hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to
send request hdr to tgt daemon, Transport endpoint is not connected\n'
2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher
[req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2
b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed
to create iscsi target for volume
volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda.
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback

Re: [Openstack] [Neutron]Installing openstack on a machine with single interface

2014-06-14 Thread Deepak Shetty
Ageeleshwar,
  This was a good article for neutron beginners like me, thanks.
Looking forward to more articles on neutron, its internals, and how to use it,
with some good explanations of the networking concepts.
For example, in the above article it would have been even better if concepts
like bridge, veth, and vlan were explained in simple terms, for completeness.


On Wed, Jun 11, 2014 at 8:20 PM, Gastón Keller 
wrote:

> On Tue, Jun 10, 2014 at 3:54 AM, Ageeleshwar Kandavelu
>  wrote:
> > Hi All,
> > I have seen several people asking how to set up openstack on a machine
> > with a single NIC card. I have created a blog page for the same. The blog
> > includes some information about openstack networking also.
> >
> >
> http://fosskb.wordpress.com/2014/06/10/managing-openstack-internaldataexternal-network-in-one-interface/
>
> Thanks, Ageeleshwar. I'm currently fighting with this same issue, so
> your post may be of help.
>
> Regards,
>
> --
> Gastón Keller, M.Sc.
> Ph.D. Student
> Department of Computer Science
> Middlesex College
> Western University
> London ON N6A 5B7 Canada
> (519) 661-2111 ext. 83566
> http://digs.csd.uwo.ca/
> http://www.linkedin.com/in/gastonkeller
>


Re: [Openstack] Need help setting up routing to my instances

2014-06-12 Thread Deepak Shetty
Yes, I am getting DHCP addresses on my instance. For example, I have 10.0.0.11
as the IP.
Mine is an all-in-one setup with the nova-network service disabled and the
neutron service enabled, and I cannot ping 8.8.8.8 or anything except 10.x.x.x
IPs from my instances.


On Wed, Jun 11, 2014 at 11:25 PM, Eric Berg 
wrote:

>  What networking are you using?  I found Neutron to be unmanageable and
> fell back to nova, which worked with my small cloud implementation.  I'm
> using one control and one compute host, soon to be 3.
>
> Can you ping out to local IPs or 8.8.8.8?  I'd start with the interfaces
> involved on your compute and control and network hosts.  You have to see
> where the first place you see packets is and then dump the traffic on each
> of the ports going out from the instance in order to see where your packets
> are getting stopped.
>
> Are you getting DHCP addresses on your instances?
>
>
> On 6/11/14, 1:39 PM, Deepak Shetty wrote:
>
> Yup, I did it for the tenant user and admin both, but it still didn't work
> :(
> I can boot my instance and get inside it via the VNC console, but cannot ping
> the instance from the devstack host and vice versa.
> I am assuming it's something to do with the way devstack sets up networking
> that's probably not working correctly... just a guess!
>
> On Wed, Jun 11, 2014 at 10:54 PM, Eric Berg 
> wrote:
>
>>  I had added the icmp and ssh groups as admin, but had to do it as well
>> as the tenant user.  The docs don't seem to speak to the need to do things
>> as admin or tenant much, but I had to run both commands to set up the
>> security group rules for icmp and ssh as both admin and tenant user.
>>
>>
>>
>> On 6/11/14, 12:45 PM, Deepak Shetty wrote:
>>
>> Just to be clear: I have added sec-group rules for ssh and icmp to the
>> default secgroup, and I am using the default secgroup while creating the
>> instance... and yet I am unable to ping and/or ssh to the instance from my
>> devstack host!
>>
>>
>> On Wed, Jun 11, 2014 at 10:15 PM, Deepak Shetty 
>> wrote:
>>
>>> I am actually hitting a similar issue with a devstack setup on F20.
>>> I am able to spawn nova instances and have set up a keypair and sec-groups,
>>> and I use that key and secgroup while spawning the instance.
>>>
>>> My instance boots up fine and has a 10.x.x.x IP. I can get into the
>>> instance using VNC, but cannot ping my host (on which the VM is created)
>>> from inside the instance, and vice versa. I see that sshd is running inside
>>> the instance, and doing ssh root@localhost inside the instance works.
>>>
>>> So what else am I missing that is causing the networking NOT to work?
>>> Anybody have any suggestions?
>>>
>>>
>>> On Wed, Jun 11, 2014 at 7:42 PM, Eric Berg 
>>> wrote:
>>>
>>>> please excuse my stupidity, but this is the fiftieth time I've done an
>>>> install and I had left out the secgroup-add-rule's for icmp and ssh.
>>>>
>>>> I'm good now!!
>>>>
>>>> I certainly appreciate your help, Yugang.
>>>>
>>>>
>>>> On Wed Jun 11 01:52:20 2014, Yugang LIU wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> For Nova-network, You can
>>>>>
>>>>> ping from vm to vm.
>>>>> ping from vm to internet
>>>>>
>>>>> You can not
>>>>> ping from any host to vm exclude host owned vm
>>>>>
>>>>> You need assign a floating ip to VM.
>>>>>
>>>>>
>>>>> Best regards
>>>>>
>>>>> Yugang LIU
>>>>>
>>>>> Keep It Simple, Stupid
>>>>>
>>>>> On 06/11/2014 08:36 AM, Eric Berg wrote:
>>>>>
>>>>>> Update.  I've done a fresh install and am successfully running
>>>>>> instances on my compute host, but, while I can connect out of my
>>>>>> instances just fine, I can't get into them from any host but my
>>>>>> compute host.
>>>>>>
>>>>>> I thought that RDO was going to set me up so that each compute host
>>>>>> handled the routing directly, but it appears that all of my instance's
>>>>>> traffic is routing through a bridge to my control host.
>>>>>>
>>>>>> My compute and control hosts are on a 192.168.0.0/16 network and are
>>>>>> using 192.168.20.0/24 for the instances.
>>>>>>
>

Re: [Openstack] Need help setting up routing to my instances

2014-06-11 Thread Deepak Shetty
Yup, I did it for the tenant user and admin both, but it still didn't work
:(
I can boot my instance and get inside it via the VNC console, but cannot ping
the instance from the devstack host and vice versa.
I am assuming it's something to do with the way devstack sets up networking
that's probably not working correctly... just a guess!


On Wed, Jun 11, 2014 at 10:54 PM, Eric Berg 
wrote:

>  I had added the icmp and ssh groups as admin, but had to do it as well as
> the tenant user.  The docs don't seem to speak to the need to do things as
> admin or tenant much, but I had to run both commands to set up the security
> group rules for icmp and ssh as both admin and tenant user.
>
>
>
> On 6/11/14, 12:45 PM, Deepak Shetty wrote:
>
> Just to be clear: I have added sec-group rules for ssh and icmp to the
> default secgroup, and I am using the default secgroup while creating the
> instance... and yet I am unable to ping and/or ssh to the instance from my
> devstack host!
>
>
> On Wed, Jun 11, 2014 at 10:15 PM, Deepak Shetty 
> wrote:
>
>> I am actually hitting a similar issue with a devstack setup on F20.
>> I am able to spawn nova instances and have set up a keypair and sec-groups,
>> and I use that key and secgroup while spawning the instance.
>>
>> My instance boots up fine and has a 10.x.x.x IP. I can get into the
>> instance using VNC, but cannot ping my host (on which the VM is created)
>> from inside the instance, and vice versa. I see that sshd is running inside
>> the instance, and doing ssh root@localhost inside the instance works.
>>
>> So what else am I missing that is causing the networking NOT to work?
>> Anybody have any suggestions?
>>
>>
>> On Wed, Jun 11, 2014 at 7:42 PM, Eric Berg 
>> wrote:
>>
>>> please excuse my stupidity, but this is the fiftieth time I've done an
>>> install and I had left out the secgroup-add-rule's for icmp and ssh.
>>>
>>> I'm good now!!
>>>
>>> I certainly appreciate your help, Yugang.
>>>
>>>
>>> On Wed Jun 11 01:52:20 2014, Yugang LIU wrote:
>>>
>>>> Hi,
>>>>
>>>> For Nova-network, You can
>>>>
>>>> ping from vm to vm.
>>>> ping from vm to internet
>>>>
>>>> You can not
>>>> ping from any host to vm exclude host owned vm
>>>>
>>>> You need assign a floating ip to VM.
>>>>
>>>>
>>>> Best regards
>>>>
>>>> Yugang LIU
>>>>
>>>> Keep It Simple, Stupid
>>>>
>>>> On 06/11/2014 08:36 AM, Eric Berg wrote:
>>>>
>>>>> Update.  I've done a fresh install and am successfully running
>>>>> instances on my compute host, but, while I can connect out of my
>>>>> instances just fine, I can't get into them from any host but my
>>>>> compute host.
>>>>>
>>>>> I thought that RDO was going to set me up so that each compute host
>>>>> handled the routing directly, but it appears that all of my instance's
>>>>> traffic is routing through a bridge to my control host.
>>>>>
>>>>> My compute and control hosts are on a 192.168.0.0/16 network and are
>>>>> using 192.168.20.0/24 for the instances.
>>>>>
>>>>> How do I get traffic routing into my instance hosts on 192.168.20.0/24
>>>>> on each compute host?  (I only have one now, but will be deploying 2
>>>>> more once I have OpenStack set up.
>>>>>
>>>>> Eric
>>>>>
>>>>>
>>>>>
>>>>> On 6/10/14, 4:53 PM, Eric Berg wrote:
>>>>>
>>>>>> I need some help setting up my network before doing an install of RDO
>>>>>> using nova-networking.  I've got 2 hosts -- one is a control and one
>>>>>> is a compute host.  Each has 2 NICs.
>>>>>>
>>>>>> It's my understanding that I need to configure the network before
>>>>>> doing the install, but I can't find any good docs on just what I have
>>>>>> to do.
>>>>>>
>>>>>> My initial install allowed me to create instances that I could get
>>>>>> into and out of via ssh, ping, etc., but when I created a new tenant
>>>>>> and a network for that tenant, the networking stopped working.
>>>>>>
>>>>>> I used this command to create the network:
>>>

Re: [Openstack] Need help setting up routing to my instances

2014-06-11 Thread Deepak Shetty
I am actually hitting a similar issue with a devstack setup on F20.
I am able to spawn nova instances and have set up a keypair and sec-groups, and
I use that key and secgroup while spawning the instance.

My instance boots up fine and has a 10.x.x.x IP. I can get into the
instance using VNC, but cannot ping my host (on which the VM is created) from
inside the instance, and vice versa. I see that sshd is running inside the
instance, and doing ssh root@localhost inside the instance works.

So what else am I missing that is causing the networking NOT to work? Anybody
have any suggestions?
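For completeness, the secgroup-add-rule commands this thread keeps coming back to look like this (assuming the default secgroup and a wide-open 0.0.0.0/0 CIDR; per the thread, they may need to be run as both admin and the tenant user):

```shell
# Allow ICMP (ping) and tcp/22 (ssh) into instances that use the default secgroup.
# Arguments: <secgroup> <ip-proto> <from-port> <to-port> <cidr>; -1 -1 = all ICMP types.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# Verify the rules took effect:
nova secgroup-list-rules default
```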


On Wed, Jun 11, 2014 at 7:42 PM, Eric Berg  wrote:

> please excuse my stupidity, but this is the fiftieth time I've done an
> install and I had left out the secgroup-add-rule's for icmp and ssh.
>
> I'm good now!!
>
> I certainly appreciate your help, Yugang.
>
>
> On Wed Jun 11 01:52:20 2014, Yugang LIU wrote:
>
>> Hi,
>>
>> For Nova-network, You can
>>
>> ping from vm to vm.
>> ping from vm to internet
>>
>> You can not
>> ping from any host to vm exclude host owned vm
>>
>> You need assign a floating ip to VM.
>>
>>
>> Best regards
>>
>> Yugang LIU
>>
>> Keep It Simple, Stupid
>>
>> On 06/11/2014 08:36 AM, Eric Berg wrote:
>>
>>> Update.  I've done a fresh install and am successfully running
>>> instances on my compute host, but, while I can connect out of my
>>> instances just fine, I can't get into them from any host but my
>>> compute host.
>>>
>>> I thought that RDO was going to set me up so that each compute host
>>> handled the routing directly, but it appears that all of my instance's
>>> traffic is routing through a bridge to my control host.
>>>
>>> My compute and control hosts are on a 192.168.0.0/16 network and are
>>> using 192.168.20.0/24 for the instances.
>>>
>>> How do I get traffic routing into my instance hosts on 192.168.20.0/24
>>> on each compute host?  (I only have one now, but will be deploying 2
>>> more once I have OpenStack set up.
>>>
>>> Eric
>>>
>>>
>>>
>>> On 6/10/14, 4:53 PM, Eric Berg wrote:
>>>
 I need some help setting up my network before doing an install of RDO
 using nova-networking.  I've got 2 hosts -- one is a control and one
 is a compute host.  Each has 2 NICs.

 It's my understanding that I need to configure the network before
 doing the install, but I can't find any good docs on just what I have
 to do.

 My initial install allowed me to create instances that I could get
 into and out of via ssh, ping, etc., but when I created a new tenant
 and a network for that tenant, the networking stopped working.

 I used this command to create the network:

 "nova network-create ruby-net --bridge br100 --multi-host T
 --fixed-range-v4 192.168.20.0/24"

 While I found more documentation for neutron, I'm not finding much
 for nova.  I have the following questions:

 1) how should I set up my network interfaces on the control and
 compute host for a nova-networking installation?
 2) where are the docs for installation (including such prep as
 above), as well as post-install tenant set-up for this type of network?

 Thanks for your consideration.

 Eric


>>>
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>>
>
> --
> Eric Berg
> Sr. Software Engineer
> Rubenstein Technology Group
> 55 Broad Street, 14th Floor
> New York, NY 10004-2501
>
> (212) 518-6400
> (212) 518-6467 fax
> eb...@rubensteintech.com
> www.rubensteintech.com
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
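
Yugang's advice above (assign a floating IP) looks roughly like this with
the nova-network CLI; the VM name and example address are hypothetical, and
the address really comes from the create call's output:

```shell
# Allocate a floating IP and attach it to the instance; external hosts
# can then reach the VM on the floating address instead of the fixed one.
nova floating-ip-create
nova add-floating-ip mynewvm 192.168.0.225   # hypothetical VM name and IP
nova list   # the floating IP now shows alongside the fixed address
```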


Re: [Openstack] Need help setting up routing to my instances

2014-06-11 Thread Deepak Shetty
Just to be clear: I have added security-group rules for ssh and icmp to the
default secgroup, and I am using the default secgroup while creating the
instance, and yet I am unable to ping and/or ssh the instance from my
devstack host!


On Wed, Jun 11, 2014 at 10:15 PM, Deepak Shetty  wrote:

> I am actually hitting a similar issue with devstack setup on F20
> I am able to spawn Nova instances and have setup keypair and sec-groups
> and using those key and secgroup while spawning the instance
>
> My instance boots up fine and has a 10.x.x.x IP.. I can get into the
> instance usign VNC.. but cannot ping my host (On which VM is created) from
> inside the instnace and vice versa. I see that sshd is running inside the
> instnace and doing ssh root@localhost in the instnace works
>
> So what else am I missing for the networking NOT to work ? ANy body has
> any suggestions ?
>
>
> On Wed, Jun 11, 2014 at 7:42 PM, Eric Berg 
> wrote:
>
>> please excuse my stupidity, but this is the fiftieth time I've done an
>> install and I had left out the secgroup-add-rule's for icmp and ssh.
>>
>> I'm good now!!
>>
>> I certainly appreciate your help, Yugang.
>>
>>
>> On Wed Jun 11 01:52:20 2014, Yugang LIU wrote:
>>
>>> Hi,
>>>
>>> For Nova-network, You can
>>>
>>> ping from vm to vm.
>>> ping from vm to internet
>>>
>>> You can not
>>> ping from any host to vm exclude host owned vm
>>>
>>> You need assign a floating ip to VM.
>>>
>>>
>>> Best regards
>>>
>>> Yugang LIU
>>>
>>> Keep It Simple, Stupid
>>>
>>> On 06/11/2014 08:36 AM, Eric Berg wrote:
>>>
>>>> Update.  I've done a fresh install and am successfully running
>>>> instances on my compute host, but, while I can connect out of my
>>>> instances just fine, I can't get into them from any host but my
>>>> compute host.
>>>>
>>>> I thought that RDO was going to set me up so that each compute host
>>>> handled the routing directly, but it appears that all of my instance's
>>>> traffic is routing through a bridge to my control host.
>>>>
>>>> My compute and control hosts are on a 192.168.0.0/16 network and are
>>>> using 192.168.20.0/24 for the instances.
>>>>
>>>> How do I get traffic routing into my instance hosts on 192.168.20.0/24
>>>> on each compute host?  (I only have one now, but will be deploying 2
>>>> more once I have OpenStack set up.
>>>>
>>>> Eric
>>>>
>>>>
>>>>
>>>> On 6/10/14, 4:53 PM, Eric Berg wrote:
>>>>
>>>>> I need some help setting up my network before doing an install of RDO
>>>>> using nova-networking.  I've got 2 hosts -- one is a control and one
>>>>> is a compute host.  Each has 2 NICs.
>>>>>
>>>>> It's my understanding that I need to configure the network before
>>>>> doing the install, but I can't find any good docs on just what I have
>>>>> to do.
>>>>>
>>>>> My initial install allowed me to create instances that I could get
>>>>> into and out of via ssh, ping, etc., but when I created a new tenant
>>>>> and a network for that tenant, the networking stopped working.
>>>>>
>>>>> I used this command to create the network:
>>>>>
>>>>> "nova network-create ruby-net --bridge br100 --multi-host T
>>>>> --fixed-range-v4 192.168.20.0/24"
>>>>>
>>>>> While I found more documentation for neutron, I'm not finding much
>>>>> for nova.  I have the following questions:
>>>>>
>>>>> 1) how should I set up my network interfaces on the control and
>>>>> compute host for a nova-networking installation?
>>>>> 2) where are the docs for installation (including such prep as
>>>>> above), as well as post-install tenant set-up for this type of network?
>>>>>
>>>>> Thanks for your consideration.
>>>>>
>>>>> Eric
>>>>>
>>>>>
>>>>
>>>
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>> openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
>>> openstack
>>>
>>
>> --
>> Eric Berg
>> Sr. Software Engineer
>> Rubenstein Technology Group
>> 55 Broad Street, 14th Floor
>> New York, NY 10004-2501
>>
>> (212) 518-6400
>> (212) 518-6467 fax
>> eb...@rubensteintech.com
>> www.rubensteintech.com
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
>> openstack
>>
>
>
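
As a sanity check on the fixed range discussed in this thread
(--fixed-range-v4 192.168.20.0/24), the address math can be verified with
Python's stdlib ipaddress module:

```python
import ipaddress

# The tenant network created with --fixed-range-v4 192.168.20.0/24
net = ipaddress.ip_network("192.168.20.0/24")

# 256 addresses total, 254 usable hosts (network + broadcast excluded)
hosts = list(net.hosts())
print(net.num_addresses, len(hosts))   # 256 254
print(hosts[0], hosts[-1])             # 192.168.20.1 192.168.20.254

# The instance range is a proper subnet of the 192.168.0.0/16 host network,
# so the hosts can route to it once a gateway or floating IP is in place.
print(net.subnet_of(ipaddress.ip_network("192.168.0.0/16")))  # True
```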


Re: [Openstack] [Nova] How is nova flavour related to VNC usage ?

2014-06-11 Thread Deepak Shetty
Thanks, switching from virt_type=kvm to qemu works!
I spent close to 4 days on this :(
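
The change being credited here is a one-line edit to nova's configuration;
a sketch of the relevant fragment (on Icehouse-era nova the option sits in
the [libvirt] section of /etc/nova/nova.conf, which is an assumption to
verify against your tree):

```ini
[libvirt]
# qemu (plain TCG emulation) sidesteps the nested-KVM hang,
# at the cost of noticeably slower guests
virt_type = qemu
```

Restart the n-cpu service (or the devstack screen session) afterwards so
the change takes effect.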


On Wed, Jun 11, 2014 at 7:50 PM, Lars Kellogg-Stedman 
wrote:

> On Wed, Jun 11, 2014 at 11:24:04AM +0530, Deepak Shetty wrote:
> >Thanks that helped, just curious, why does it need to use libguestfs ?
> > Why can't it just use qemu-img resize to check ?
>
> Nova is checking whether or not the given disk image has partitions.
> If has partitions, it cannot be blindly resized.  qemu-img cannot
> perform this check.
>
> This is from memory; for authoritative details, see the source:
> nova/virt/disk/api.py, "is_image_partitionless".
>
> > Also in my devstack setup i see that my Nova VM is stuck at this step..
> so
> > it looks like nova libvirt driver debug says "Creating image" in the
> n-cpu
> > log
>
> I have run into this bug when attempting to use nested KVM.  There's a
> bug report here:
>
>   https://bugs.launchpad.net/nova/+bug/1286256
>
> --
> Lars Kellogg-Stedman  | larsks @ irc
> Cloud Engineering / OpenStack  | "   "  @ twitter
>
>


Re: [Openstack] [Nova] How is nova flavour related to VNC usage ?

2014-06-10 Thread Deepak Shetty
Lars,
   Thanks, that helped. Just curious: why does it need to use libguestfs?
Why can't it just use qemu-img resize to check?
Also, in my devstack setup I see that my Nova VM is stuck at this step: the
nova libvirt driver logs "Creating image" in the n-cpu log and then gets
stuck there, which leaves my instance state at 'spawning' forever. Any idea
why libguestfs might never return, or cause the nova instance to hang at
BUILD/spawning status?

I am on F20+devstack combo

thanx,
deepak



On Tue, Jun 10, 2014 at 7:57 PM, Lars Kellogg-Stedman 
wrote:

> On Tue, Jun 10, 2014 at 06:03:14PM +0530, Deepak Shetty wrote:
> > When i use m1.tiny flavour
> > I see that qemu process does not have -vnc and instance name is
> guestfs-XXX
> > and uses libguestfs stuff
>
> Prior to booting your instance, Nova uses libguestfs to check the image
> to see whether it can be dynamically resized or not.  Libguests starts
> up a qemu instance, which is what you are seeing.
>
> This guestfs-XXX instance should disappear once Nova has finished
> inspecting the image and you should see the normal Nova instance with
> the -vnc flag, etc.
>
> --
> Lars Kellogg-Stedman  | larsks @ irc
> Cloud Engineering / OpenStack  | "   "  @ twitter
>
>
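
Lars's description of the check (libguestfs inspecting the image for
partitions before Nova decides whether it can resize it) can be reproduced
by hand with the libguestfs command-line tools, assuming the
libguestfs-tools package is installed:

```shell
# Create a throwaway image and ask libguestfs what is inside it.
# An image whose filesystem sits directly on the device (no partition
# table) is the kind Nova can resize blindly with qemu-img alone.
qemu-img create -f qcow2 /tmp/probe.qcow2 64M
virt-filesystems --long --all -a /tmp/probe.qcow2
```

This is also a quick way to see a guestfs-* qemu appliance start and hang
on hosts where nested KVM is broken, mirroring the bug referenced above.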


Re: [Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-10 Thread Deepak Shetty
I got one breakthrough, though:
if I use the flavours m1.micro or m1.nano, they work with the devstack
built-in image (cirros-0.3.2-x86_64-uec),
but the moment I change it to m1.tiny it gets stuck in the 'spawning' state.

So it looks like the disk size =  in the flavour is causing this to
happen?


On Tue, Jun 10, 2014 at 4:21 PM, Deepak Shetty  wrote:

> Sergey,
>   Do you have any other suggestions for me here ? :)
>
>
> On Mon, Jun 9, 2014 at 6:55 PM, Deepak Shetty  wrote:
>
>> No errors in instance-0001.log.. just that it says qemu terminating
>> on signal 15. which is fine and expected i feel.
>> I had virt_type = qemu all the while now, maybe i will try with virt_type
>> = kvm now
>>
>> Am i the only one seeing this issue.. if yes, then I am wondering if
>> there is somethign really silly in my setup ! :)
>>
>>
>> On Mon, Jun 9, 2014 at 4:53 PM, Sergey Kolekonov > > wrote:
>>
>>> Are there any errors in instance-0001.log and other log files?
>>> Try to change virt_type = kvm to virt_type = qemu in
>>> /etc/nova/nova.conf to exclude kvm problems
>>>
>>>
>>> On Mon, Jun 9, 2014 at 2:28 PM, Deepak Shetty 
>>> wrote:
>>>
>>>> Hi sergey
>>>>Don't see the .log in qemu/instnace-XXX at all
>>>>
>>>> But there is more...
>>>>
>>>> [stack@devstack-large-vm ~]$ [ready] nova show
>>>> *bc598be6-999b-4ab9-8a0a-1c4d29dd811a*
>>>>
>>>> +--++
>>>> | Property |
>>>> Value  |
>>>>
>>>> +--++
>>>> | OS-DCF:diskConfig|
>>>> MANUAL |
>>>> | OS-EXT-AZ:availability_zone  |
>>>> nova   |
>>>> | OS-EXT-SRV-ATTR:host |
>>>> devstack-large-vm.localdomain  |
>>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  |
>>>> devstack-large-vm.localdomain  |
>>>> *| OS-EXT-SRV-ATTR:instance_name| instance-000a *
>>>> |
>>>> | OS-EXT-STS:power_state   |
>>>> 0  |
>>>> | OS-EXT-STS:task_state|
>>>> spawning   |
>>>> | OS-EXT-STS:vm_state  |
>>>> building   |
>>>> | OS-SRV-USG:launched_at   |
>>>> -  |
>>>> | OS-SRV-USG:terminated_at |
>>>> -  |
>>>> | accessIPv4
>>>> ||
>>>> | accessIPv6
>>>> ||
>>>> | config_drive
>>>> ||
>>>> | created  |
>>>> 2014-06-09T10:18:53Z   |
>>>> | flavor   | m1.tiny
>>>> (1)|
>>>> | hostId   |
>>>> 393d36ca7cc7c30957286e775b5808c8103b4b7be1af4f2163cc2fe4   |
>>>> | id   |
>>>> bc598be6-999b-4ab9-8a0a-1c4d29dd811a   |
>>>> | image| cirros-0.3.2-x86_64-uec
>>>> (99471623-273c-46ed-b1b0-5a58605aab76) |
>>>> | key_name |
>>>> -  |
>>>> | metadata |
>>>> {} |
>>>> | name |
>>>> mynewvm|
>>>> | os-extended-volumes:volumes_attach

[Openstack] [Nova] How is nova flavour related to VNC usage ?

2014-06-10 Thread Deepak Shetty
Hi,
   I am using devstack and spawned a Nova instance using the default
cirros-0.3.2-x86_64-uec image that comes along with devstack.

When I use the m1.micro or m1.nano flavour,
I see that the qemu process uses -vnc  and the instance name is
instance-X,

but

when I use the m1.tiny flavour,
I see that the qemu process does not have -vnc, the instance name is
guestfs-XXX, and it uses libguestfs stuff.

I am trying to understand why changing the flavours causes the above to
happen. How are they related?

thanx,
deepak
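
One way to observe the two behaviours asked about above is to compare the
running domains and qemu command lines while the instance is building; the
grep patterns are assumptions about typical process listings:

```shell
# During libguestfs inspection the domain shows up as guestfs-*;
# once Nova actually boots the guest it appears as instance-XXXXXXXX
# and its qemu command line carries a -vnc argument.
virsh list --all
ps -ef | grep '[q]emu' | grep -o '\-name [^ ]*'
ps -ef | grep '[q]emu' | grep -o '\-vnc [^ ]*'
```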


Re: [Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-10 Thread Deepak Shetty
Sergey,
  Do you have any other suggestions for me here ? :)


On Mon, Jun 9, 2014 at 6:55 PM, Deepak Shetty  wrote:

> No errors in instance-0001.log.. just that it says qemu terminating on
> signal 15. which is fine and expected i feel.
> I had virt_type = qemu all the while now, maybe i will try with virt_type
> = kvm now
>
> Am i the only one seeing this issue.. if yes, then I am wondering if there
> is somethign really silly in my setup ! :)
>
>
> On Mon, Jun 9, 2014 at 4:53 PM, Sergey Kolekonov 
> wrote:
>
>> Are there any errors in instance-0001.log and other log files?
>> Try to change virt_type = kvm to virt_type = qemu in /etc/nova/nova.conf
>> to exclude kvm problems
>>
>>
>> On Mon, Jun 9, 2014 at 2:28 PM, Deepak Shetty 
>> wrote:
>>
>>> Hi sergey
>>>Don't see the .log in qemu/instnace-XXX at all
>>>
>>> But there is more...
>>>
>>> [stack@devstack-large-vm ~]$ [ready] nova show
>>> *bc598be6-999b-4ab9-8a0a-1c4d29dd811a*
>>>
>>> +--++
>>> | Property |
>>> Value  |
>>>
>>> +--++
>>> | OS-DCF:diskConfig|
>>> MANUAL |
>>> | OS-EXT-AZ:availability_zone  |
>>> nova   |
>>> | OS-EXT-SRV-ATTR:host |
>>> devstack-large-vm.localdomain  |
>>> | OS-EXT-SRV-ATTR:hypervisor_hostname  |
>>> devstack-large-vm.localdomain  |
>>> *| OS-EXT-SRV-ATTR:instance_name| instance-000a *
>>> |
>>> | OS-EXT-STS:power_state   |
>>> 0  |
>>> | OS-EXT-STS:task_state|
>>> spawning   |
>>> | OS-EXT-STS:vm_state  |
>>> building   |
>>> | OS-SRV-USG:launched_at   |
>>> -  |
>>> | OS-SRV-USG:terminated_at |
>>> -  |
>>> | accessIPv4
>>> ||
>>> | accessIPv6
>>> ||
>>> | config_drive
>>> ||
>>> | created  |
>>> 2014-06-09T10:18:53Z   |
>>> | flavor   | m1.tiny
>>> (1)|
>>> | hostId   |
>>> 393d36ca7cc7c30957286e775b5808c8103b4b7be1af4f2163cc2fe4   |
>>> | id   |
>>> bc598be6-999b-4ab9-8a0a-1c4d29dd811a   |
>>> | image| cirros-0.3.2-x86_64-uec
>>> (99471623-273c-46ed-b1b0-5a58605aab76) |
>>> | key_name |
>>> -  |
>>> | metadata |
>>> {} |
>>> | name |
>>> mynewvm|
>>> | os-extended-volumes:volumes_attached |
>>> [] |
>>> | private network  |
>>> 10.0.0.2   |
>>> | progress |
>>> 0  |
>>> | security_groups  |
>>> default|
>>> | status   |
>>> BUILD  |
>>> | tenant_id|
>>> 1d0dedb6e6f344258a1ab44df3fcd4ee   |
>>> | updated

Re: [Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-09 Thread Deepak Shetty
No errors in instance-0001.log; it just says qemu is terminating on
signal 15, which I feel is fine and expected.
I have had virt_type = qemu all along; maybe I will try virt_type = kvm now.

Am I the only one seeing this issue? If yes, then I am wondering if there
is something really silly in my setup! :)


On Mon, Jun 9, 2014 at 4:53 PM, Sergey Kolekonov 
wrote:

> Are there any errors in instance-0001.log and other log files?
> Try to change virt_type = kvm to virt_type = qemu in /etc/nova/nova.conf
> to exclude kvm problems
>
>
> On Mon, Jun 9, 2014 at 2:28 PM, Deepak Shetty  wrote:
>
>> Hi sergey
>>Don't see the .log in qemu/instnace-XXX at all
>>
>> But there is more...
>>
>> [stack@devstack-large-vm ~]$ [ready] nova show
>> *bc598be6-999b-4ab9-8a0a-1c4d29dd811a*
>>
>> +--++
>> | Property |
>> Value  |
>>
>> +--++
>> | OS-DCF:diskConfig|
>> MANUAL |
>> | OS-EXT-AZ:availability_zone  |
>> nova   |
>> | OS-EXT-SRV-ATTR:host |
>> devstack-large-vm.localdomain  |
>> | OS-EXT-SRV-ATTR:hypervisor_hostname  |
>> devstack-large-vm.localdomain  |
>> *| OS-EXT-SRV-ATTR:instance_name| instance-000a *
>> |
>> | OS-EXT-STS:power_state   |
>> 0  |
>> | OS-EXT-STS:task_state|
>> spawning   |
>> | OS-EXT-STS:vm_state  |
>> building   |
>> | OS-SRV-USG:launched_at   |
>> -  |
>> | OS-SRV-USG:terminated_at |
>> -  |
>> | accessIPv4
>> ||
>> | accessIPv6
>> ||
>> | config_drive
>> ||
>> | created  |
>> 2014-06-09T10:18:53Z   |
>> | flavor   | m1.tiny
>> (1)|
>> | hostId   |
>> 393d36ca7cc7c30957286e775b5808c8103b4b7be1af4f2163cc2fe4   |
>> | id   |
>> bc598be6-999b-4ab9-8a0a-1c4d29dd811a   |
>> | image| cirros-0.3.2-x86_64-uec
>> (99471623-273c-46ed-b1b0-5a58605aab76) |
>> | key_name |
>> -  |
>> | metadata |
>> {} |
>> | name |
>> mynewvm|
>> | os-extended-volumes:volumes_attached |
>> [] |
>> | private network  |
>> 10.0.0.2   |
>> | progress |
>> 0  |
>> | security_groups  |
>> default|
>> | status   |
>> BUILD  |
>> | tenant_id|
>> 1d0dedb6e6f344258a1ab44df3fcd4ee   |
>> | updated  |
>> 2014-06-09T10:18:53Z   |
>> | user_id  |
>> 730ccd3aaf7140789f4929edd71d3d31   |
>>
>> +--++
>> [stack@devstack-large-vm ~]$ [ready] ps aux| grep qemu| grep instance
>> stack11388 99.1  0.9 1054588 36884 ?   Sl   10:18   
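
When polling a stuck instance, the `nova show` tables pasted in this thread
can be scraped with a few lines of stdlib Python; the sample rows below are
a condensed, hypothetical version of the output above:

```python
def parse_nova_show(text):
    """Parse `nova show` table output into a dict of property -> value."""
    props = {}
    for line in text.splitlines():
        if not line.startswith("|"):
            continue  # skip +---+ borders and stray text
        cells = [c.strip(" *") for c in line.strip("|").split("|")]
        if len(cells) == 2 and cells[0] and cells[0] != "Property":
            props[cells[0]] = cells[1]
    return props

sample = """
+--------------------------+----------------+
| Property                 | Value          |
+--------------------------+----------------+
| OS-EXT-STS:task_state    | spawning       |
| OS-EXT-STS:vm_state      | building       |
| status                   | BUILD          |
+--------------------------+----------------+
"""
info = parse_nova_show(sample)
print(info["OS-EXT-STS:task_state"], info["status"])  # spawning BUILD
```

Note the parser assumes each property fits on one line; the wrapped rows in
the emails above would need to be re-joined first.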

Re: [Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-09 Thread Deepak Shetty
scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0
-chardev socket,id=charserial0,path=/tmp/libguestfsagFFSQ/console.sock
-device isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,path=/tmp/libguestfsagFFSQ/guestfsd.sock -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

[root@devstack-large-vm qemu]# ls -lt
total 84
-rw---. 1 root root  8279 Jun  9 09:59 instance-0001.log
-rw---. 1 root root  2070 Jun  9 09:29 instance-0007.log
-rw---. 1 root root 63444 Jun  6 09:20 instance-0004.log
-rw---. 1 root root  2071 Jun  6 07:12 instance-0002.log
[root@devstack-large-vm qemu]# date
Mon Jun  9 10:24:32 UTC 2014

^^ No log for instance-000a above ^^

So the qemu process above is using the disk from my instance, but its -name
is not instance-X, even though
*OS-EXT-SRV-ATTR:instance_name| instance-000a *
shows 000a as the instance name.

virsh shows it as running...

[stack@devstack-large-vm ~]$ [ready] virsh list
 Id    Name                          State
----------------------------------------------------
* 2     guestfs-tgx65m4jvhgmu8la      running*

thanx,
deepak


On Mon, Jun 9, 2014 at 3:50 PM, Sergey Kolekonov 
wrote:

> Hi,
>
> have you inspected /var/log/libvirt/qemu/instance-.log files?
> Such problem can be connected with your hypervisor. Especially if you use
> nested kvm virtualization on your host machine.
>
>
> On Mon, Jun 9, 2014 at 2:06 PM, Deepak Shetty  wrote:
>
>> (Hit send by mistake.. continuing below)
>>
>>
>> On Mon, Jun 9, 2014 at 3:32 PM, Deepak Shetty 
>> wrote:
>>
>>> Hi All,
>>>The last time I sent this issue, I didn't had the right tags in the
>>> subject so sending with the right tags, in the hope that the right folks
>>> might help provide some clues.
>>>
>>> I am usign latest devstack on F20 and I see VMs stuck in 'spawning'
>>> state.
>>> No errors in n-sch, n-cpu and I checked other n-* screen logs too, no
>>> errors whatsoever
>>>
>>> There is enuf memory and disk space available for the tiny and small
>>> flavour VMs I am trying to run
>>>
>>>
>> 2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free ram
>> (MB): 1905
>> 2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free disk
>> (GB): 46
>> 2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] Free
>> VCPUS: 1
>> 2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] PCI
>> stats: []
>>
>> The only thing that i see other than INFO/DEbug is the below
>> 2014-06-09 09:57:42.700 WARNING nova.compute.manager [-] Found 3 in the
>> database and 0 on the hypervisor.
>>
>> What does that mean ? Does that say that Nova sees 3 VMs in its DB but a
>> query to the hyp (via libvirt I assume) returns none ?
>>
>> Interestingly I can see the qemu process for my VM that nova says is
>> stuck in Spawning state and virsh list also lists' it
>> But i don't see the libvirt.xml for this stuck VM (from nova's
>> perspective) in the /opt/stack/data/nova/instances/ folder.
>>
>> I was trying to figure what this means ? Where is nova stuck ? Screen
>> logs just doesn't point to anything useful to debug further.
>>
>> From the n-cpu logs i see the "Creating image" info log from
>> nova.virt.libvirt and thats the last
>> I don't see the info log "Creating isntance" from nova.virt.. so it looks
>> like its stuck somewhere between creating image and spawning instance.. but
>> virsh list and qemu process says otherwise
>>
>> Looking for some debug hints here
>>
>> thanx,
>> deepak
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>


Re: [Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-09 Thread Deepak Shetty
(Hit send by mistake.. continuing below)


On Mon, Jun 9, 2014 at 3:32 PM, Deepak Shetty  wrote:

> Hi All,
>The last time I sent this issue, I didn't had the right tags in the
> subject so sending with the right tags, in the hope that the right folks
> might help provide some clues.
>
> I am usign latest devstack on F20 and I see VMs stuck in 'spawning' state.
> No errors in n-sch, n-cpu and I checked other n-* screen logs too, no
> errors whatsoever
>
> There is enuf memory and disk space available for the tiny and small
> flavour VMs I am trying to run
>
>
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free ram
(MB): 1905
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free disk
(GB): 46
2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] Free VCPUS:
1
2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] PCI stats:
[]

The only thing that i see other than INFO/DEbug is the below
2014-06-09 09:57:42.700 WARNING nova.compute.manager [-] Found 3 in the
database and 0 on the hypervisor.

What does that mean? Does it mean that Nova sees 3 VMs in its DB but a
query to the hypervisor (via libvirt, I assume) returns none?

Interestingly, I can see the qemu process for the VM that nova says is
stuck in the spawning state, and virsh list also lists it.
But I don't see the libvirt.xml for this stuck VM (from nova's perspective)
in the /opt/stack/data/nova/instances/ folder.

I was trying to figure out what this means. Where is nova stuck? The screen
logs just don't point to anything useful to debug further.

From the n-cpu logs I see the "Creating image" info log from
nova.virt.libvirt, and that is the last one.
I don't see the "Creating instance" info log from nova.virt, so it looks
like it is stuck somewhere between creating the image and spawning the
instance, but virsh list and the qemu process say otherwise.

Looking for some debug hints here.

thanx,
deepak
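
The free-resource AUDIT lines quoted above can likewise be pulled out of
the n-cpu log with a small regex; the sample lines are copied from this
message, and the pattern is an assumption about the log format:

```python
import re

log = """\
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 1905
2014-06-09 09:57:19.935 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 46
2014-06-09 09:57:19.936 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1
"""

# Capture the resource name, optional unit in parentheses, and the value.
free = {}
for m in re.finditer(r"Free (\w+)(?: \((\w+)\))?: (\d+)", log):
    name, unit, value = m.groups()
    free[name.lower()] = (int(value), unit or "")

print(free)  # {'ram': (1905, 'MB'), 'disk': (46, 'GB'), 'vcpus': (1, '')}
```

Numbers like these confirm the scheduler had room, pointing the blame at
the spawn path rather than resource claims.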


[Openstack] [devstack][Nova] Unable to spawn VMs

2014-06-09 Thread Deepak Shetty
Hi All,
   The last time I sent this issue I didn't have the right tags in the
subject, so I am resending with the right tags, in the hope that the right
folks might help provide some clues.

I am using the latest devstack on F20 and I see VMs stuck in the 'spawning'
state. There are no errors in n-sch or n-cpu, and I checked the other n-*
screen logs too: no errors whatsoever.

There is enough memory and disk space available for the tiny and small
flavour VMs I am trying to run.


[Openstack] Error creating Nova instances

2014-06-05 Thread Deepak Shetty
Hi,
   I am getting the below error when spawning Nova instances. I freshly
uploaded a new Ubuntu cloud image into glance, and using that I get the
issue below.


2014-06-05 09:18:19.759 AUDIT nova.compute.manager
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Starting instance...
2014-06-05 09:18:19.827 DEBUG nova.openstack.common.lockutils
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
"compute_resources" from (pid=22474) lock
/opt/stack/nova/nova/openstack/common/lockutils.py:168
2014-06-05 09:18:19.827 DEBUG nova.openstack.common.lockutils
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
"instance_claim" from (pid=22474) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:248
2014-06-05 09:18:19.828 WARNING nova.compute.resource_tracker
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Host field should not be set on the
instance until resources have been
claimed.
2014-06-05 09:18:19.828 WARNING nova.compute.resource_tracker
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Node field should not be set on the
instance until resources have been
claimed.
2014-06-05 09:18:19.828 DEBUG nova.compute.resource_tracker
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Memory overhead for
2048 MB instance; 0 MB from (pid=22474) instance_claim
/opt/stack/nova/nova/compute/resource_tracker.p
y:119
2014-06-05 09:18:19.830 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Attempting claim: memory 2048 MB,
disk 20 GB, VCPUs 1
2014-06-05 09:18:19.830 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total memory: 3953 MB, used: 512.00 MB
2014-06-05 09:18:19.831 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] memory limit not specified,
defaulting to unlimited
2014-06-05 09:18:19.831 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total disk: 9 GB, used: 0.00 GB
2014-06-05 09:18:19.831 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] disk limit not specified, defaulting
to unlimited
2014-06-05 09:18:19.831 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Total CPUs: 2 VCPUs, used: 0.00 VCPUs
2014-06-05 09:18:19.832 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] CPUs limit not specified, defaulting
to unlimited
2014-06-05 09:18:19.832 AUDIT nova.compute.claims
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] [instance:
aa69fd1e-184d-48f6-934c-855e8b02ea4f] Claim successful
2014-06-05 09:18:19.898 DEBUG nova.openstack.common.lockutils
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Semaphore / lock
released "instance_claim" from (pid=22474) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:252
2014-06-05 09:18:20.016 DEBUG nova.network.base_api
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Updating cache with
info: [VIF({'ovs_interfaceid': None, 'network': Network({'bridge':
u'br100', 'subnets': [Subnet({'ips': [Fixed
IP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [],
'address': u'10.0.0.2'})], 'version': 4, 'meta': {u'dhcp_server':
u'10.0.0.1'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns',
'address': u'8.8.4.4'})], 'routes':
 [], 'cidr': u'10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4,
'type': u'gateway', 'address': u'10.0.0.1'})}), Subnet({'ips': [],
'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [],
'cidr': None, 'gateway': I
P({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})],
'meta': {u'tenant_id': None, u'should_create_bridge': True,
u'bridge_interface': u'eth0'}, 'id':
u'f1f132df-9b29-4db9-b7b6-433ffc50e14c', 'label': u'private'}), 'd
evname': None, 'qbh_params': None, 'meta': {}, 'details': {}, 'address':
u'fa:16:3e:28:c7:10', 'active': False, 'type': u'bridge', 'id':
u'1d9e192e-cb5e-46b4-b008-82965290b709', 'qbg_params': None})] from
(pid=22474) update_instance_cach
e_with_nw_info /opt/stack/nova/nova/network/base_api.py:37
2014-06-05 09:18:20.075 DEBUG nova.openstack.common.lockutils
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore
"compute_resources" from (pid=22474) lock
/opt/stack/nova/nova/openstack/common/lockutils.py:168
2014-06-05 09:18:20.075 DEBUG nova.openstack.common.lockutils
[req-9e9861b8-e788-4892-976a-56229ac61694 admin admin] Got semaphore / lock
"update_usage" from (pid=22474) inner
/opt/stack