Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I ran into similar behavior once, and it turned out I was running out
of space on the system.  This blog post helped me track down the
problem:
http://porkrind.org/missives/libvirt-based-qemu-vm-pausing-by-itself/

Not sure whether it's relevant to your situation, but it's something
to check.
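
If it is the same thing, a quick check on the compute host would be something
like the following (the instance name is taken from the nova show output below;
the data paths are just the usual devstack/libvirt defaults, so adjust to your
setup):

    df -h / /opt/stack/data /var/lib/libvirt
    sudo virsh list --all
    sudo virsh domstate instance-0001 --reason
    # any block I/O errors (e.g. no space left) tend to show up here
    sudo tail -n 50 /var/log/libvirt/qemu/instance-0001.log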

- -Ben

On 12/31/2014 09:41 AM, Paul Michali (pcm) wrote:
 Not sure if I’m going crazy or what. I’m using DevStack and, after
 stacking, I tried booting Cirros 0.3.2, Cirros 0.3.3, and Ubuntu cloud 14.04
 images. Each time, the instance ends up in the PAUSED power state:
 
 ubuntu@juno:/opt/stack/neutron$ nova show peter
 +--------------------------------------+----------------------------------------------------------------+
 | Property                             | Value                                                          |
 +--------------------------------------+----------------------------------------------------------------+
 | OS-DCF:diskConfig                    | MANUAL                                                         |
 | OS-EXT-AZ:availability_zone          | nova                                                           |
 | OS-EXT-SRV-ATTR:host                 | juno                                                           |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0001                                                  |
 | OS-EXT-STS:power_state               | 3                                                              |
 | OS-EXT-STS:task_state                | -                                                              |
 | OS-EXT-STS:vm_state                  | active                                                         |
 | OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.00                                         |
 | OS-SRV-USG:terminated_at             | -                                                              |
 | accessIPv4                           |                                                                |
 | accessIPv6                           |                                                                |
 | config_drive                         |                                                                |
 | created                              | 2014-12-31T15:15:24Z                                           |
 | flavor                               | m1.tiny (1)                                                    |
 | hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
 | id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
 | image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
 | key_name                             | -                                                              |
 | metadata                             | {}                                                             |
 | name                                 | peter                                                          |
 | os-extended-volumes:volumes_attached | []                                                             |
 | private network                      | 10.0.0.4                                                       |
 | progress                             | 0                                                              |
 | security_groups                      | default                                                        |
 | status                               | ACTIVE                                                         |
 | tenant_id                            | 7afb5bc1d88d462c8d57178437d3c277                               |
 | updated                              | 2014-12-31T15:15:34Z                                           |
 | user_id                              | 4ff18bdbeb4d436ea4ff1bcd29e269a9                               |
 +--------------------------------------+----------------------------------------------------------------+
 
 ubuntu@juno:/opt/stack/neutron$ nova list
 +--------------------------------------+-------+--------+------------+-------------+------------------+
 | ID                                   | Name  | Status | Task State | Power State | Networks         |
 +--------------------------------------+-------+--------+------------+-------------+------------------+
 | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | -          | Paused      | private=10.0.0.4 |
 +--------------------------------------+-------+--------+------------+-------------+------------------+
 
 
 I don’t see this with the latest Kilo images. Any idea what I may be
 doing wrong, or whether there is an issue (I didn’t see anything in a
 Google search)?
 
 IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2`
 PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`
 
 nova boot peter --flavor 3 --image $IMAGE_ID --user-data ~/devstack/user_data.txt --nic net-id=$PRIVATE_NET
 nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=$PRIVATE_NET paul
 
 Thanks.
 
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUpryOAAoJEDehGd0Fy7uqVQgH/3EmJBE2Z8DMQqCqhHFLat5b
H34R2sXz0ODP+X6nu9MykXjTk7O/zDW9aSgW8nNRa7pbyZm+R0AOTpqcc3P1T6uE
zZ6LqL+d8GEVaC4BNIrnCO3Ip3hDmhr+HQcAZa0LYdgxF4/Oc9merycTy5UDzwbZ
hcUwULr4OdnJqdkcnp1XfqfEKsRWi7varkj6nnuB46dOJBeH8Tmr/9NTBo+veglK
kpmKISuH+TyWpjZekmkRpPq97vEQ1pxBeJcqHfhF5x5q14CleN51JKg8J7xckcmO
XS9rrQtHjMxIB86b6oAAfKlTGhz9FsZ7c1C3QmCxco3NqCtFtCoXsFoYhiLoyIo=
=4AwW
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread Paul Michali (pcm)
These VMs that are running devstack have 50 GB disks, so there is plenty of disk
space. I don’t have Cinder set up in this devstack configuration.

I looked in the log for the instance; in the failing case the same message was
displayed (with a different MAC and UUID), and then on the next line I see this
error and register dump.

KVM: entry failed, hardware error 0x0
EAX= EBX= ECX= EDX=0663
ESI= EDI= EBP= ESP=
EIP=e05b EFL=0002 [---] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =   9300
CS =f000 000f  9b00
SS =   9300
DS =   9300
FS =   9300
GS =   9300
LDT=   8200
TR =   8b00
GDT=  
IDT=  
CR0=6010 CR2= CR3= CR4=
DR0= DR1= DR2= 
DR3=
DR6=0ff0 DR7=0400
EFER=
Code=85 00 87 00 89 00 8b 00 00 00 86 00 88 00 8a 00 8c 00 00 90 2e 66 83 3e 
a4 65 00 0f 85 53 f2 31 c0 8e
d0 66 bc 00 70 00 00 66 ba 4f 3c 0f 00 e9 b1 f0

Can anyone glean anything from this?

In the libvirtd.log, I see:

2015-01-02 15:48:21.257+: 20711: info : libvirt version: 1.2.2
2015-01-02 15:48:21.257+: 20711: error : virNetSocketReadWire:1454 : End of 
file while reading data: Input/output error

Not much info (for me :).
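
One other thing that might be worth ruling out, since this devstack runs inside
a VM: whether hardware virtualization is actually exposed to the guest. The
"KVM: entry failed" dump can be a symptom of a broken nested-virt setup. A
rough check (kvm_intel vs. kvm_amd depends on the CPU):

    egrep -c '(vmx|svm)' /proc/cpuinfo            # 0 = no hardware virt visible in this VM
    lsmod | grep kvm
    cat /sys/module/kvm_intel/parameters/nested   # on the physical host; should be Y or 1
    dmesg | grep -i kvm
    # If nested KVM isn't usable, LIBVIRT_TYPE=qemu in local.conf avoids KVM entirely (slower).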


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Jan 2, 2015, at 10:41 AM, James Downs e...@egon.cc wrote:


On Jan 2, 2015, at 4:53 AM, Paul Michali (pcm) p...@cisco.com wrote:

I don’t see what the difference is between a working and non-working setup. :(

One other time I’ve seen this happen is if the compute node is low on (or out of)
disk space. If there are connectivity problems with a Cinder device, this would
be a similar situation. As Kevin suggested, I’d also start looking into any
logs KVM/libvirt might be generating.

Cheers,
-j


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable check of openstack/horizon failed

2015-01-02 Thread Julie Pichon
On 02/01/15 06:15, A mailing list for the OpenStack Stable Branch test
reports. wrote:
 Build failed.
 
 - periodic-horizon-docs-icehouse 
 http://logs.openstack.org/periodic-stableperiodic-horizon-docs-icehouse/b0d18a6/
  : SUCCESS in 4m 35s
 - periodic-horizon-python26-icehouse 
 http://logs.openstack.org/periodic-stableperiodic-horizon-python26-icehouse/b38b4c9/
  : FAILURE in 3m 23s
 - periodic-horizon-python27-icehouse 
 http://logs.openstack.org/periodic-stableperiodic-horizon-python27-icehouse/147df74/
  : FAILURE in 3m 42s
 - periodic-horizon-docs-juno 
 http://logs.openstack.org/periodic-stableperiodic-horizon-docs-juno/f5e1427/ 
 : SUCCESS in 5m 19s
 - periodic-horizon-python26-juno 
 http://logs.openstack.org/periodic-stableperiodic-horizon-python26-juno/ef680e0/
  : FAILURE in 6m 34s
 - periodic-horizon-python27-juno 
 http://logs.openstack.org/periodic-stableperiodic-horizon-python27-juno/890ebda/
  : FAILURE in 4m 26s

This is due to https://bugs.launchpad.net/horizon/+bug/1407055 . The fix
for master just merged, and the Juno and Icehouse backports are up for
review at https://review.openstack.org/#/c/144736/ and
https://review.openstack.org/#/c/144735/ respectively. Thanks!

Julie


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Refactored heat-kubernetes templates

2015-01-02 Thread Lars Kellogg-Stedman
Hello Kolla folks (et al),

I've refactored the heat-kubernetes templates at
https://github.com/larsks/heat-kubernetes to work with Centos Atomic
Host and Fedora 21 Atomic, and to replace the homegrown overlay
network solution with Flannel.

These changes are available on the master branch.

The previous version of the templates, which worked with F20 and
included some Kolla-specific networking logic, is available in the
kolla branch:

  https://github.com/larsks/heat-kubernetes/tree/kolla
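
For anyone who wants to compare the two versions side by side, roughly:

    git clone https://github.com/larsks/heat-kubernetes.git
    cd heat-kubernetes
    git log --oneline -5      # refactored Atomic Host + Flannel templates (master)
    git checkout kolla        # previous F20 / Kolla-specific version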

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



pgpSeMghsxO2I.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread James Downs

On Jan 2, 2015, at 4:53 AM, Paul Michali (pcm) p...@cisco.com wrote:

 I don’t see what the difference is between a working and non-working setup. :(

One other time I’ve seen this happen is if the compute node is low on (or out of)
disk space. If there are connectivity problems with a Cinder device, this would
be a similar situation. As Kevin suggested, I’d also start looking into any
logs KVM/libvirt might be generating.
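
On an Ubuntu 14.04 host, the usual places to start are roughly the following
(substitute your own instance name; instance-0001 is just the one from this
thread):

    sudo tail -n 100 /var/log/libvirt/libvirtd.log
    sudo tail -n 100 /var/log/libvirt/qemu/instance-0001.log
    grep -i libvirt /var/log/syslog | tail -n 50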

Cheers,
-j


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [IceHouse] Install prettytable>=0.7 to satisfy pip 6/PEP 440

2015-01-02 Thread Yogesh Prasad
Hi Stackers,

I observe that this commit is present in master branch.

commit 6ec66bb3d1354062ec70be972dba990e886084d5

Install prettytable>=0.7 to satisfy pip 6/PEP 440
...

However, I am facing issues due to PEP 440 in devstack's stable/icehouse
branch. Is devstack icehouse still maintained? In other words, will these
fixes get into the icehouse branch?
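
In the meantime, one possible local workaround is to cherry-pick the master fix
onto your stable/icehouse checkout, assuming it applies cleanly:

    cd /path/to/devstack          # wherever your devstack checkout lives
    git checkout stable/icehouse
    git fetch origin master
    git cherry-pick 6ec66bb3d1354062ec70be972dba990e886084d5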

Regards,
Yogesh
*CloudByte Inc.* http://www.cloudbyte.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Glance] [Swift] cinder upload-to-image creates image with 'queued' state

2015-01-02 Thread Mike Perez
On 16:42 Thu 01 Jan , Timur Nurlygayanov wrote:
 Hi all,
 
 I have a strange error with Cinder: I have several volumes (30-100 GB in
 size) and I want to create Glance images based on these volumes, but
 when I execute
snip
 How can I debug this issue? (I see no errors/exceptions in the
 Glance/Cinder/Swift logs)

Regardless, please provide your cinder and glance logs.
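
If this is a devstack environment, something like the following pulls out the
relevant bits (assuming SCREEN_LOGDIR is set to /opt/stack/logs; the screen-*
names are the usual devstack service log names):

    grep -iE 'error|trace' /opt/stack/logs/screen-c-api.log /opt/stack/logs/screen-c-vol.log
    grep -iE 'error|trace' /opt/stack/logs/screen-g-api.log /opt/stack/logs/screen-g-reg.log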

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread Kevin Benton
Ah, doesn't seem to be a Neutron issue then since the
'network-vif-plugged' event is showing up and it's attempting to
resume.

The red flag looks like that "Instance is paused unexpectedly. Ignore."
message. If you grep the nova code base for that, it brings up a note
linking to bug 1097806.[1] The VM is paused when Nova didn't expect it
to be. Do you have any other tools running that might be affecting
KVM?

1. https://bugs.launchpad.net/nova/+bug/1097806
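
E.g., roughly (using the devstack path that appears in the logs above):

    cd /opt/stack/nova
    grep -rn "Instance is paused unexpectedly" nova/
    # should land in nova/compute/manager.py, next to the note about bug 1097806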

On Thu, Jan 1, 2015 at 8:09 AM, Paul Michali (pcm) p...@cisco.com wrote:
 Hi Kevin,

 No exceptions/tracebacks/errors in Neutron at all. In the Nova logs, it
 seems to create the instance, pause, and then resume, but it looks like
 maybe it is not resuming?

 2015-01-01 14:44:30.716 3516 DEBUG nova.openstack.common.processutils [-]
 Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
 ovs-vsctl --timeout=120 -- --if-exists del-port qvoded0d35f-20 -- add-port
 br-int qvoded0d35\
 f-20 -- set Interface qvoded0d35f-20
 external-ids:iface-id=ded0d35f-204f-4ca8-a85b-85decb53d9fe
 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:81:ab:12
 external-ids:vm-uuid=c32ac737-1788-4420-b200-2a107d5ad335 exec\
 ute /opt/stack/nova/nova/openstack/common/processutils.py:161
 2015-01-01 14:44:30.786 3516 DEBUG nova.openstack.common.processutils [-]
 Result was 0 execute
 /opt/stack/nova/nova/openstack/common/processutils.py:195
 2015-01-01 14:44:31.542 3516 DEBUG nova.virt.driver [-] Emitting event
 LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =
 Started emit_event /opt/stack/nova/nova/virt/driver.py:1298
 2015-01-01 14:44:31.543 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] VM Started (Lifecycle Event)
 2015-01-01 14:44:31.584 DEBUG nova.compute.manager
 [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
 after lifecycle event Started; current vm_sta\
 te: building, current task_state: spawning, current DB power_state: 0, VM
 power_state: 1 handle_lifecycle_event
 /opt/stack/nova/nova/compute/manager.py:1105
 2015-01-01 14:44:31.629 INFO nova.compute.manager
 [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
 has a pending task (spawning). Skip.
 2015-01-01 14:44:31.630 3516 DEBUG nova.virt.driver [-] Emitting event
 LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =
 Paused emit_event /opt/stack/nova/nova/virt/driver.py:1298
 2015-01-01 14:44:31.630 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] VM Paused (Lifecycle Event)
 2015-01-01 14:44:31.670 3516 DEBUG nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
 after lifecycle event Paused; current vm_state: building, current
 task_state: spawning, c\
 urrent DB power_state: 0, VM power_state: 3 handle_lifecycle_event
 /opt/stack/nova/nova/compute/manager.py:1105
 2015-01-01 14:44:31.714 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
 has a pending task (spawning). Skip.
 2015-01-01 14:44:38.293 DEBUG nova.compute.manager
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Received event
 network-vif-plugged-ded0d35f-204f-4ca8-a85b-85decb53d9fe externa\
 l_instance_event /opt/stack/nova/nova/compute/manager.py:6180
 2015-01-01 14:44:38.293 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Created new
 semaphore c32ac737-1788-4420-b200-2a107d5ad335-events internal_lock
 /opt/stack/nova/nova/openstack/comm\
 on/lockutils.py:206
 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Acquired semaphore
 c32ac737-1788-4420-b200-2a107d5ad335-events lock
 /opt/stack/nova/nova/openstack/common/lockutils\
 .py:229
 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Got semaphore / lock
 _pop_event inner /opt/stack/nova/nova/openstack/common/lockutils.py:271
 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Releasing semaphore
 c32ac737-1788-4420-b200-2a107d5ad335-events lock
 /opt/stack/nova/nova/openstack/common/lockutil\
 s.py:238
 2015-01-01 14:44:38.295 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Semaphore / lock
 released _pop_event inner
 /opt/stack/nova/nova/openstack/common/lockutils.py:275
 2015-01-01 14:44:38.295 DEBUG nova.compute.manager
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Processing event
 network-vif-plugged-ded0d35f-204f-4ca8-a85b-85decb53d9fe 

Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread Paul Michali (pcm)
No other tools. Running a stock Ubuntu 14.04 server, installed devstack, 
created local.conf, stacked, and tried to create a VM.  I’ve since seen this on 
another VM I have running with Kilo code, so it is not specifically a Juno 
issue.

I don’t see what the difference is between a working and non-working setup. :(

On all instances, virsh -v shows 1.2.2.

Baffled.


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Jan 2, 2015, at 4:48 AM, Kevin Benton blak...@gmail.com wrote:

 Ah, doesn't seem to be a Neutron issue then since the
 'network-vif-plugged' event is showing up and it's attempting to
 resume.
 
 The red flag looks like that "Instance is paused unexpectedly. Ignore."
 message. If you grep the nova code base for that, it brings up a note
 linking to bug 1097806.[1] The VM is paused when Nova didn't expect it
 to be. Do you have any other tools running that might be affecting
 KVM?
 
 1. https://bugs.launchpad.net/nova/+bug/1097806
 
 On Thu, Jan 1, 2015 at 8:09 AM, Paul Michali (pcm) p...@cisco.com wrote:
 Hi Kevin,
 
 No exceptions/tracebacks/errors in Neutron at all. In the Nova logs, it
 seems to create the instance, pause, and then resume, but it looks like
 maybe it is not resuming?
 
 2015-01-01 14:44:30.716 3516 DEBUG nova.openstack.common.processutils [-]
 Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
 ovs-vsctl --timeout=120 -- --if-exists del-port qvoded0d35f-20 -- add-port
 br-int qvoded0d35\
 f-20 -- set Interface qvoded0d35f-20
 external-ids:iface-id=ded0d35f-204f-4ca8-a85b-85decb53d9fe
 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:81:ab:12
 external-ids:vm-uuid=c32ac737-1788-4420-b200-2a107d5ad335 exec\
 ute /opt/stack/nova/nova/openstack/common/processutils.py:161
 2015-01-01 14:44:30.786 3516 DEBUG nova.openstack.common.processutils [-]
 Result was 0 execute
 /opt/stack/nova/nova/openstack/common/processutils.py:195
 2015-01-01 14:44:31.542 3516 DEBUG nova.virt.driver [-] Emitting event
 LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =
 Started emit_event /opt/stack/nova/nova/virt/driver.py:1298
 2015-01-01 14:44:31.543 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] VM Started (Lifecycle Event)
 2015-01-01 14:44:31.584 DEBUG nova.compute.manager
 [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
 after lifecycle event Started; current vm_sta\
 te: building, current task_state: spawning, current DB power_state: 0, VM
 power_state: 1 handle_lifecycle_event
 /opt/stack/nova/nova/compute/manager.py:1105
 2015-01-01 14:44:31.629 INFO nova.compute.manager
 [req-77c13ae6-ccf9-48ee-881a-8bb7f04ee4bc None None] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
 has a pending task (spawning). Skip.
 2015-01-01 14:44:31.630 3516 DEBUG nova.virt.driver [-] Emitting event
 LifecycleEvent: 1420123471.54, c32ac737-1788-4420-b200-2a107d5ad335 =
 Paused emit_event /opt/stack/nova/nova/virt/driver.py:1298
 2015-01-01 14:44:31.630 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] VM Paused (Lifecycle Event)
 2015-01-01 14:44:31.670 3516 DEBUG nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Synchronizing instance power state
 after lifecycle event Paused; current vm_state: building, current
 task_state: spawning, c\
 urrent DB power_state: 0, VM power_state: 3 handle_lifecycle_event
 /opt/stack/nova/nova/compute/manager.py:1105
 2015-01-01 14:44:31.714 3516 INFO nova.compute.manager [-] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] During sync_power_state the instance
 has a pending task (spawning). Skip.
 2015-01-01 14:44:38.293 DEBUG nova.compute.manager
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] [instance:
 c32ac737-1788-4420-b200-2a107d5ad335] Received event
 network-vif-plugged-ded0d35f-204f-4ca8-a85b-85decb53d9fe externa\
 l_instance_event /opt/stack/nova/nova/compute/manager.py:6180
 2015-01-01 14:44:38.293 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Created new
 semaphore c32ac737-1788-4420-b200-2a107d5ad335-events internal_lock
 /opt/stack/nova/nova/openstack/comm\
 on/lockutils.py:206
 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Acquired semaphore
 c32ac737-1788-4420-b200-2a107d5ad335-events lock
 /opt/stack/nova/nova/openstack/common/lockutils\
 .py:229
 2015-01-01 14:44:38.294 DEBUG nova.openstack.common.lockutils
 [req-0dc50994-e997-41b5-99f2-0a0333f1ea11 nova service] Got semaphore / lock
 _pop_event inner /opt/stack/nova/nova/openstack/common/lockutils.py:271
 2015-01-01 14:44:38.294 DEBUG 

Re: [openstack-dev] [neutron] Need help getting DevStack setup working for VPN testing

2015-01-02 Thread Paul Michali (pcm)
To summarize what I’m trying to do with option (A)…

I want to test VPN in DevStack by setting up two private networks, two routers,
and a shared public network. The VMs created in the private networks should be
able to access the public network, but not the other private network (e.g. a VM
on the private-A subnet can ping the public interface of router2, the router
serving the private-B subnet).



Do I need to create the second router and private network using a different 
tenant?
Do I need to set up security group rules to allow the desired access?
What local.conf settings do I need for this setup (beyond what I have below)?

I’ve been trying so many different combinations (using both single and two
devstack setups, trying provider nets, using single/multiple tenants) and have
been getting a variety of results (from unexpected ping behaviour to VMs stuck
in the PAUSED power state), so I’m lost as to how to set this up. I think I’m
hung up on the security group rules and how to set up the bridges.

What I’d like to do is just focus on option (A), using a single devstack with
multiple routers, and see if that works. If not, I can move on to option (B),
using two devstacks/hosts.
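
For concreteness, the commands I’d expect this to boil down to for the second
network/router under the same tenant are roughly the following (the names and
CIDR are only examples):

    neutron net-create privateB
    neutron subnet-create --name privateB-subnet privateB 10.2.0.0/24
    neutron router-create router2
    neutron router-interface-add router2 privateB-subnet
    neutron router-gateway-set router2 public
    nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec \
        --nic net-id=`neutron net-list | grep 'privateB ' | cut -f 2 -d' '` vm-b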

Since I’m pretty much out of ideas on how to fix this for now, I’m going to try 
to see if I can get on a bare metal setup, which has worked in the past.

Any ideas? I’d like to verify the VPNaaS reference implementation with the new
repo changes. I’ve been spending some time over the holiday vacation playing
with this, with no joy. :(


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 31, 2014, at 2:35 PM, Paul Michali (pcm) p...@cisco.com wrote:

Just more data…

I keep consistently seeing that on the private subnet, the VM can only access
its router (as expected), but on the privateB subnet, the VM can also access the
private I/F of router1 on the private subnet. From the router’s namespace, I
cannot ping the local VM (why not?). Oddly, I can ping router1’s private IP from
the router2 namespace!

I tried these commands to create security group rules (are they wrong?):

# There are two default groups created by DevStack
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | head -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 $group
group=`neutron security-group-list | grep default | cut -f 2 -d' ' | tail -1`
neutron security-group-rule-create --protocol ICMP $group
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 $group

The only change that happens when I run these commands is that the VM on the
privateB subnet can now ping the VM on the private subnet, but not vice versa.
From the router1 namespace, I can then access the local VMs. From the router2
namespace I can access the local VMs and the VMs on the private subnet (all access).

It seems like I have some issue with security groups, and I need to square that
away before I can test VPN.

Am I creating the security group rules correctly?
My goal is that the private nets can access the public net, but not each other 
(until VPN connection is established).
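
To double-check what actually ended up in each tenant’s default group, something
like this (sketch) dumps both:

    for group in `neutron security-group-list | grep default | cut -f 2 -d' '`; do
        neutron security-group-show $group
    done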

Lastly, in this latest try, I set OVS_PHYSICAL_BRIDGE=br-ex. In earlier runs 
w/o that, there were QVO interfaces, but no QVB or QBR interfaces at all. It 
didn’t seem to change connectivity, however.

Ideas?

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 31, 2014, at 10:33 AM, Paul Michali (pcm) p...@cisco.com wrote:

I’ve been playing a bit with trying to get VPNaaS working post-repo split, and
haven’t been successful. I’m trying it a few ways with DevStack, and I’m not
sure whether I have a config error, a setup issue, or whether there is something
due to the split.

In the past (and it’s been a few months since I verified VPN operation), I used
two bare metal machines connected by an external switch, with a DevStack cloud
running on each. That configuration is currently set up for a vendor VPN
solution, so I wanted to try different methods to test the reference VPN
implementation. I’ve got two ideas for doing this:

A) Run DevStack and create two routers with a shared “public” network, and two
private networks, setting up a VPN connection between the private nets.
B) Run two DevStack instances (on two VMs) and try to set up a provider network
between them.

I’m starting with A (though I did try B quickly, but it didn’t work), and I 
spun up the stack, added a second router (all under the same tenant), created 
another private network, and booted a Cirros VM in each private 

Re: [openstack-dev] [nova] boot images in power state PAUSED for stable/juno

2015-01-02 Thread Paul Michali (pcm)
I checked and the disk has plenty of space:

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        50G  6.2G   41G  14% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.9G   12K  3.9G   1% /dev
tmpfs           799M  408K  799M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G     0  3.9G   0% /run/shm
none            100M     0  100M   0% /run/user

One Google search result mentioned checking that the user is in the libvirtd
group (it is). I have 8 GB of RAM, with 600+ MB available when DevStack is
running.

I didn’t think Cinder was running, but I see cinder-api and cinder-volume
processes running in top. These are also running on the bare metal config that
does work.

The host is running an Ubuntu 14.04 (cloud) image.

It seems I either get the problem where the VM is in the PAUSED state, or it
doesn’t even start up and is stuck in the BUILDING/SPAWNED state forever.



PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Jan 2, 2015, at 10:43 AM, Ben Nemec openst...@nemebean.com wrote:

 Signed PGP part
 I ran into similar behavior once, and it turned out I was running out
 of space on the system.  This blog post helped me track down the
 problem:
 http://porkrind.org/missives/libvirt-based-qemu-vm-pausing-by-itself/
 
 Not sure whether it's relevant to your situation, but it's something
 to check.
 
 -Ben
 
 On 12/31/2014 09:41 AM, Paul Michali (pcm) wrote:
  Not sure if I’m going crazy or what. I’m using DevStack and, after
  stacking, I tried booting Cirros 0.3.2, Cirros 0.3.3, and Ubuntu cloud 14.04
  images. Each time, the instance ends up in the PAUSED power state:
 
  ubuntu@juno:/opt/stack/neutron$ nova show peter
  +--------------------------------------+----------------------------------------------------------------+
  | Property                             | Value                                                          |
  +--------------------------------------+----------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                         |
  | OS-EXT-AZ:availability_zone          | nova                                                           |
  | OS-EXT-SRV-ATTR:host                 | juno                                                           |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | juno                                                           |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0001                                                  |
  | OS-EXT-STS:power_state               | 3                                                              |
  | OS-EXT-STS:task_state                | -                                                              |
  | OS-EXT-STS:vm_state                  | active                                                         |
  | OS-SRV-USG:launched_at               | 2014-12-31T15:15:33.00                                         |
  | OS-SRV-USG:terminated_at             | -                                                              |
  | accessIPv4                           |                                                                |
  | accessIPv6                           |                                                                |
  | config_drive                         |                                                                |
  | created                              | 2014-12-31T15:15:24Z                                           |
  | flavor                               | m1.tiny (1)                                                    |
  | hostId                               | 5b0c48250ccc0ac3fca8a821e29e4b154ec0b101f9cc0a0b27071a3f       |
  | id                                   | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6                           |
  | image                                | cirros-0.3.3-x86_64-uec (797e4dee-8c03-497f-8dac-a44b9351dfa3) |
  | key_name                             | -                                                              |
  | metadata                             | {}                                                             |
  | name                                 | peter                                                          |
  | os-extended-volumes:volumes_attached | []                                                             |
  | private network                      | 10.0.0.4                                                       |
  | progress                             | 0                                                              |
  | security_groups                      | default                                                        |
  | status                               | ACTIVE                                                         |
  | tenant_id                            | 7afb5bc1d88d462c8d57178437d3c277                               |
  | updated                              | 2014-12-31T15:15:34Z                                           |
  | user_id                              | 4ff18bdbeb4d436ea4ff1bcd29e269a9                               |
  +--------------------------------------+----------------------------------------------------------------+
 
  ubuntu@juno:/opt/stack/neutron$ nova list
  +--------------------------------------+-------+--------+------------+-------------+------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks         |
  +--------------------------------------+-------+--------+------------+-------------+------------------+
  | ec5c8d70-ae80-4cc3-a5bb-b68019170dd6 | peter | ACTIVE | -          | Paused      | private=10.0.0.4 |
  +--------------------------------------+-------+--------+------------+-------------+------------------+
 
  I don’t see this with the latest Kilo images. Any idea what I may be
  doing wrong, or whether there is an issue (I didn’t see anything in a
  Google search)?
 
  IMAGE_ID=`nova image-list | grep 'cloudimg-amd64 ' | cut -d' ' -f 2`
  PRIVATE_NET=`neutron net-list | grep 'private ' | cut -f 2 -d' '`
 
  nova boot peter --flavor 3 --image $IMAGE_ID --user-data ~/devstack/user_data.txt 

Re: [openstack-dev] [heat] Application level HA via Heat

2015-01-02 Thread Zane Bitter

On 24/12/14 05:17, Steven Hardy wrote:

On Mon, Dec 22, 2014 at 03:42:37PM -0500, Zane Bitter wrote:

On 22/12/14 13:21, Steven Hardy wrote:

Hi all,

So, lately I've been having various discussions around $subject, and I know
it's something several folks in our community are interested in, so I
wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with
AutoScaling group, then give some initial ideas of how we might evolve that
into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality
should be available via AutoScalingGroups of size 1.  Turns out that
shouldn't be too hard to do:

  resources:
    server_group:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1
        max_size: 1
        resource:
          type: ha_server.yaml

    server_replacement_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        # FIXME: this adjustment_type doesn't exist yet
        adjustment_type: replace_oldest
        auto_scaling_group_id: {get_resource: server_group}
        scaling_adjustment: 1


One potential issue with this is that it is a little bit _too_ equivalent to
HARestarter - it will replace your whole scaled unit (ha_server.yaml in this
case) rather than just the failed resource inside.


Personally I don't see that as a problem, because the interface makes that
explicit - if you put a resource in an AutoScalingGroup, you expect it to
get created/deleted on group adjustment, so anything you don't want
replaced stays outside the group.


I guess I was thinking about having the same mechanism work when the 
size of the scaling group is not fixed at 1.



Happy to consider other alternatives which do less destructive replacement,
but to me this seems like the simplest possible way to replace HARestarter
with something we can actually support long term.


Yeah, I just get uneasy about features that don't compose. Here you have 
to decide between the replacement policy feature and the feature of 
being able to scale out arbitrary stacks. The two uses are so different 
that they almost don't make sense as the same resource. The result will 
be a lot of people implementing scaling groups inside scaling groups in 
order to take advantage of both sets of behaviour.



Even if just replace failed resource is somehow made available later,
we'll still want to support AutoScalingGroup, and replace_oldest is
likely to be useful in other situations, not just this use-case.

Do you have specific ideas of how the just-replace-failed-resource feature
might be implemented?  A way for a signal to declare a resource failed so
convergence auto-healing does a less destructive replacement?


So, currently our ScalingPolicy resource can only support three adjustment
types, all of which change the group capacity.  AutoScalingGroup already
supports batched replacements for rolling updates, so if we modify the
interface to allow a signal to trigger replacement of a group member, then
the snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

  - Standardize the ScalingPolicy-AutoScalingGroup interface, so
asynchronous adjustments (e.g. signals) between the two resources don't use
the adjust method.

  - Add an option to replace a member to the signal interface of
AutoScalingGroup

  - Add the new replace adjustment type to ScalingPolicy


I think I am broadly in favour of this.


Ok, great - I think we'll probably want replace_oldest, replace_newest, and
replace_specific, such that both alarm and operator driven replacement have
flexibility over what member is replaced.


We probably want to allow users to specify the replacement policy (e.g. 
oldest first vs. newest first) for the scaling group itself to use when 
scaling down or during rolling updates. If we had that, we'd probably 
only need a single replace adjustment type - if a particular member is 
specified in the message then it would replace that specific one, 
otherwise the scaling group would choose which to replace based on the 
specified policy.



I posted a patch which implements the first step, and the second will be
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling
action is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
   in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
   and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
   script to fsck and mount a 

Re: [openstack-dev] [kolla][ironic][tripleo] Refactored heat-kubernetes templates

2015-01-02 Thread Steven Dake

On 01/02/2015 08:39 AM, Lars Kellogg-Stedman wrote:

Hello Kolla folks (et al),

I've refactored the heat-kubernetes templates at
https://github.com/larsks/heat-kubernetes to work with Centos Atomic
Host and Fedora 21 Atomic, and to replace the homegrown overlay
network solution with Flannel.

These changes are available on the master branch.

The previous version of the templates, which worked with F20 and
included some Kolla-specific networking logic, is available in the
kolla branch:

   https://github.com/larsks/heat-kubernetes/tree/kolla



Lars,

Really great work!!

The only thing needed next is to sort out how these could work with 
Ironic for bare metal deployment.  Currently these templates only 
support virtualized non-baremetal deployments because of how their 
networking configuration is done.  In an ideal world, it would be nice 
to deploy kubernetes via Atomic+Heat on non-virt baremetal.


The thing to sort out for these templates regarding Ironic is really the 
Networking part.  I'm not quite sure how the network configuration 
should operate with a Heat template and a baremetal Nova flavor.


Regards
-steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Bug in federation

2015-01-02 Thread David Chadwick
Hi Marco

I think the current design is wrong because it is mixing up access
control with service endpoint location. The endpoint of a service should
be independent of the access control rules determining who can contact
the service. Any entity should be able to contact a service endpoint
(subject to firewall rules of course, but this is out of scope of
Keystone), and once connected, access control should then be enforced.
Unfortunately the current design directly ties access control (which
IdP) to the service endpoint by building the IDP name into the URL. This
is fundamentally a bad design. Not only is it too limiting, but also it
is mixing up different concerns, rather than separating them out, which
is a good computer science principle.
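
To make the contrast concrete: today the client has to hit an IdP-specific
protected URL, whereas the proposal is a single IdP-independent endpoint (the
second URL below is purely illustrative, not an existing API):

    # Juno: the IdP and protocol are baked into the protected endpoint
    GET /v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/saml2/auth
    # Proposed: one endpoint; Apache passes the IdP/protocol it actually used
    # as request attributes, and Keystone does access control and mapping on those
    GET /v3/OS-FEDERATION/auth        (hypothetical)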

So, applying the separation of concerns principle to Keystone, the
federated login endpoint should not be tied to any specific IdP. There
are many practical reasons for this, such as:

a) in the general case the users of an openstack service could be from
multiple different organisations, and hence multiple different IdPs, but
they may all need to access the same service and hence same endpoint,
b) users who are authorised to access an openstack service might be
authorised based on their identity attributes that are not IdP specific
(e.g. email address), so they might have a choice of IDP to use
c) federations are getting larger and larger, and interfederations are
exploding the number of IdPs that users can use. The GEANT eduGAIN
interfederation for example now has IdPs from about 20 countries, and
each country can have over a 100 IdPs. So we are talking about thousands
of IdPs in a federation. It is conceivable that users from all of these
might wish to access a given cloud service.

Here is my proposal for how federation should be re-engineered

1. The federation endpoint URL for Keystone can be anything intuitive
and in keeping with existing guidelines, and should be IDP independent

2. Apache will protect this endpoint with whatever federation
protocol(s) it is able to. The Keystone administrator and Apache
administrator will liaise out of band to determine the name of the
endpoint and the federation protocol and IDPs that will be able to
access it.

3. Keystone will have its list of trusted IdPs as now.

4. Keystone will have its mapping rules as now (although I still believe
it would be better for mapping rules to be IDP independent, and to have
lists of trusted attributes from trusted IDPs instead)

5. Apache will return to Keystone two new parameters indicating the IdP
and protocol that were used by the user in connecting to the endpoint.
Apache knows what these are.

6. Keystone will use these new parameters for access control and mapping
rules. i.e. it will reject any users who are from untrusted IdPs, and it
will determine the right mapping rule to use based on the values of the
two new parameters. A simple table in Keystone will map the IdPs and
protocols into the correct mapping rule to use.

This is not a huge change to make, in fact it should be a rather simple
re-engineering task.

regards

David


On 24/12/2014 17:50, Marco Fargetta wrote:
 
 On 24 Dec 2014, at 17:34, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 If I understand the bug fix correctly, it is firmly tying the URL to the
 IDP to the mapping rule. But I think this is going in the wrong
 direction for several reasons:

 1. With Shibboleth, if you use a WAYF service, then anyone from hundreds
 of different federated IDPs may end up being used to authenticate the
 user who is accessing OpenStack/Keystone. We don't want to have hundreds
 of URLs. One is sufficient. Plus we don't know which IDP the user will
 eventually choose, as this is decided by the WAYF service. So the
 correct URL cannot be pre-chosen by the user.

 
 
  With the proposed configuration of Shibboleth, when you access the URL you are
  redirected only to the IdP configured for that URL. Since a URL is tied to only
  one IdP, there is no need for a WAYF.
 
  Anyway, this is a change only in the documentation, and it was the first fix
  because there was an agreement to provide a solution for Juno as well, with
  minimal change to the code.
 
  The other fix I proposed, which is under review, requires an additional
  parameter when you configure the IdP in OS-FEDERATION. This accepts one or
  more entityIDs, so you can map the entities to the URL. It also requires
  specifying the HTTP variable from which the entityID can be read (this is a
  parameter so it can be compatible with different SAML plug-ins). If you do not
  specify these values, the behaviour is the same as the current implementation;
  otherwise, given the list of entities and the parameter, access to the URL is
  allowed only for the IdPs included in the list and the others are rejected.
 
  I tried to be as compatible as possible with the current implementation.
 
 Is this in the right direction? Could you comment on the review page? It will 
 be better to 

[openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-02 Thread Adrian Otto
Magnum Cores,

I propose the following addition to the Magnum Core group[1]:

+ Jay Lau (jay-lau-513)

Please let me know your votes by replying to this message.

Thanks,

Adrian

[1] https://review.openstack.org/#/admin/groups/473,members Current Members

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L2-Gateway] Meetings announcement

2015-01-02 Thread Sukhdev Kapur
Hi all,

HAPPY NEW YEAR.

Starting Monday (Jan 5th, 2015) we will be kicking off bi-weekly meetings
for L2 Gateway discussions.

We are hoping to come up with an initial version of the L2 Gateway API in the
Kilo cycle. The intent of these bi-weekly meetings is to discuss issues related
to the L2 Gateway API.

Anybody interested in this topic is invited to join us in these meetings
and share your wisdom with similarly minded members.

Here are the details of these meetings:

https://wiki.openstack.org/wiki/Meetings#Networking_L2_Gateway_meeting

I have put together a wiki for this project. Next week is the initial
meeting and the agenda is pretty much open. We will introduce the members of
the team as well as the progress made so far on this topic. If you would like
to add anything to the agenda, feel free to update it at the following wiki:

https://wiki.openstack.org/wiki/Meetings/L2Gateway

Looking forward to seeing you on IRC.

-Sukhdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-02 Thread Davanum Srinivas
+1 from me. Welcome Jay!
On Jan 2, 2015 7:02 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Magnum Cores,

 I propose the following addition to the Magnum Core group[1]:

 + Jay Lau (jay-lau-513)

 Please let me know your votes by replying to this message.

 Thanks,

 Adrian

 [1] https://review.openstack.org/#/admin/groups/473,members Current
 Members

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev