Re: [openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (June 28 Tuesday 5:00(AM)UTC-)

2014-06-25 Thread Isaku Yamahata
As an action item, I've moved the API spec to a Google Doc
until the stackforge repo is created.

Here is the link
https://docs.google.com/document/d/10v818QsHWw5lSpiCMfh908PAvVzkw7_ZUL0cgDiH3Vk/edit?usp=sharing

thanks,

On Mon, Jun 23, 2014 at 11:25:03PM +0900,
Isaku Yamahata isaku.yamah...@gmail.com wrote:

 Hi. This is a reminder mail for the servicevm IRC meeting
 June 28, 2014 Tuesdays 5:00(AM)UTC-
 #openstack-meeting on freenode
 https://wiki.openstack.org/wiki/Meetings/ServiceVM
 
 
 agenda: (feel free to add your items)
 * announcements
 * action items from the last week
 * new repo in github and API discussion
   I had hoped to use stackforge, but the repo creation process seems
   too slow.
   So I'd like to start the actual discussion on github until the stackforge
   repo is created.
 * API discussion for consolidation
   consolidate multiple existing implementations
 * NFV meeting follow up
 * blueprint follow up
 * open discussion
 * add your items
 -- 
 Isaku Yamahata isaku.yamah...@gmail.com

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Token invalidation in deleting role assignments

2014-06-25 Thread Takashi Natsume
Hi all,

When deleting role assignments, not only the tokens related to the
deleted role assignments but also other tokens held by the same user are
invalidated in stable/icehouse (2014.1.1).

For example,
A) Role assignment between domain and user via OS-INHERIT (*1)
1. Assign a role (for example, 'Member') between 'Domain1' and 'user' via
OS-INHERIT
2. Assign the same role ('Member') between 'Domain2' and 'user' via OS-INHERIT
3. Get a token scoped to 'user' and 'Project1' (in 'Domain1')
4. Get a token scoped to 'user' and 'Project2' (in 'Domain2')
5. Create resources (for example, cinder volumes) in 'Project1' with the token
obtained in 3.
It is possible to create them.
6. Create resources in 'Project2' with the token obtained in 4.
It is possible to create them.
7. Delete the role assignment between 'Domain1' and 'user' (that was added
in 1.)

(After the validated-token cache has expired in cinder, etc.)
8. Create resources in 'Project1' with the token obtained in 3.
It is not possible to create them: 401 Unauthorized.
9. Create resources in 'Project2' with the token obtained in 4.
It is not possible to create them: 401 Unauthorized.

In 9, my expectation is that it should still be possible to create resources
with the token obtained in 4.

*1:
v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_
to_projects

B) Role assignment between project and user
1. Assign a role (for example, 'Member') between 'Project1' and 'user'
2. Assign the same role ('Member') between 'Project2' and 'user'
3. Get a token scoped to 'user' and 'Project1'
4. Get a token scoped to 'user' and 'Project2'
5. Create resources (for example, cinder volumes) in 'Project1' with the token
obtained in 3.
It is possible to create them.
6. Create resources in 'Project2' with the token obtained in 4.
It is possible to create them.
7. Delete the role assignment between 'Project1' and 'user' (that was added
in 1.)

(After the validated-token cache has expired in cinder, etc.)
8. Create resources in 'Project1' with the token obtained in 3.
It is not possible to create them: 401 Unauthorized.
9. Create resources in 'Project2' with the token obtained in 4.
It is not possible to create them: 401 Unauthorized.

In 9, my expectation is that it should still be possible to create resources
with the token obtained in 4.
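
For reference, scenario B can be scripted against the v3 API with
python-keystoneclient roughly as follows (the auth URL, passwords and
domain names are placeholders, not values from this report):

    from keystoneclient.v3 import client

    AUTH_URL = 'http://controller:5000/v3'  # placeholder

    admin = client.Client(username='admin', password='secret',
                          project_name='admin',
                          user_domain_name='Default',
                          project_domain_name='Default',
                          auth_url=AUTH_URL)

    user = admin.users.find(name='user')
    role = admin.roles.find(name='Member')
    p1 = admin.projects.find(name='Project1')
    p2 = admin.projects.find(name='Project2')

    # Steps 1-2: assign the role on both projects.
    admin.roles.grant(role, user=user, project=p1)
    admin.roles.grant(role, user=user, project=p2)

    def project_token(project_name):
        # Steps 3-4: authenticate once per project to get a scoped token.
        return client.Client(username='user', password='secret',
                             project_name=project_name,
                             user_domain_name='Default',
                             project_domain_name='Default',
                             auth_url=AUTH_URL).auth_token

    token1 = project_token('Project1')
    token2 = project_token('Project2')

    # Step 7: delete only the 'Project1' assignment.
    admin.roles.revoke(role, user=user, project=p1)

    # Observed: both token1 and token2 are rejected afterwards.
    # Expected: only token1 should become invalid.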


Are these bugs?
Or is there a reason for implementing it this way?

Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.taka...@lab.ntt.co.jp




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' and 'BLOCK_DEVICE_MAPPING'?

2014-06-25 Thread wu jiang
Hi all,

Recently, some of my instances were stuck in task_state 'None' during VM
creation in my environment.

So I checked and found there's a 'None' task_state between 'SCHEDULING' and
'BLOCK_DEVICE_MAPPING'.

The related code is implemented like this:

    def _start_building(self, context, instance):
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=None,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))

So if a compute node is rebooted during that phase, all building VMs on
it will stay in the 'None' task_state forever. That is not useful and makes
it harder to locate problems.

Why not a new task_state for this step?


WingWJ
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Weekly meeting

2014-06-25 Thread James Polley
Minutes are at
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.html
(or
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.txt
if you prefer plain-text). For your convenience I've pasted them below.

During the meeting it was suggested we add a Specs item to the agenda for
any specs people want to ask for more eyeballs on. I've added this to the
agenda for next week.

I don't plan to regularly mail these minutes (they're always at
http://eavesdrop.openstack.org/meetings/tripleo) but we had particularly
low attendance today and we have a couple of time-sensitive one-off items.
#openstack-meeting-alt: tripleo. Meeting started by tchaypo at 06:59:56 UTC (full
logs
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html
).

Meeting summary

   1. *bugs* (tchaypo
   
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-11,
   07:01:20)
  1. https://bugs.launchpad.net/tripleo/ (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-12,
  07:01:29)
  2. https://bugs.launchpad.net/diskimage-builder/ (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-13,
  07:01:32)
  3. https://bugs.launchpad.net/os-refresh-config (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-14,
  07:01:34)
  4. https://bugs.launchpad.net/os-apply-config (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-15,
  07:01:36)
  5. https://bugs.launchpad.net/os-collect-config (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-16,
  07:01:38)
  6. https://bugs.launchpad.net/os-cloud-config (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-17,
  07:01:40)
  7. https://bugs.launchpad.net/tuskar (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-18,
  07:01:42)
  8. https://bugs.launchpad.net/python-tuskarclient (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-19,
  07:01:44)
  9. https://review.openstack.org/#/c/98758/ (GheRivero
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-29,
  07:06:07)

   2. *reviews* (tchaypo
   
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-34,
   07:07:40)
  1. There's a new dashboard linked from
  https://wiki.openstack.org/wiki/TripleO#Review_team - look for
  TripleO Inbox Dashboard (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-35,
  07:07:44)
  2. http://russellbryant.net/openstack-stats/tripleo-openreviews.html (
  tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-36,
  07:07:47)
  3. http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt (
  tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-37,
  07:07:49)
  4. http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt (
  tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-38,
  07:07:51)
  5. 3rd quartile has dropped from 13 days 12 hours to 11 days 21 hours
  over the last week, other quartiles roughly unchanged (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-50,
  07:10:20)
  6. stackforge/gertty is a tty-based locally-caching client for
  gerrit, so you can do reviews on a plane (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-68,
  07:14:41)

   3. *Projects needing releases* (tchaypo
   
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-78,
   07:18:20)
  1. shadower will release all the things (shadower
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-100,
  07:21:46)
  2. https://wiki.openstack.org/wiki/TripleO/ReleaseManagement (tchaypo
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-105,
  07:22:24)

   4. *CD Cloud status* (tchaypo
   
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-109,
   07:22:37)
  1. https://review.openstack.org/#/c/102291/ should land before
  releasing dib-utils (shadower
  
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-06-25-06.59.log.html#l-112,
  07:23:18)
  2. 

[openstack-dev] [TripleO] Time to break backwards compatibility for *cloud-password file location?

2014-06-25 Thread James Polley
Until https://review.openstack.org/#/c/83250/, the setup-*-password scripts
used to drop password files into $CWD, which meant that if you ran the
script from a different location next time, your old passwords wouldn't be
found.

https://review.openstack.org/#/c/83250/ changed this so that the default
behaviour is to put the password files in $TRIPLEO_ROOT; but for backwards
compatibility we left the script checking to see if there's a file in the
current directory, and using that file in preference to $TRIPLEO_ROOT if it
exists.

However, this behaviour is still confusing to people. I'm not entirely
clear on why it's confusing (it makes perfect sense to me...) but I imagine
it's because we still have the problem that the code works fine if run from
one directory, but run from a different directory it can't find passwords.

There are two open patches which would break backwards compatibility and
only ever use the files in $TRIPLEO_ROOT:

https://review.openstack.org/#/c/93981/
https://review.openstack.org/#/c/97657/

The latter review is under more active development, and has suggestions
that the directory containing the password files should be parameterised,
defaulting to $TRIPLEO_ROOT. This would still break for anyone who relies
on the password files being in the directory they run the script from, but
at least there would be a fairly easy fix for them.

To help decide if it's time to break backwards compatibility in this case
(and if so, how), I'd love to see some more comments on 97657. If we don't
want to break backwards compatibility, maybe comments about a better way to
handle the ambiguity would be helpful.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] neutron config not working

2014-06-25 Thread Kashyap Chamarthy
On Tue, Jun 24, 2014 at 06:59:17PM -0400, Rob Crittenden wrote:
 Before I get punted onto the operators list, I post this here because
 this is the default config and I'd expect the defaults to just work.
 
 Running devstack inside a VM with a single NIC configured and this in
 localrc:
 
 disable_service n-net
 enable_service q-svc
 enable_service q-agt
 enable_service q-dhcp
 enable_service q-l3
 enable_service q-meta
 enable_service neutron
 Q_USE_DEBUG_COMMAND=True
 
 Results in a successful install but no DHCP address assigned to hosts I
 launch and other oddities like no CIDR in nova net-list output.
 
 Is this still the default way to set things up for single node? It is
 according to https://wiki.openstack.org/wiki/NeutronDevstack

I've used something similar in my local.conf[1] w/ today's git. I get a
successful install too[2]. However, booting an instance is just
perpetually stuck in the SCHEDULING state:

  $ nova list
  
  +--------------------------------------+--------+--------+------------+-------------+----------+
  | ID                                   | Name   | Status | Task State | Power State | Networks |
  +--------------------------------------+--------+--------+------------+-------------+----------+
  | 425a12e8-0b7e-4ad1-97db-20a912dea7df | f20vm2 | BUILD  | scheduling | NOSTATE     |          |
  +--------------------------------------+--------+--------+------------+-------------+----------+


I don't see anything interesting in Scheduler/CPU logs:

  $ grep  ERROR ../data/new/screen-logs/screen-n-cpu.log 
  $ grep  ERROR ../data/new/screen-logs/screen-n-sch.log
  2014-06-25 02:37:37.674 DEBUG nova.openstack.common.db.sqlalchemy.session 
[req-62d4dfe1-55f4-46fd-94c6-e1b270eca5e4 None None] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _mysql_check_effective_sql_mode 
/opt/stack/nova/nova/openstack/common/db/sqlalchemy/session.py:562
 
 
Examining my install log[2], I only see 3 ERRORs that looked legitimate:

(1) A fatal error about 'yaml.h' header file not found:
---
[. . .]
2014-06-25 06:22:38.963 | gcc -pthread -fno-strict-aliasing -O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic 
-D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 
-grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC 
-I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/check_libyaml.c -o 
build/temp.linux-x86_64-2.7/check_libyaml.o
2014-06-25 06:22:38.976 | build/temp.linux-x86_64-2.7/check_libyaml.c:2:18: 
fatal error: yaml.h: No such file or directory
2014-06-25 06:22:38.977 |  #include <yaml.h>
2014-06-25 06:22:38.977 |   ^
2014-06-25 06:22:38.978 | compilation terminated.
2014-06-25 06:22:38.995 | 
2014-06-25 06:22:38.996 | libyaml is not found or a compiler error: forcing 
--without-libyaml
2014-06-25 06:22:38.996 | (if libyaml is installed correctly, you may need 
to
2014-06-25 06:22:38.997 |  specify the option --include-dirs or uncomment 
and
2014-06-25 06:22:38.997 |  modify the parameter include_dirs in setup.cfg)
2014-06-25 06:22:39.044 |
[. . .]
---


(2) For some reason, it couldn't connect to Libvirt Hypervisor, as it
couldn't find the Libvirt socket file.
---
[. . .]
2014-06-25 06:32:08.942 | error: failed to connect to the hypervisor
2014-06-25 06:32:08.943 | error: no valid connection
2014-06-25 06:32:08.943 | error: Failed to connect socket to 
'/var/run/libvirt/libvirt-sock': No such file or directory
2014-06-25 06:32:08.948 | + instances=
[. . .]
---

However, the file _does_ exist:

$ file /var/run/libvirt/libvirt-sock
/var/run/libvirt/libvirt-sock: socket


(3) A Neutron complaint that it couldn't find a certain qprobe network
namespace:
---
[. . .]
2014-06-25 06:37:21.009 | + neutron-debug --os-tenant-name admin --os-username 
admin --os-password fedora probe-create --device-owner compute 
7624586e-120d-45dd-a918-716b942407ff
2014-06-25 06:37:23.435 | 2014-06-25 02:37:23.434 9698 ERROR 
neutron.agent.linux.utils [-] 
2014-06-25 06:37:23.436 | Command: ['sudo', '/usr/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qprobe-19193c58-a12d-4910-b38d-cd638714b1df', 'ip', '-o', 'link', 'show', 
'tap19193c58-a1']
2014-06-25 06:37:23.436 | Exit code: 1
2014-06-25 06:37:23.437 | Stdout: ''
2014-06-25 06:37:23.437 | Stderr: 'Cannot open network namespace 
qprobe-19193c58-a12d-4910-b38d-cd638714b1df: No such file or directory\n'
[. . .]
---

However, running `ip netns` _does_ enumerate the above qprobe network
namespace.


Other info
--

That's the 

Re: [openstack-dev] [hacking] rules for removal

2014-06-25 Thread Martin Geisler
Mark McLoughlin mar...@redhat.com writes:

 On Tue, 2014-06-24 at 13:56 -0700, Clint Byrum wrote:
 Excerpts from Mark McLoughlin's message of 2014-06-24 12:49:52 -0700:

 However, there is a debate, and thus I would _never_ block a patch
 based on this rule. It was feedback.. just as sometimes there is
 feedback in commit messages that isn't taken and doesn't lead to a
 -1.

 Absolutely, and I try and be clear about that with e.g. not a -1 or
 if you're rebasing anyway, perhaps fix this.

Perhaps the problem is the round-trips such corrections imply?

In the Mercurial project we accept contributions sent as patches only.
There it's common for the core developers to fix the commit message
locally before importing a patch. That makes it quick to fix these
problems and I think that this workflow puts less work on the core
maintainers.

With Gerrit, it seems that simply fixing the commit message in the web
interface could work. I know that a patch submitter can update it
online, but I don't know if (core) reviewers can also just update it?

(Updating the patch in Gerrit would go behind the back of the
submitter who would then have to rebase any additional work he has done
on the branch. So this is not 100% pain free.)

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-25 Thread John Garbutt
So just to keep the ML up with some of the discussion we had in IRC
the other day...

Most resources in Nova are owned by a particular nova-compute. So the
locks on the resources are effectively held by the nova-compute that
owns the resource.

We already effectively have a cross nova-compute lock holding in the
capacity reservations during migrate/resize.

But to cut a long story short, if the image cache is actually just a
copy from one of the nova-compute nodes that already have that image
into the local (shared) folder for another nova-compute, then we can
get away without a global lock, and just have two local locks on
either end and some conducting to co-ordinate things.

It's not perfect, but it's an option.

Thanks,
John


On 17 June 2014 18:18, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Matthew Booth's message of 2014-06-17 01:36:11 -0700:
 On 17/06/14 00:28, Joshua Harlow wrote:
  So this is a reader/write lock then?
 
  I have seen https://github.com/python-zk/kazoo/pull/141 come up in the
  kazoo (zookeeper python library) but there was a lack of a maintainer for
  that 'recipe', perhaps if we really find this needed we can help get that
  pull request 'sponsored' so that it can be used for this purpose?
 
 
  As far as resiliency, the thing I was thinking about was how correct do u
  want this lock to be?
 
  If u say go with memcached and a locking mechanism using it this will not
  be correct but it might work good enough under normal usage. So that's why
  I was wondering about what level of correctness do you want and what do
  you want to happen if a server that is maintaining the lock record dies.
  In memcached's case this will literally be 1 server, even if sharding is
  being used, since a key hashes to one server. So if that one server goes
  down (or a network split happens) then it is possible for two entities to
  believe they own the same lock (and if the network split recovers this
  gets even weirder); so that's what I was wondering about when mentioning
  resiliency and how much incorrectness you are willing to tolerate.

 From my POV, the most important things are:

 * 2 nodes must never believe they hold the same lock
 * A node must eventually get the lock


 If these are musts, then memcache is a no-go for locking. memcached is
 likely to delete anything it is storing in its RAM, at any time. Also
 if you have several memcache servers, a momentary network blip could
 lead to acquiring the lock erroneously.

 The only thing it is useful for is coalescing, where a broken lock just
 means wasted resources, erroneous errors, etc. If consistency is needed,
 then you need a consistent backend.
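
For reference, the ZooKeeper option mentioned earlier in the thread (via
the kazoo library) would look roughly like this; the host string, lock
path and identifier below are placeholders:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
    zk.start()

    # ZooKeeper grants the lock to one session at a time and queues
    # waiters, which covers both requirements above: no two holders,
    # and a waiting node eventually acquires it. If the holder's
    # session is lost, the lock is released.
    lock = zk.Lock('/nova/image-cache/some-image-id', identifier='compute-1')
    with lock:  # blocks until acquired
        pass    # populate the shared image cache entry here

    zk.stop()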

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 26, Issue 77

2014-06-25 Thread Abbass MAROUNI

Hello Joe,

Thanks for your quick reply, here's what we're trying to do :

In the scheduling process of a virtual machine we need to be able to 
choose the best Host (which is a cinder-volume and nova-compute host at 
the same time) that has enough volume space so that we can launch the VM 
then create and attach some cinder volumes locally (on the same host). 
We get the part where we check the available cinder space on each host 
(in a filter) and choose the best host (that has the most free space in 
a Weigher). Now we need to tell cinder to create and attach the volumes. 
We need to be able to do it from Heat.


So I was thinking that if I can tag the virtual machine with the name of 
the chosen host (in the Weigher) then I can extract the tag (somehow !) 
and use it in heat as a dependency in the volume element (At least 
that's what I'm thinking :  the virtual machine will be launched and 
Heat will extract the tag then use it to create/attach the volumes).


I'm sure that there are other means to achieve this, so any help will be 
greatly appreciated.


Thanks,

On 06/24/2014 11:38 PM, openstack-dev-requ...@lists.openstack.org wrote:

Hi,

I was wondering if there's a way to set a tag (key/value) of a Virtual

Machine from within a scheduler filter ?

The scheduler today is just for placement. And since we are in the process
of trying to split it out, I don't think we want to make the scheduler do
something like this (at least for now).



I want to be able to tag a machine with a specific key/value after

passing my custom filter

What is your use case? Perhaps we have another way of solving it today.



--
--
Abbass MAROUNI
VirtualScale


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-25 Thread Belmiro Moreira
I like the current behavior of not changing the VM state if nova-compute
goes down.

The cloud operators can identify the issue in the compute node and try to
fix it without users noticing. Depending in the problem I can inform users
if instances are affected and change the state if necessary.

What I wouldn't like is to expose any nova-compute failure to users and be
contacted because a VM's state changed.



Belmiro


On Wed, Jun 25, 2014 at 4:49 AM, Ahmed RAHAL ara...@iweb.com wrote:

 Hi,

 On 2014-06-24 20:12, Joe Gordon wrote:


 Finally, assuming the customer had access to this 'unknown' state
 information, what would he be able to do with it ? Usually he has no
 lever to 'evacuate' or 'recover' the VM. All he could do is spawn
 another instance to replace the lost one. But only if the VM really
 is currently unavailable, an information he must get from other
 sources.


 If I was a user, and my instance went to an 'UNKNOWN' state, I would
 check if it's still operating, and if not delete it and start another
 instance.


 If I was a user and polled nova list/show on a regular basis just in case
 the management plane indicates a failure, I should have no expectation
 whatsoever. If service availability is my concern, I should monitor the
 service, nothing else. From there, once the service has failed, I can
 imagine checking if VM management is telling me something. However, if my
 service is down and I have no longer access to the VM ... simple case:
 destroy and respawn.

 My point is that we should not make the nova state an expected source of
 truth regarding service availability in the VM, as there is no way to tell
 such a thing. If my VM is being DDOSed, nova would still say everything is
 fine, while my service is really down. In that situation, console access
 would help me determine if the VM management is wrong by stating everything
 is ok or if there is another root cause.
 Similarly, should nova show a state change if load in the VM is through
 the roof and the service is not responsive ? or if OOM is killing all my
 processes because of a memory shortage ?

 As stated before, providing such a state information is misleading because
 there are cases where node unavailability is not service disruptive, thus
 it would indicate a false positive while the opposite (everything is ok) is
 not at all indicative of a healthy status of the service.

 Maybe I am overlooking a use case here where you absolutely need the user
 of the service to know about a potential problem with his hosting platform.

 Ahmed.

 --
 =
 Ahmed Rahal ara...@iweb.com / iWeb Technologies
 Spécialiste de l'Architecture TI
 / IT Architecture Specialist
 =


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Zang MingJie
Hi:

In the current DVR design, SNAT is north/south traffic, but packets have
to go west/east through the network node. If every compute node is
assigned a public IP, would it be technically possible to handle SNAT packets
without going through the network node?

SNAT, versus floating IPs, can save tons of public IPs, at the cost of
introducing a single point of failure and limiting throughput to the
bandwidth of the network node. If the SNAT performance problem can be solved,
I'll encourage people to use SNAT over floating IPs, unless the VM is
serving a public service.

--
Zang MingJie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler]

2014-06-25 Thread Abbass MAROUNI


Hello Joe,

Thanks for your quick reply, here's what we're trying to do :

In the scheduling process of a virtual machine we need to be able to
choose the best Host (which is a cinder-volume and nova-compute host at
the same time) that has enough volume space so that we can launch the VM
then create and attach some cinder volumes locally (on the same host).
We get the part where we check the available cinder space on each host
(in a filter) and choose the best host (that has the most free space in
a Weigher). Now we need to tell cinder to create and attach the volumes.
We need to be able to do it from Heat.

So I was thinking that if I can tag the virtual machine with the name of
the chosen host (in the Weigher) then I can extract the tag (somehow !)
and use it in heat as a dependency in the volume element (At least
that's what I'm thinking :  the virtual machine will be launched and
Heat will extract the tag then use it to create/attach the volumes).

I'm sure that there are other means to achieve this, so any help will be
greatly appreciated.
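
For what it's worth, a rough sketch of the Weigher step described above
(BaseHostWeigher/_weigh_object follow nova's scheduler weigher interface;
the free-space lookup is a placeholder, not an existing nova or cinder
call):

    from nova.scheduler import weights


    def _get_free_volume_gb(hostname):
        # Placeholder: query the co-located cinder-volume backend for
        # its free space, however that is collected in this setup.
        return 0.0


    class FreeVolumeSpaceWeigher(weights.BaseHostWeigher):
        def _weigh_object(self, host_state, weight_properties):
            # A larger return value makes the host more preferred.
            return _get_free_volume_gb(host_state.host)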

Thanks,

On 06/24/2014 11:38 PM, openstack-dev-requ...@lists.openstack.org wrote:

Hi,

I was wondering if there's a way to set a tag (key/value) of a Virtual

Machine from within a scheduler filter ?

The scheduler today is just for placement. And since we are in the process
of trying to split it out, I don't think we want to make the scheduler do
something like this (at least for now).



I want to be able to tag a machine with a specific key/value after

passing my custom filter

What is your use case? Perhaps we have another way of solving it today.



--
--
Abbass MAROUNI
VirtualScale




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Kevin Benton
The post_commit methods occur outside of the transactions. You should be
able to perform the necessary database calls there.

If you look at the code snippet in the email you provided, you can see that
the 'try' block surrounding the postcommit method is at the same
indentation-level as the 'with' statement for the transaction so it will be
closed at that point.
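
To illustrate, a bare-bones driver following that split could look like
the sketch below (the class and method bodies are placeholders; the hook
names and the driver_api module are ML2's own):

    from neutron.plugins.ml2 import driver_api as api


    class ExampleMechanismDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def create_port_precommit(self, context):
            # Runs inside the plugin's open transaction: stick to work
            # on that same session and avoid starting independent
            # transactions here.
            pass

        def create_port_postcommit(self, context):
            # Runs after the plugin's transaction has committed, so it
            # is safe to open your own session/transaction (or call out
            # to an external system) from here.
            pass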

Cheers,
Kevin Benton

--
Kevin Benton


On Tue, Jun 24, 2014 at 8:33 PM, Li Ma skywalker.n...@gmail.com wrote:

 Hi all,

 I'm developing a new mechanism driver. I'd like to access ml2-related
 tables in create_port_precommit and create_port_postcommit. However I find
 it hard to do that because the two functions are both inside an existing
 database transaction defined in the create_port function of ml2/plugin.py.

 The related code is as follows:

 def create_port(self, context, port):
     ...
     session = context.session
     with session.begin(subtransactions=True):
         ...
         self.mechanism_manager.create_port_precommit(mech_context)
     try:
         self.mechanism_manager.create_port_postcommit(mech_context)
     ...
     ...
     return result

 As a result, I need to deal carefully with nested database transactions
 to avoid a DB lock when developing my own mechanism driver. Right
 now, I'm trying to understand the idea behind this design. Is it possible to
 refactor it so that precommit and postcommit are outside the db
 transaction? I think that would be ideal for those who develop mechanism
 drivers and do not know the execution context of the whole ML2 plugin well.

 Thanks,
 Li Ma

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-06-25 Thread Dmitriy Shulyak
Looks like we will stick to option #2, as the most reliable one.

- we have no way to know that openrc has changed; even if some scripts
rely on it, OSTF should not fail with an auth error
- we can create an OSTF user in the post-deployment stage, but I heard that some
ceilometer tests relied on the admin user; also, an
  operator may not want to create an additional user, for some reasons

So, is everybody OK with additional fields on the HealthCheck tab?




On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward xar...@gmail.com wrote:

 The openrc file has to be up to date for some of the HA scripts to
 work, we could just source that.

 On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  +1 for #2.
 
  ~Sergii
 
 
  On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin ada...@mirantis.com
 wrote:
 
  +1 to Mike. Let the user provide actual credentials and use them in
 place.
 
 
  On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
 
  I'm in favor of #2. I think users might not want to have their password
  stored in Fuel Master node.
  And if so, then it actually means we should not save it when user
  provides it on HealthCheck tab.
 
 
  On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Hi folks,
 
  We have a bug which prevents OSTF from working if the user changes the
  password which was used for the initial installation. I skimmed
 through the
  comments and it seems there are 2 viable options:
 
  Create a separate user just for OSTF during OpenStack installation
  Provide a field for a password in UI so user could provide actual
  password in case it was changed
 
  What do you guys think? Which options is better?
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrey Danin
  ada...@mirantis.com
  skype: gcon.monolake
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-25 Thread John Garbutt
Seems like we all agree on the basic idea here, which is great.

I think just not concentrating on nova-spec reviews is fine, at least,
it is the simplest way to implement the freeze (as Russell pointed
out).

I do worry about setting the right expectations for the poor souls
whose specs might sit in there for a few months unreviewed, only for us to
come back, rewrite the template for K and tell them they did it all
wrong. But let's try to avoid that.

I guess the carrot is that we get more reviewer (by which I mean everyone)
focus on code after the nova-specs soft freeze.

Let's bring this up at the nova meeting on Thursday (at the end) and
see if we can reach some consensus there. Either way, we should talk
about the options to relax some of the nova-spec process at the
mid-cycle summit, as I feel we have somewhat over-rotated here.

Thanks,
John

On 25 June 2014 00:47, Michael Still mi...@stillhq.com wrote:
 Your comments are fair. I think perhaps at this point we should defer
 discussion of the further away deadlines until the mid cycle meetup --
 that will give us a chance to whiteboard the flow for that period of
 the release.

 Or do you really want to lock this down now?

 Michael

 On Wed, Jun 25, 2014 at 12:53 AM, Day, Phil philip@hp.com wrote:
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 24 June 2014 13:08
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno 
 release

 On 06/24/2014 07:35 AM, Michael Still wrote:
  Phil -- I really want people to focus their efforts on fixing bugs in
  that period was the main thing. The theory was if we encouraged people
  to work on specs for the next release, then they'd be distracted from
  fixing the bugs we need fixed in J.
 
  Cheers,
  Michael
 
  On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
  Hi Michael,
 
  Not sure I understand the need for a gap between Juno Spec approval
 freeze (Jul 10th) and K opens for spec proposals (Sep 4th).I can
 understand that K specs won't get approved in that period, and may not get
 much feedback from the cores - but I don't see the harm in letting specs be
 submitted to the K directory for early review / feedback during that period 
 ?

 I agree with both of you.  Priorities need to be finishing up J, but I 
 don't see
 any reason not to let people post K specs whenever.
 Expectations just need to be set appropriately that it may be a while before
 they get reviewed/approved.

 Exactly - I think it's reasonable to set the expectation that the focus of 
 those that can produce/review code will be elsewhere - but that shouldn't 
 stop some small effort going into knocking the rough corners off the specs 
 at the same time


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Spec Review Day Today!

2014-06-25 Thread John Garbutt
As previously (quietly) announced, today we are trying to do a push on
nova-specs reviews.

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs,n,z

The hope is we get through some of the backlog, with some interactive
chat on IRC in #openstack-nova

If someone has better stats on our nova-spec reviews, do respond with
a link, and that would be appreciated. We need to track reviews that
need reviewer attention vs submitter attention.



If we spot really important blueprints, please note them in here:
https://etherpad.openstack.org/p/nova-juno-spec-priorities

Above I am trying to list what our (currently unwritten) priorities
are for Juno. Ideally we would start to agree these at the summit, and
use them to help set priorities, but just trying it out this release.



Note the rough plan currently is:

July 3rd: all Juno specs must be up for review, else they are deferred to K
July 10th: all Juno specs approved or deferred to K

As always, we have exceptions for cases where we need them, but that's
the general idea.



Happy spec reviewing!

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Time to break backwards compatibility for *cloud-password file location?

2014-06-25 Thread mar...@redhat.com
On 25/06/14 10:52, James Polley wrote:
 Until https://review.openstack.org/#/c/83250/, the setup-*-password scripts
 used to drop password files into $CWD, which meant that if you ran the
 script from a different location next time, your old passwords wouldn't be
 found.
 
 https://review.openstack.org/#/c/83250/ changed this so that the default
 behaviour is to put the password files in $TRIPLEO_ROOT; but for backwards
 compatibility we left the script checking to see if there's a file in the
 current directory, and using that file in preference to $TRIPLEO_ROOT if it
 exists.
 
 However, this behaviour is still confusing to people. I'm not entirely
 clear on why it's confusing (it makes perfect sense to me...) but I imagine
 it's because we still have the problem that the code works fine if run from
 one directory, but run from a different directory it can't find passwords.
 
 There are two open patches which would break backwards compatibility and
 only ever use the files in $TRIPLEO_ROOT:
 
 https://review.openstack.org/#/c/93981/
 https://review.openstack.org/#/c/97657/
 
 The latter review is under more active development, and has suggestions
 that the directory containing the password files should be parameterised,
 defaulting to $TRIPLEO_ROOT. This would still break for anyone who relies
 on the password files being in the directory they run the script from, but
 at least there would be a fairly easy fix for them.
 

How about we:

* parameterize as suggested by Fabio in the review @
https://review.openstack.org/#/c/97657/

* move setting of this param to more visible location (setup, like
devtest_variables or testenv). We can then give this better visibility
in the dev/test autodocs with a warning about the 'old' behaviour

* add a deprecation warning to the code that reads from
$CWD/tripleo-overcloud-passwords to say that this will now need to be
set as a parameter in ... wherever. How long is a good period for this?

I don't think we make many backwards compatibility guarantees right now,
do we?

marios







 To help decide if it's time to break backwards compatibility in this case
 (and if so, how), I'd love to see some more comments on 97657. If we don't
 want to break backwards compatibility, maybe comments about a better way to
 handle the ambiguity would be helpful.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release

2014-06-25 Thread Day, Phil
Discussing at the meet-up is fine with me

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 25 June 2014 00:48
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno release
 
 Your comments are fair. I think perhaps at this point we should defer
 discussion of the further away deadlines until the mid cycle meetup -- that
 will give us a chance to whiteboard the flow for that period of the release.
 
 Or do you really want to lock this down now?
 
 Michael
 
 On Wed, Jun 25, 2014 at 12:53 AM, Day, Phil philip@hp.com wrote:
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: 24 June 2014 13:08
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Timeline for the rest of the Juno
  release
 
  On 06/24/2014 07:35 AM, Michael Still wrote:
   Phil -- I really want people to focus their efforts on fixing bugs
   in that period was the main thing. The theory was if we encouraged
   people to work on specs for the next release, then they'd be
   distracted from fixing the bugs we need fixed in J.
  
   Cheers,
   Michael
  
   On Tue, Jun 24, 2014 at 9:08 PM, Day, Phil philip@hp.com wrote:
   Hi Michael,
  
   Not sure I understand the need for a gap between Juno Spec
   approval
  freeze (Jul 10th) and K opens for spec proposals (Sep 4th).I can
  understand that K specs won't get approved in that period, and may
  not get much feedback from the cores - but I don't see the harm in
  letting specs be submitted to the K directory for early review / feedback
 during that period ?
 
  I agree with both of you.  Priorities need to be finishing up J, but
  I don't see any reason not to let people post K specs whenever.
  Expectations just need to be set appropriately that it may be a while
  before they get reviewed/approved.
 
  Exactly - I think it's reasonable to set the expectation that the
  focus of those that can produce/review code will be elsewhere - but
  that shouldn't stop some small effort going into knocking the rough
  corners off the specs at the same time
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-06-25 Thread Vitaly Kramskikh
Dmitry,

Fields or field? Do we need to provide only the password, or are other
credentials needed as well?


2014-06-25 13:02 GMT+04:00 Dmitriy Shulyak dshul...@mirantis.com:

 Looks like we will stick to #2 option, as most reliable one.

 - we have no way to know that openrc is changed, even if some scripts
 relies on it - ostf should not fail with auth error
 - we can create ostf user in post-deployment stage, but i heard that some
 ceilometer tests relied on admin user, also
   operator may not want to create additional user, for some reasons

 So, everybody is ok with additional fields on HealthCheck tab?




 On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward xar...@gmail.com wrote:

 The openrc file has to be up to date for some of the HA scripts to
 work, we could just source that.

 On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  +1 for #2.
 
  ~Sergii
 
 
  On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin ada...@mirantis.com
 wrote:
 
  +1 to Mike. Let the user provide actual credentials and use them in
 place.
 
 
  On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
 
  I'm in favor of #2. I think users might not want to have their
 password
  stored in Fuel Master node.
  And if so, then it actually means we should not save it when user
  provides it on HealthCheck tab.
 
 
  On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Hi folks,
 
  We have a bug which prevents OSTF from working if user changes a
  password which was using for the initial installation. I skimmed
 through the
  comments and it seems there are 2 viable options:
 
  Create a separate user just for OSTF during OpenStack installation
  Provide a field for a password in UI so user could provide actual
  password in case it was changed
 
  What do you guys think? Which options is better?
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrey Danin
  ada...@mirantis.com
  skype: gcon.monolake
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-25 Thread Jaromir Coufal

Thanks a lot for your help.

Just a side note - we need to fill in the number of requested rooms so
that we don't get charged extra - we have a group discount rate.


So, everybody, please go ahead and book your room here:
http://tinyurl.com/redhat-marriott

-- Jarda

On 2014/24/06 17:49, Jordan OMara wrote:

On 24/06/14 10:55 -0400, Jordan OMara wrote:

On 20/06/14 16:26 -0400, Charles Crouch wrote:

Any more takers for the tripleo mid-cycle meetup in Raleigh? If so,
please
sign up on the etherpad below.

The hotel group room rate will be finalized on Monday Jul 23rd (US
time), after that time you will be on your own for finding
accommodation.

Thanks
Charles



Just an update that I've got us a block of rooms reserved at the
nearest, cheapest hotel (the Marriott in downtown Raleigh, about 200
yards from the Red Hat office) - I'll have details on how to actually
book at this rate in just a few minutes.


Please use the following link to reserve at the marriott (it's copied
on the etherpad)

http://tinyurl.com/redhat-marriott

We have a 24-room block reserved at that rate from SUN-FRI


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' and 'BLOCK_DEVICE_MAPPING'?

2014-06-25 Thread Day, Phil
Hi WingWJ,

I agree that we shouldn’t have a task state of None while an operation is in 
progress.  I’m pretty sure back in the day this didn’t use to be the case and 
task_state stayed as Scheduling until it went to Networking  (now of course 
networking and BDM happen in parallel, so you have to be very quick to see the 
Networking state).

Personally I would like to see the extra granularity of knowing that a request 
has been started on the compute manager (and knowing that the request was 
started rather than is still sitting on the queue makes the decision to put it 
into an error state when the manager is re-started more robust).

Maybe a task state of “STARTING_BUILD” for this case ?
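
A minimal sketch of what that might look like (STARTING_BUILD is a
hypothetical new state, not something nova defines today), based on the
snippet quoted below:

    # nova/compute/task_states.py (sketch)
    STARTING_BUILD = 'starting_build'

    # nova/compute/manager.py (sketch)
    def _start_building(self, context, instance):
        """Record that this compute manager has started building."""
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=task_states.STARTING_BUILD,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))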

BTW I don’t think _start_building() is called anymore now that we’ve switched 
to conductor calling build_and_run_instance() – but the same task_state issue 
exists there as well.

From: wu jiang [mailto:win...@gmail.com]
Sent: 25 June 2014 08:19
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING' and 'BLOCK_DEVICE_MAPPING'?

Hi all,

Recently, some of my instances were stuck in task_state 'None' during VM 
creation in my environment.

So I checked and found there's a 'None' task_state between 'SCHEDULING' and
'BLOCK_DEVICE_MAPPING'.

The related code is implemented like this:

    def _start_building(self, context, instance):
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=None,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))

So if a compute node is rebooted during that phase, all building VMs on it
will stay in the 'None' task_state forever. That is not useful and makes it
harder to locate problems.

Why not a new task_state for this step?


WingWJ
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [OSTF] OSTF stops working after password is changed

2014-06-25 Thread Dmitriy Shulyak
It is possible to change everything, so: username, password and tenant fields.

Also, this way we will be able to run tests not only as the admin user.


On Wed, Jun 25, 2014 at 12:29 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:

 Dmitry,

 Fields or field? Do we need to provide password only or other credentials
 are needed?


 2014-06-25 13:02 GMT+04:00 Dmitriy Shulyak dshul...@mirantis.com:

 Looks like we will stick to #2 option, as most reliable one.

 - we have no way to know that openrc is changed, even if some scripts
 relies on it - ostf should not fail with auth error
 - we can create ostf user in post-deployment stage, but i heard that some
 ceilometer tests relied on admin user, also
   operator may not want to create additional user, for some reasons

 So, everybody is ok with additional fields on HealthCheck tab?




 On Fri, Jun 20, 2014 at 8:17 PM, Andrew Woodward xar...@gmail.com
 wrote:

 The openrc file has to be up to date for some of the HA scripts to
 work, we could just source that.

 On Fri, Jun 20, 2014 at 12:12 AM, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
  +1 for #2.
 
  ~Sergii
 
 
  On Fri, Jun 20, 2014 at 1:21 AM, Andrey Danin ada...@mirantis.com
 wrote:
 
  +1 to Mike. Let the user provide actual credentials and use them in
 place.
 
 
  On Fri, Jun 20, 2014 at 2:01 AM, Mike Scherbakov
  mscherba...@mirantis.com wrote:
 
  I'm in favor of #2. I think users might not want to have their
 password
  stored in Fuel Master node.
  And if so, then it actually means we should not save it when user
  provides it on HealthCheck tab.
 
 
  On Thu, Jun 19, 2014 at 8:05 PM, Vitaly Kramskikh
  vkramsk...@mirantis.com wrote:
 
  Hi folks,
 
  We have a bug which prevents OSTF from working if user changes a
  password which was using for the initial installation. I skimmed
 through the
  comments and it seems there are 2 viable options:
 
  Create a separate user just for OSTF during OpenStack installation
  Provide a field for a password in UI so user could provide actual
  password in case it was changed
 
  What do you guys think? Which options is better?
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Mike Scherbakov
  #mihgen
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Andrey Danin
  ada...@mirantis.com
  skype: gcon.monolake
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Vitaly Kramskikh,
 Software Engineer,
 Mirantis, Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do any hyperviors allow disk reduction as part of resize ?

2014-06-25 Thread John Garbutt
On 24 June 2014 16:40, Jay Pipes jaypi...@gmail.com wrote:
 On 06/24/2014 07:32 AM, Daniel P. Berrange wrote:

 On Tue, Jun 24, 2014 at 10:55:41AM +, Day, Phil wrote:

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: 23 June 2014 10:35
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Do any hyperviors allow disk
 reduction
 as part of resize ?

 On 18 June 2014 21:57, Jay Pipes jaypi...@gmail.com wrote:

 On 06/17/2014 05:42 PM, Daniel P. Berrange wrote:


 On Tue, Jun 17, 2014 at 04:32:36PM +0100, Pádraig Brady wrote:


 On 06/13/2014 02:22 PM, Day, Phil wrote:


 I guess the question I’m really asking here is:  “Since we know
 resize down won’t work in all cases, and the failure if it does
 occur will be hard for the user to detect, should we just block it
 at the API layer and be consistent across all Hypervisors ?”



 +1

 There is an existing libvirt blueprint:

 https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
 which I've never been in favor of:
 https://bugs.launchpad.net/nova/+bug/1270238/comments/1



 All of the functionality around resizing VMs to match a different
 flavour seem to be a recipe for unleashing a torrent of unfixable
 bugs, whether resizing disks, adding CPUs, RAM or any other aspect.



 +1

 I'm of the opinion that we should plan to rip resize functionality out
 of (the next major version of) the Compute API and have a *single*,
 *consistent* API for migrating resources. No more API extension X for
 migrating this kind of thing, and API extension Y for this kind of
 thing, and API extension Z for migrating /live/ this type of thing.

 There should be One move API to Rule Them All, IMHO.


 +1 for one move API, the two evolved independently, in different
  drivers, it's time to unify them!

 That plan got stuck behind the refactoring of live-migrate and migrate
 to the
 conductor, to help unify the code paths. But it kinda got stalled (I
 must
 rebase those patches...).

 Just to be clear, I am against removing resize down from v2 without a
 deprecation cycle. But I am pro starting that deprecation cycle.

 John

 I'm not sure Daniel and Jay are arguing for the same thing here John:
   I *think*  Daniel is saying drop resize altogether and Jay is saying
 unify it with migration - so I'm a tad confused which of those you're
 agreeing with.

OK, I got the wrong end of the stick, sorry.

 Yes, I'm personally for removing resize completely since, IMHO, no matter
 how many bugs we fix it is always going to be a mess. That said I realize
 that people probably find resize-up useful, so I won't push hard to kill
 it - we should just recognize that it is always going to be a mess which
 does not result in the same setup you'd get if you booted fresh with the
 new settings.

Resize down should probably get deprecated and die.

But I think resize up is quite useful.

If we make snapshot-then-build work well for all use cases, then
resize up could die too.

But I am still on the fence here, mostly due to how slow snapshots can
be, and to losing your IP addresses across the whole process. But that's
more a problem for me than it is for nova users as a whole.

 I am of the opinion that the different API extensions and the fact that they
 have evolved separately have created a giant mess for users, and that we
 should consolidate the API into a single move API that can take an
 optional new set of resources (via a new specified flavor) and should
 automatically live move the instance if it is possible, and fall back to a
 cold move if it isn't possible, with no confusing options or
 additional/variant API calls needed by the user.

I agree, we need to bring together all move APIs into a single API.
Mostly thinking about migrate and live-migrate.

We could probably leave resize behind for now, but I really want to do
resize up like this:
* live migrate to a host where it fits
* now shut down the guest and do the resize up of the disk
And there are specs about hot plugging CPUs as soon as you get to the
destination, if that's all you need to do.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Yongsheng Gong
Hi,
if each compute node does SNAT to the Internet, I think we have these
drawbacks:
1. SNAT is done in the router, so each router will have to consume one public
IP on each compute node, which costs money.
2. for each compute node to go out to the Internet, it will need one more NIC
connected to a physical switch, which also costs money.

So personally, I like the design: floating IPs and 1:N SNAT still use the
current network nodes, which will have an HA solution enabled, and we can have
many l3 agents to host routers, while normal east/west traffic across compute
nodes can use DVR.

yong sheng gong


On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 In current DVR design, SNAT is north/south direction, but packets have
 to go west/east through the network node. If every compute node is
 assigned a public ip, is it technically able to improve SNAT packets
 w/o going through the network node ?

 SNAT versus floating ips, can save tons of public ips, in trade of
 introducing a single failure point, and limiting the bandwidth of the
 network node. If the SNAT performance problem can be solved, I'll
 encourage people to use SNAT over floating ips. unless the VM is
 serving a public service

 --
 Zang MingJie

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [nova] How can I obtain compute_node_id in nova

2014-06-25 Thread afe.yo...@gmail.com
Any help will be greatly appreciated!

-- Forwarded message --
From: afe.yo...@gmail.com afe.yo...@gmail.com
Date: Wed, Jun 25, 2014 at 5:53 PM
Subject: [nova] How can I obtain compute_node_id in nova
To: openst...@lists.openstack.org



I found a bug recently and reported it here
https://bugs.launchpad.net/nova/+bug/1333498

The function requires compute_node_id as its parameter.
I'm trying to fix this bug; however, I have failed to find any way to obtain
the compute_node_id.

Any help will be greatly appreciated!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] neutron config not working

2014-06-25 Thread Kashyap Chamarthy
On Wed, Jun 25, 2014 at 01:36:02PM +0530, Kashyap Chamarthy wrote:
 On Tue, Jun 24, 2014 at 06:59:17PM -0400, Rob Crittenden wrote:
[. . .]

 Examining my install log[2], I only see 3 ERRORs that looked legitimate:
 
 (1) A fatal error about 'yaml.h' header file not found:
 ---
 [. . .]
 2014-06-25 06:22:38.963 | gcc -pthread -fno-strict-aliasing -O2 -g -pipe 
 -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
 --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic 
 -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall 
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
 --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic 
 -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c 
 build/temp.linux-x86_64-2.7/check_libyaml.c -o 
 build/temp.linux-x86_64-2.7/check_libyaml.o
 2014-06-25 06:22:38.976 | 
 build/temp.linux-x86_64-2.7/check_libyaml.c:2:18: fatal error: yaml.h: No 
 such file or directory
 2014-06-25 06:22:38.977 |  #include yaml.h
 2014-06-25 06:22:38.977 |   ^
 2014-06-25 06:22:38.978 | compilation terminated.
 2014-06-25 06:22:38.995 | 
 2014-06-25 06:22:38.996 | libyaml is not found or a compiler error: 
 forcing --without-libyaml
 2014-06-25 06:22:38.996 | (if libyaml is installed correctly, you may 
 need to
 2014-06-25 06:22:38.997 |  specify the option --include-dirs or uncomment 
 and
 2014-06-25 06:22:38.997 |  modify the parameter include_dirs in setup.cfg)
 2014-06-25 06:22:39.044 |
 [. . .]
 ---

This was resolved after I manually installed the PyYAML RPM package on my
F20 system. (Thanks Attila Fazekas for pointing that out.)
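
For anyone hitting the same failure on Fedora 20, installing the pre-built
binding and/or the libyaml headers should be enough; roughly:

    # PyYAML is the pre-built binding; libyaml-devel provides yaml.h in case
    # pip needs to rebuild PyYAML with its C extension
    sudo yum install -y PyYAML libyaml-devel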

 (2) For some reason, it couldn't connect to Libvirt Hypervisor, as it
 couldn't find the Libvirt socket file.
 ---
 [. . .]
 2014-06-25 06:32:08.942 | error: failed to connect to the hypervisor
 2014-06-25 06:32:08.943 | error: no valid connection
 2014-06-25 06:32:08.943 | error: Failed to connect socket to 
 '/var/run/libvirt/libvirt-sock': No such file or directory
 2014-06-25 06:32:08.948 | + instances=
 [. . .]
 ---

This too went away in my second DevStack run.

 
 However, the file _does_ exist:
 
 $ file /var/run/libvirt/libvirt-sock
 /var/run/libvirt/libvirt-sock: socket
 
 
 (3) A Neutron complaint that it couldn't find a certain qprobe network
 namespace:
 ---
 [. . .]
 2014-06-25 06:37:21.009 | + neutron-debug --os-tenant-name admin 
 --os-username admin --os-password fedora probe-create --device-owner compute 
 7624586e-120d-45dd-a918-716b942407ff
 2014-06-25 06:37:23.435 | 2014-06-25 02:37:23.434 9698 ERROR 
 neutron.agent.linux.utils [-] 
 2014-06-25 06:37:23.436 | Command: ['sudo', '/usr/bin/neutron-rootwrap', 
 '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
 'qprobe-19193c58-a12d-4910-b38d-cd638714b1df', 'ip', '-o', 'link', 'show', 
 'tap19193c58-a1']
 2014-06-25 06:37:23.436 | Exit code: 1
 2014-06-25 06:37:23.437 | Stdout: ''
 2014-06-25 06:37:23.437 | Stderr: 'Cannot open network namespace 
 qprobe-19193c58-a12d-4910-b38d-cd638714b1df: No such file or directory\n'
 [. . .]
 ---
 
 However, running `ip netns` _does_ enumerate the above qprobe network
 namespace.

 This still persists.

 And, when I boot an instance, it's still at SCHEDULING stage with no
 useful debug messages. I'll try verbose/debug logging.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Why is there a 'None' task_state between 'SCHEDULING' & 'BLOCK_DEVICE_MAPPING'?

2014-06-25 Thread Day, Phil
Looking at this a bit deeper, the comment in _start_building() says that it's
doing this to "Save the host and launched_on fields and log appropriately".
But as far as I can see those don't actually get set until the claim is made
against the resource tracker a bit later in the process, so this whole update
might just not be needed – although I still like the idea of a state to show
that the request has been taken off the queue by the compute manager.
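
Purely as an illustration of that idea (STARTING_BUILD below is a hypothetical
state name, not something that exists in Nova today), it would look roughly
like:

    # hypothetical addition to nova/compute/task_states.py
    STARTING_BUILD = 'starting_build'

    # and in the compute manager, instead of clearing task_state:
    self._instance_update(context, instance['uuid'],
                          vm_state=vm_states.BUILDING,
                          task_state=task_states.STARTING_BUILD,
                          expected_task_state=(task_states.SCHEDULING,
                                               None))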

From: Day, Phil
Sent: 25 June 2014 10:35
To: OpenStack Development Mailing List
Subject: RE: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

Hi WingWJ,

I agree that we shouldn’t have a task state of None while an operation is in 
progress.  I’m pretty sure back in the day this didn’t use to be the case and 
task_state stayed as Scheduling until it went to Networking  (now of course 
networking and BDM happen in parallel, so you have to be very quick to see the 
Networking state).

Personally I would like to see the extra granularity of knowing that a request 
has been started on the compute manager (and knowing that the request was 
started rather than is still sitting on the queue makes the decision to put it 
into an error state when the manager is re-started more robust).

Maybe a task state of “STARTING_BUILD” for this case ?

BTW I don’t think _start_building() is called anymore now that we’ve switched 
to conductor calling build_and_run_instance() – but the same task_state issue 
exist in there well.

From: wu jiang [mailto:win...@gmail.com]
Sent: 25 June 2014 08:19
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] Why is there a 'None' task_state between 
'SCHEDULING'  'BLOCK_DEVICE_MAPPING'?

Hi all,

Recently, some of my instances were stuck in task_state 'None' during VM 
creation in my environment.

So I checked and found there's a 'None' task_state between 'SCHEDULING' &
'BLOCK_DEVICE_MAPPING'.

The related codes are implemented like this:

    def _start_building(self, context, instance):
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=None,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))

So if the compute node is rebooted after that update, all building VMs on it
will stay in the 'None' task_state forever. That state is useless and makes it
hard to locate problems.

Why not a new task_state for this step?


WingWJ
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [ironic]nova scheduler and ironic

2014-06-25 Thread Day, Phil
I think there’s a bit more to it than just having an aggregate:


-  Ironic provides its own version of the Host manager class for the 
scheduler, I’m not sure if that is fully compatible with the non-ironic case.  
Even in the BP for merging the Ironic driver back into Nova it still looks like 
this will stay as a sub-class (would be good if they can just be merged IMO)


-  You’d need to decide how you want to use the aggregate – extra specs 
in the flavor matching against the aggregate metadata is one way (a sketch 
follows below); you could also do it by matching image metadata (as the ironic 
images are going to be different from KVM ones)
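
As a rough sketch of the extra-specs route (all names below are made up, and
AggregateInstanceExtraSpecsFilter needs to be enabled in the scheduler's
filter list):

    nova aggregate-create baremetal-hosts
    nova aggregate-add-host baremetal-hosts ironic-host-1
    nova aggregate-set-metadata baremetal-hosts baremetal=true
    nova flavor-key bm.large set aggregate_instance_extra_specs:baremetal=true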


From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 25 June 2014 05:53
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] [ironic]nova scheduler and ironic


On Jun 24, 2014 7:02 PM, Jander lu 
lhcxx0...@gmail.com wrote:

 hi, guys, I have two confused issue when reading source code.

 1) can we have ironic driver and KVM driver both exist in the cloud? for 
 example, I have 8 compute nodes, I make 4 of them with compute_driver = 
 libvirt and remaining 4 nodes with 
 compute_driver=nova.virt.ironic.IronicDriver ?

 2) if it works, how does nova scheduler work to choose the right node in this 
 case if I want boot a VM or a physical node ?

You can use host aggregates to make certain flavors bare metal and others KVM



 thx all.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
Hi Kevin,

Thanks for your reply. Actually, it is not that straightforward.
Even if postcommit is outside the 'with' statement, the transaction is not
'truly' committed immediately: when I put my db code (reading and
writing ml2-related tables) in postcommit, a db lock wait exception is still
thrown.
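
One way this can happen is when a caller already holds a transaction on the
same session: with subtransactions=True the inner block's exit does not issue
a COMMIT. A stripped-down illustration in plain SQLAlchemy (of the version we
use today; this is not Neutron code):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    session = sessionmaker(bind=create_engine('sqlite://'))()

    with session.begin(subtransactions=True):        # a caller's transaction
        with session.begin(subtransactions=True):    # the create_port() block
            pass                                      # precommit work happens here
        # this point is "after the with" -- where postcommit runs -- but any
        # INSERT/UPDATE issued above is still uncommitted and its row locks are
        # still held, so a second session touching the same tables can hit a
        # lock wait timeout
    # only here, when the outermost transaction ends, is COMMIT actually issued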

Li Ma

- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: 星期三, 2014年 6 月 25日 下午 4:59:26
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver



The post_commit methods occur outside of the transactions. You should be able 
to perform the necessary database calls there. 


If you look at the code snippet in the email you provided, you can see that the 
'try' block surrounding the postcommit method is at the same indentation-level 
as the 'with' statement for the transaction so it will be closed at that point. 


Cheers, 
Kevin Benton 


-- 
Kevin Benton 



On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Hi all, 

I'm developing a new mechanism driver. I'd like to access ml2-related tables in 
create_port_precommit and create_port_postcommit. However I find it hard to do 
that because the two functions are both inside an existed database transaction 
defined in create_port function of ml2/plugin.py. 

The related code is as follows: 

    def create_port(self, context, port):
        ...
        session = context.session
        with session.begin(subtransactions=True):
            ...
            self.mechanism_manager.create_port_precommit(mech_context)
        try:
            self.mechanism_manager.create_port_postcommit(mech_context)
            ...
        ...
        return result
As a result, I need to carefully deal with the database nested transaction 
issue to prevent from db lock when I develop my own mechanism driver. Right 
now, I'm trying to get the idea behind the scene. Is it possible to refactor it 
in order to make precommit and postcommit out of the db transaction? I think it 
is perfect for those who develop mechanism driver and do not know well about 
the functioning context of the whole ML2 plugin. 

Thanks, 
Li Ma 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




-- 

Kevin Benton 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tripleo, Ironic, the SSH power driver, paramiko and eventlet fun.

2014-06-25 Thread jang
On Tue, 24 Jun 2014, j...@ioctl.org wrote:

 There's a bug on this: 
 https://bugs.launchpad.net/ironic/+bug/1321787?comments=all


We've got a potential eventlet fix here:

  https://github.com/jan-g/eventlet/tree/wip

... testing it now. Certainly seems hopeful in the face of a 'multiple 
threads running paramiko' test.

Cheers,
jan

-- 
My army boots contain everything not in them. - Russell's pair o' Docs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] environment deletion

2014-06-25 Thread Stan Lagun
Hi Steve,

The initial implementation of environment deletion (before the 'destroy' method
was introduced in MuranoPL) was to deploy the environment with an empty
application list. So the code that deletes the Heat stack was in 'deploy', and
it is still there. The environment's 'destroy' will be executed only when the
Object Model is empty (doesn't contain an Environment object), not when it is
present but the application list is empty. Now, if you say that the stack
doesn't get deleted, it is clearly a bug that needs to be filed in
Launchpad. I'm not sure why it can happen. Maybe the API behavior was changed
recently to send an empty Object Model, or it sends an incorrect action name.
This just needs to be debugged.



Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Wed, Jun 25, 2014 at 3:57 AM, McLellan, Steven steve.mclel...@hp.com
wrote:

  Is there any reason the system Environment class doesn’t implement
 destroy? Without it, the pieces of the heat stack not owned by other
 resources get left lying around. It looks like it was once implemented as
 part of deploy, but that no longer seems to execute.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Zang MingJie
On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com wrote:
 Hi,
 for each compute node to have SNAT to Internet, I think we have the
 drawbacks:
 1. SNAT is done in router, so each router will have to consume one public IP
 on each compute node, which is money.

SNAT can save more IPs than are wasted on floating IPs.

 2. for each compute node to go out to Internet, the compute node will have
 one more NIC, which connect to physical switch, which is money too


Floating IPs also need a public NIC on br-ex. Also, we can use a
separate VLAN to handle this network, so this is not a problem.

 So personally, I like the design:
  floating IPs and 1:N SNAT still use current network nodes, which will have
 HA solution enabled and we can have many l3 agents to host routers. but
 normal east/west traffic across compute nodes can use DVR.

BTW, is the HA implementation still active? I haven't seen it
touched for a while.


 yong sheng gong


 On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:

 Hi:

 In current DVR design, SNAT is north/south direction, but packets have
 to go west/east through the network node. If every compute node is
 assigned a public ip, is it technically able to improve SNAT packets
 w/o going through the network node ?

 SNAT versus floating ips, can save tons of public ips, in trade of
 introducing a single failure point, and limiting the bandwidth of the
 network node. If the SNAT performance problem can be solved, I'll
 encourage people to use SNAT over floating ips. unless the VM is
 serving a public service

 --
 Zang MingJie

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-25 Thread Sean Dague
On 06/25/2014 04:28 AM, Belmiro Moreira wrote:
 I like the current behavior of not changing the VM state if nova-compute
 goes down. 
 
 The cloud operators can identify the issue in the compute node and try
 to fix it without users noticing. Depending in the problem I can inform
 users if instances are affected and change the state if necessary. 
 
 I wouldn't like is to expose any failure in nova-compute to users and be
 contacted because VM state changed. 

Agreed. Plus in the perfectly normal case of an upgrade of a compute
node, it's expected that nova-compute is going to be down for some
period of time, and it's 100% expected that the VMs remain up and ACTIVE
over that period.

Setting VMs to ERROR would totally gum that up.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-25 Thread Martin Geisler
Sean Dague s...@dague.net writes:

 On 06/25/2014 03:56 AM, Martin Geisler wrote:
 
 In the Mercurial project we accept contributions sent as patches
 only. There it's common for the core developers to fix the commit
 message locally before importing a patch. That makes it quick to fix
 these problems and I think that this workflow puts less work on the
 core maintainers.
 
 With Gerrit, it seems that simply fixing the commit message in the
 web interface could work. I know that a patch submitter can update it
 online, but I don't know if (core) reviewers can also just update it?

 Anyone can actually upload a 2nd patch, which includes changing the
 commit message. We just mostly have a culture of not rewriting
 people's patches, for better or worse.

Thanks, I did not know about this possibility.

 (Updating the patch in Gerrit would go behind the back of the
 submitter who would then have to rebase any additional work he has
 done on the branch. So this is not 100% pain free.)

 That's often the challenge, it works fine if the original author is
 actually paying attention, and does a git review -d instead of just
 using their local branch. But is many cases that's not happening.
 (Also it's completely off book for how we teach folks to use git
 --amend in the base case).

 I've had instances of working with someone where even though we were
 talking on IRC during the whole thing, they kept overwriting the fix I
 was sticking in for them to get the test fixed. So typically you only
 want to do this with really advanced developers, with heads up that
 you pushed over them.

I would guess that these developers would also typically respond quickly
and positively if you point out typos in the commit message. So this
makes the extra round trip less of an issue.

I've only submitted some small trivial patches. As far as I could tell,
Gerrit triggered a full test cycle when I just changed the commit
message. That surprised me and made the reviews more time-consuming,
especially because Jenkins would fail fairly often because of what looks
like heisenbugs to me.

 I do also think people often get grumpy about other people rewriting
 their code. Which I think is just human, so erring on the side of
 giving feedback instead of taking it over is I think the right thing
 to do.

I agree that such tricks are likely to do more harm than good for new
contributors.

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgpSKzywISun0.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][Manila][docs] Setting up Devstack with Manila on Fedora 20

2014-06-25 Thread Deepak Shetty
Hi List,
I have created a new wiki page with the goal of documenting the steps
needed to set up DevStack with Manila on F20. I added some troubleshooting
tips based on my experience.

https://wiki.openstack.org/wiki/Manila/docs/Setting_up_DevStack_with_Manila_on_Fedora_20

Pls have a look and provide comments, if any.

The idea is to have this updated as and when new tips and/or corrections
are needed so that this can become a good reference for people starting on
Manila

thanx,
deepak

P.S. I added this page to https://wiki.openstack.org/wiki/Manila/docs/
under the `Fedora 20` link and removed the F19 link that was present before,
which is now outdated.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-25 Thread Sergey Lukjanov
Are there any API changes that will make projects need to fix?

On Wed, Jun 25, 2014 at 7:25 AM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:
 I expect that we will be releasing the 1.0.0 shortly here (or at the very
 least an alpha so we can move forward) to make sure we have time get the new
 package in use during Juno. As soon as we have something released (should be
 very soon), I’ll make sure we give a heads up to all the packagers.

 Cheers,
 Morgan

 —
 Morgan Fainberg


 From: Tom Fifield t...@openstack.org
 Reply: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: June 24, 2014 at 20:23:42
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [Keystone] Announcing Keystone Middleware
 Project

 On 25/06/14 07:24, Morgan Fainberg wrote:
 The Keystone team would like to announce the official split of
 python-keystoneclient and the Keystone middleware code.
 Over time the middleware (auth_token, s3_token, ec2_token) has developed
 into a fairly expansive code base and
 includes dependencies that are not necessarily appropriate for the
 python-keystoneclient library and CLI tools. Combined
 with the desire to be able to release updates of the middleware code
 without requiring an update of the CLI and
 python-keystoneclient library itself, we have opted to split the
 packaging of the middleware.

 Seems sane :) If you haven't already, please consider giving a heads up
 to the debian/redhat/suse/ubuntu packagers so they're prepped as early
 as possible.


 Regards,


 Tom

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-25 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 25/06/14 12:59, Sean Dague wrote:
 On 06/25/2014 03:56 AM, Martin Geisler wrote:
 Mark McLoughlin mar...@redhat.com writes:
 
 On Tue, 2014-06-24 at 13:56 -0700, Clint Byrum wrote:
 Excerpts from Mark McLoughlin's message of 2014-06-24
 12:49:52 -0700:
 
 However, there is a debate, and thus I would _never_ block a
 patch based on this rule. It was feedback.. just as sometimes
 there is feedback in commit messages that isn't taken and
 doesn't lead to a -1.
 
 Absolutely, and I try and be clear about that with e.g. not a
 -1 or if you're rebasing anyway, perhaps fix this.
 
 Perhaps the problem is the round-trips such corrections imply?
 
 In the Mercurial project we accept contributions sent as patches
 only. There it's common for the core developers to fix the commit
 message locally before importing a patch. That makes it quick to
 fix these problems and I think that this workflow puts less work
 on the core maintainers.
 
 With Gerrit, it seems that simply fixing the commit message in
 the web interface could work. I know that a patch submitter can
 update it online, but I don't know if (core) reviewers can also
 just update it?
 
 Anyone can actually upload a 2nd patch, which includes changing
 the commit message. We just mostly have a culture of not rewriting
 people's patches, for better or worse.
 

That can even be achieved through the Gerrit WebUI. There's a button for
this. But it's not about culture only. If you update the commit message
for someone else, and then they need to update the patch due to
comments, they will need to fetch your patch into their workspace and amend
it, instead of amending what they have in their local repo. This is
inconvenient.
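
In git-review terms that round trip looks roughly like this (123456 is a
made-up change number):

    git review -d 123456     # fetch the latest patch set of the change locally
    # fix the code and/or the commit message
    git commit -a --amend
    git review               # upload the next patch set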

 (Updating the patch in Gerrit would go behind the back of the 
 submitter who would then have to rebase any additional work he
 has done on the branch. So this is not 100% pain free.)
 
 That's often the challenge, it works fine if the original author
 is actually paying attention, and does a git review -d instead of
 just using their local branch. But is many cases that's not
 happening. (Also it's completely off book for how we teach folks to
 use git --amend in the base case).
 
 I've had instances of working with someone where even though we
 were talking on IRC during the whole thing, they kept overwriting
 the fix I was sticking in for them to get the test fixed. So
 typically you only want to do this with really advanced developers,
 with heads up that you pushed over them.
 
 Maybe there are trickier things we could do in git-review for this.
 But it definitely gets goofy if you aren't paying attention.
 
 I do also think people often get grumpy about other people
 rewriting their code. Which I think is just human, so erring on the
 side of giving feedback instead of taking it over is I think the
 right thing to do.
 
 -Sean
 
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJTqrrHAAoJEC5aWaUY1u57PNMIAK3squEc3Wy/l7BWMP5zeke+
aPz2QoKksvmAmAT2kpIg1VQLGokuD12gjlTuLRIBaFd3mZOa1kOeIoLbu3BYFCDc
1dmGoGKgimbsnx3vQuFG5AyHU/vrah4ysP4lFb/5vwXrQRQDXP3hpp0ShB3p1v5x
23T2aYV9Snkbpvb1EaeN8ca8Z5el0qfeX3RP7DHHgi2phsH+8EebHQ0XmYQluB59
P6d5NHZhQBaXMV9qPbyrzdc3QpMgEOLAYybZKwWvEDMeKAFw/jQAlJlhxKVzJ6XT
58xkpBTEAHw3OBLfeHuWNf529/cKexK97OwSXl/a3wNxKaSgc0FHM1gbPFovKpc=
=05UN
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-25 Thread Sean Dague
On 06/25/2014 07:53 AM, Martin Geisler wrote:
 Sean Dague s...@dague.net writes:
 
 On 06/25/2014 03:56 AM, Martin Geisler wrote:

 In the Mercurial project we accept contributions sent as patches
 only. There it's common for the core developers to fix the commit
 message locally before importing a patch. That makes it quick to fix
 these problems and I think that this workflow puts less work on the
 core maintainers.

 With Gerrit, it seems that simply fixing the commit message in the
 web interface could work. I know that a patch submitter can update it
 online, but I don't know if (core) reviewers can also just update it?

 Anyone can actually upload a 2nd patch, which includes changing the
 commit message. We just mostly have a culture of not rewriting
 people's patches, for better or worse.
 
 Thanks, I did not know about this possibility.
 
 (Updating the patch in Gerrit would go behind the back of the
 submitter who would then have to rebase any additional work he has
 done on the branch. So this is not 100% pain free.)

 That's often the challenge, it works fine if the original author is
 actually paying attention, and does a git review -d instead of just
 using their local branch. But is many cases that's not happening.
 (Also it's completely off book for how we teach folks to use git
 --amend in the base case).

 I've had instances of working with someone where even though we were
 talking on IRC during the whole thing, they kept overwriting the fix I
 was sticking in for them to get the test fixed. So typically you only
 want to do this with really advanced developers, with heads up that
 you pushed over them.
 
 I would guess that these developers would also typically respond quickly
 and positively if you point out typos in the commit message. So this
 makes the extra round trip less of an issue.
 
 I've only submitted some small trivial patches. As far as I could tell,
 Gerrit triggered a full test cycle when I just changed the commit
 message. That surprised me and made the reviews more time-consuming,
 especially because Jenkins would fail fairly often because of what looks
 like heisenbugs to me.

We track them here - http://status.openstack.org/elastic-recheck/ - help
always appreciated in fixing them. Most of them are actually race
conditions that exist in OpenStack.

I think optimizing the zuul path for commit message only changes would
be useful. Today the pipeline only knows that there was a change. That's
not something anyone's gotten around to yet.

 I do also think people often get grumpy about other people rewriting
 their code. Which I think is just human, so erring on the side of
 giving feedback instead of taking it over is I think the right thing
 to do.
 
 I agree that such tricks are likely to do more harm than good for new
 contributors.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][nova][cinder] Cinder support in containers and unprivileged container-in-container

2014-06-25 Thread Eric Windisch


 I’m reasonably sure that nobody wants to intentionally relax compute host
 security in order to add this new functionality. Let’s find the right short
 term and long term approaches


From our discussions, one approach that seemed popular for long-term
support was to find a way to gracefully allow mounting inside of the
containers by somehow trapping the syscall. It was presumed we would have
to make some change(s) to the kernel for this.

It turns out we can already do this using the kernel's seccomp feature.
Using seccomp, we should be able to trap the mount calls and handle them in
userspace.

References:
*
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/prctl/seccomp_filter.txt?id=HEAD
* http://chdir.org/~nico/seccomp-nurse/

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][volume/manager.py] volume driver mapping

2014-06-25 Thread Yogesh Prasad
Hi All,

I am observing a difference in the manager.py file between the
stable/icehouse and master branches.
In stable/icehouse various drivers are mapped in manager.py, but they are not
in master.

Please guide me on where I have to map my driver.

*Thanks & Regards*,
  Yogesh Prasad.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] nova needs a new release of neutronclient for OverQuotaClient exception

2014-06-25 Thread Kyle Mestery
On Tue, Jun 24, 2014 at 10:32 PM, Angus Lees gusl...@gmail.com wrote:
 On Tue, 24 Jun 2014 02:46:33 PM Kyle Mestery wrote:
 On Mon, Jun 23, 2014 at 11:08 AM, Kyle Mestery

 mest...@noironetworks.com wrote:
  On Mon, Jun 23, 2014 at 8:54 AM, Matt Riedemann
 
  mrie...@linux.vnet.ibm.com wrote:
  There are at least two changes [1][2] proposed to Nova that use the new
  OverQuotaClient exception in python-neutronclient, but the unit test jobs
  no longer test against trunk-level code of the client packages so they
  fail. So I'm here to lobby for a new release of python-neutronclient if
  possible so we can keep these fixes moving.  Are there any issues with
  that?
  Thanks for bringing this up Matt. I've put this on the agenda for the
  Neutron meeting today, I'll reply on this thread with what comes out
  of that discussion.
 
  Kyle

 As discussed in the meeting, we're going to work on making a new
 release of the client Matt. Ping me in channel later this week, we're
 working the details out on that release at the moment.

 fyi, it would also make sense to include this neutronclient fix too:
  https://review.openstack.org/#/c/98318/
 (assuming it gets sufficient reviews+submitted)

I've just approved that one, so we won't cut the release without it at
this point. Thanks for the heads up!

Kyle

 Thanks,
 Kyle

  [1]
  https://wiki.openstack.org/wiki/Network/Meetings#Team_Discussion_Topics
 
  [1] https://review.openstack.org/#/c/62581/
  [2] https://review.openstack.org/#/c/101462/
  --
 
  Thanks,
 
  Matt Riedemann
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
  - Gus

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-25 Thread Maldonado, Facundo N
Thanks for the response, I'll be there this Thursday.

Having the file in more than one place could be a nightmare if we have to 
maintain consistency between them.
It could be good if we want to protect different properties than Glance does.

Thanks,
Facundo

From: Brian Rosmaita [mailto:brian.rosma...@rackspace.com]
Sent: Tuesday, June 24, 2014 7:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][glance] Update volume-image-metadata 
proposal

Hi Facundo,

Can you attend the Glance meeting this week at 20:00 UTC on Thursday in 
#openstack-meeting-alt ?

I may be misunderstanding what's at stake, but it looks like:
- Glance holds the image metadata (some user-modifiable, some not)
- Cinder copies the image metadata to use as volume metadata (none is 
user-modifiable)
- You want to implement user-modifiable metadata in Cinder, but you don't know 
which items should be mutable and which not.
- You propose to add glance API calls to allow you to figure out property 
protections on a per-property basis.

It looks like the only roles for Glance here are (1) as the original source of 
the image metadata, and then (2) as the source of truth for what image 
properties can be modified on the volume metadata.  For (1), you've already got 
an API call.  For (2), why not use the glance property protection configuration 
file directly?  It's going to be deployed somehow to your glance nodes, you can 
deploy it to your cinder nodes at the same time.  Or you can just use it as the 
basis of a Cinder property protection config file, because I wonder whether in 
the general case, you'll always want volume properties protected exactly the 
same as image properties.  If not, the new API call strategy will force you to 
deal with differences in the code, whereas the config file strategy would move 
dealing with differences to setting up the config file.  So I'm not convinced 
that a new API call is the way to go here.
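
For reference, the file in question is the property protections config that
glance-api already loads; with property_protection_rule_format set to 'roles'
the entries look roughly like this (the property pattern below is only an
example):

    [x_billing_code_.*]
    create = admin,billing
    read = admin,billing
    update = admin,billing
    delete = admin,billing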

But there may be some nuances I'm missing, so it might be easier to discuss at 
the Glance meeting.  The agenda looks pretty light for Thursday if you want to 
add this topic:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

cheers,
brian

From: Maldonado, Facundo N [facundo.n.maldon...@intel.com]
Sent: Tuesday, June 24, 2014 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal
Hi folks,

I started working on this blueprint [1] but the work to be done 
is not limited to cinder python client.
Volume-image-metadata is immutable in Cinder and Glance has 
RBAC image properties and it doesn't provide any way to find out which are 
those protected properties in advance [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] validation of input against column-size specified in schema

2014-06-25 Thread Mark McClain

On Jun 25, 2014, at 12:26 AM, Manish Godara mani...@yahoo-inc.com wrote:
 Hi,
 
 Is there any way in current neutron codebase that can be used to validate
 the length of a string attribute against the max column size specified in
 the schema for that attribute.
 
 E.g. , in models_v2.py
 
 class Network(model_base.BASEV2, HasId, HasTenant):
Represents a v2 neutron network.
 
name = sa.Column(sa.String(255))
...
 
 
 And if I want to validate the 'name' before storing in db, then how can I
 get the max allowable length given this definition?  I don't see any such
 validations being done in neutron for fields, so wondering how to do it.
 Maybe it's there and I missed it.
 


There’s not any length validation currently as the original v2 API spec never 
specified the max field length.  One of the changes being made as part of the 
REST layer refactoring is that we’re adding length validation.
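
Until that lands, the declared size can at least be read straight off the
model if you want to validate by hand -- a quick sketch:

    from neutron.db import models_v2

    # the length declared on the sa.Column(sa.String(255)) definition
    max_len = models_v2.Network.name.property.columns[0].type.length   # 255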

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][volume/manager.py] volume driver mapping

2014-06-25 Thread Duncan Thomas
That's easy; you don't. The mappings are there because we moved some
drivers around during a code cleanup and didn't want old config files
to break during an upgrade. The old names have been deprecated since
Folsom and are finally now removed; new drivers don't need to do any
mapping at all.
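
In other words, just point cinder.conf straight at the driver class; the class
path below is only an example:

    [DEFAULT]
    volume_driver = cinder.volume.drivers.my_vendor.MyISCSIDriver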

On 25 June 2014 14:29, Yogesh Prasad yogesh.pra...@cloudbyte.com wrote:
 Hi All,

 I am observing a bit difference in manager.py file between these branches
 stable/icehouse and master.
 In stable/icehouse various driver mapped in manager.py but it is not in
 master.

 Please guide me, where i have to map my driver.

 Thanks  Regards,
   Yogesh Prasad.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday June 26th at 22:00 UTC

2014-06-25 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, June 26th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


pgpn6OGnUQwQd.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-25 Thread Brant Knudson
On Wed, Jun 25, 2014 at 6:56 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Are there any API changes that will make projects need to fix?


There aren't any API changes. The code was just copy-pasted to the new
package. The changes to the projects are just 1) update the requirements
and 2) change the auth_token middleware from
keystoneclient.middleware.auth_token to keystonemiddleware.auth_token in
api-paste.ini. I've proposed WIP changes to the projects that I had repos
for using the `keystonemiddleware` topic:


https://review.openstack.org/#/q/status:open+branch:master+topic:keystonemiddleware,n,z
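
Concretely, the api-paste.ini change is just the module rename in the filter
factory line, e.g.:

    [filter:authtoken]
    # before
    #paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # after
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory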

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread McCann, Jack
  If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?

It is technically possible to implement default SNAT at the compute node.

One approach would be to use a single IP address per compute node as a
default SNAT address shared by all VMs on that compute node.  While this
optimizes for number of external IPs consumed per compute node, the downside
is having VMs from different tenants sharing the same default SNAT IP address
and conntrack table.  That downside may be acceptable for some deployments,
but it is not acceptable in others.

Another approach would be to use a single IP address per router per compute
node.  This avoids the multi-tenant issue mentioned above, at the cost of
consuming more IP addresses, potentially one default SNAT IP address for each
VM on the compute server (which is the case when every VM on the compute node
is from a different tenant and/or using a different router).  At that point
you might as well give each VM a floating IP.

Hence the approach taken with the initial DVR implementation is to keep
default SNAT as a centralized service.
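
For completeness, the per-compute-node variant being discussed is essentially a
local SNAT rule on each host -- conceptually something like the following, with
made-up addresses (this is not what the current DVR patches do):

    # rewrite outbound tenant traffic to this node's own public address
    # (203.0.113.10 here) when it leaves via the external NIC
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth1 \
        -j SNAT --to-source 203.0.113.10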

- Jack

 -Original Message-
 From: Zang MingJie [mailto:zealot0...@gmail.com]
 Sent: Wednesday, June 25, 2014 6:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut
 
 On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com 
 wrote:
  Hi,
  for each compute node to have SNAT to Internet, I think we have the
  drawbacks:
  1. SNAT is done in router, so each router will have to consume one public IP
  on each compute node, which is money.
 
 SNAT can save more ips than wasted on floating ips
 
  2. for each compute node to go out to Internet, the compute node will have
  one more NIC, which connect to physical switch, which is money too
 
 
 Floating ip also need a public NIC on br-ex. Also we can use a
 separate vlan to handle the network, so this is not a problem
 
  So personally, I like the design:
   floating IPs and 1:N SNAT still use current network nodes, which will have
  HA solution enabled and we can have many l3 agents to host routers. but
  normal east/west traffic across compute nodes can use DVR.
 
 BTW, does HA implementation still active ? I haven't seen it has been
 touched for a while
 
 
  yong sheng gong
 
 
  On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com wrote:
 
  Hi:
 
  In current DVR design, SNAT is north/south direction, but packets have
  to go west/east through the network node. If every compute node is
  assigned a public ip, is it technically able to improve SNAT packets
  w/o going through the network node ?
 
  SNAT versus floating ips, can save tons of public ips, in trade of
  introducing a single failure point, and limiting the bandwidth of the
  network node. If the SNAT performance problem can be solved, I'll
  encourage people to use SNAT over floating ips. unless the VM is
  serving a public service
 
  --
  Zang MingJie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Nova Network configuration allows guest VMs to connect to host services

2014-06-25 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Nova Network configuration allows guest VMs to connect to host services
- ---

### Summary ###
When using Nova Network to manage networking for compute instances,
instances are able to reach network services running on the host
system. This may be a security issue for the operator.

### Affected Services / Software ###
Nova, Folsom, Grizzly, Havana, Icehouse

### Discussion ###
OpenStack deployments using Nova Network, rather than Neutron, for
network configuration will cause the host running the instances to be
reachable on the virtual network. Specifically, booted instances can
check the address of their gateway and try to connect to it. Any host
service which listens on the interfaces created by OpenStack and does
not apply any additional filtering will receive such traffic.

This is a security issue for deployments where the OpenStack service
users are not trusted parties, or should not be allowed to access
underlying services of the host system.

Using a specific example of devstack in default configuration, the
instance spawned inside of it will see the following routing table:

$ ip r s
default via 172.16.1.1 dev eth0
172.16.1.0/24 dev eth0  src 172.16.1.2

The instance can then use the gateway's address (172.16.1.1) to connect
to the sshd service on the host system (if one is running and listening
on all interfaces). The host system will see the connection coming from
interface `br100`.

### Recommended Actions ###
Connections like this can be stopped at various levels (libvirt filters,
specific host's iptables entries, ebtables, network service
configuration). The recommended way to protect against the incoming
connections is to stop the critical services from binding to the
Nova-controlled interfaces.

Using the sshd service as an example, the default configuration on most
systems is to bind to all interfaces and all local addresses
(ListenAddress 0.0.0.0:22 in sshd_config).  In order to configure it only on
a specific interface, use ListenAddress a.b.c.d:22 where a.b.c.d is
the address assigned to the chosen interface. Similar settings can be
found for most other services.

The list of services listening on all interfaces can be obtained by
running command 'netstat -ltu', where the '*:port' in the Local
Address field means the service will likely accept connections from the
local Nova instances.

If filtering of the traffic is chosen instead, care must be taken to
allow traffic coming from the running instances to services controlled
by Nova - DHCP and DNS providers.
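
As a minimal illustration of the filtering approach (the interface name, ports
and rule ordering must be adapted to the deployment), the idea is to accept
only the Nova-controlled services and drop everything else arriving from the
instances' bridge:

    iptables -A INPUT -i br100 -p udp --dport 67 -j ACCEPT   # DHCP
    iptables -A INPUT -i br100 -p udp --dport 53 -j ACCEPT   # DNS
    iptables -A INPUT -i br100 -p tcp --dport 53 -j ACCEPT
    iptables -A INPUT -i br100 -j DROP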

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0018
Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1316271
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTqt1vAAoJEJa+6E7Ri+EVxxcH/jKHpZOebKKwEpj6EqwNQQeV
o1atUb7zvqhSUUYHIBbAgc51bSlWtdvUVq+fH5w1O/PU+C1OMMtDZK7lvQnZDYrA
j5XfEItjon1wAIyaZm96OOlq39PW5gQJN6q1A+/3sV6tpeVsX6VhucJH4tOAillL
vOyuGcBZeWDbf38IZXHulALvkJ6ReNcZzrzSrbpA3n2d7dGhtBiYXV2DMjxvOjDE
qLy+Fe3KAeZWtYgqK6NPKUfNGzIxtoKgvgoOJugp1EWIr9HdwIjTOndsI4owThjC
M6OElLwaROWtpFe1gONiaU1gVDZ2MdXmQPwB4ZMzNFdoAc3cx7IAzHFClei2fug=
=2HqX
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova] How can I obtain compute_node_id in nova

2014-06-25 Thread Steve Martinelli
You should probably email the openstack-dev
mailing list (openstack-dev@lists.openstack.org) or better yet, ask for
help in #openstack-nova on IRC.

Regards,

Steve Martinelli
Software Developer - OpenStack
Keystone Core Member
Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com
8200 Warden Ave, Markham, ON L6G 1C7, Canada



From: afe.yo...@gmail.com afe.yo...@gmail.com
To: openst...@lists.openstack.org
Date: 06/25/2014 06:09 AM
Subject: [Openstack] [nova] How can I obtain compute_node_id in nova





I found a bug recently and reported it here: https://bugs.launchpad.net/nova/+bug/1333498

The function requires compute_node_id as its parameter.

I'm trying to fix this bug. However I fail to find
any way to obtain the compute_node_id.

Any help will be greatly appreciated!


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to   : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-25 Thread ZZelle
Hi everyone,


A new change (https://review.openstack.org/101982) has been proposed to
improve vxlan pool initiation by deleting obsolete unallocated vnis with a
single DELETE SQL statement.
I've tested performance with the following (delete only) scenario: the vxlan
range is changed from 0:100000 to 50000:100000.
The scenario code is available here: http://paste.openstack.org/show/84882

50k vnis to delete   PostgreSQL   MySQL   SQLite
current code         6.0          5.5     5.1
proposed code        3.2          3.3     3.2


The gain is from 40% to 50%.


Raw results: http://paste.openstack.org/show/84890







On Mon, Jun 9, 2014 at 3:38 PM, Eugene Nikanorov enikano...@mirantis.com
wrote:

 Mike,

 Thanks a lot for your response!
 Some comments:
  There's some in-Python filtering following it which does not seem
 necessary; the alloc.vxlan_vni not in vxlan_vnis phrase
  could just as well be a SQL NOT IN expression.
 There we have to do specific set intersection between configured ranges
 and existing allocation. That could be done in sql,
 but that certainly would lead to a huge sql query text as full vxlan range
 could consist of 16 millions of ids.

   The synchronize_session=fetch is certainly a huge part of the time
 spent here
 You've actually made a good point about synchronize_session=fetch which
 was obviously misused by me.
 It seems to save up to 40% of plain deleting time.

 I've fixed that and get some speedup with deletes for both mysql and
 postgress that reduced difference between chunked/non-chunked version:

 50k vnis to add/delete  Pg adding  Pg deleting  Pg total  Mysql adding  Mysql deleting  Mysql total
 non-chunked sql         22         15           37        15            15              30
 chunked in 100          20         13           33        14            14              28

 Results of chunked and non-chunked version look closer, but gap increases
 with vni range size (based on few tests of 150k vni range)

 So I'm going to fix chunked version that is on review now. If you think
 that the benefit doesn't worth complexity - please let me know.

 Thanks,
 Eugene.

 On Mon, Jun 9, 2014 at 1:33 AM, Mike Bayer mba...@redhat.com wrote:


 On Jun 7, 2014, at 4:38 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi folks,

 There was a small discussion about the better way of doing sql operations
 for vni synchronization with the config.
 Initial proposal was to handle those in chunks. Carl also suggested to
 issue a single sql query.
 I've did some testing with my sql and postgress.
 I've tested the following scenario: vxlan range is changed from
 50000:150000 to 0:100000 and vice versa.
 That involves adding and deleting 50000 vnis in each test.

 Here are the numbers:
  50k vnis to add/deletePg adding vnisPg deleting vnis Pg TotalMysql
 adding vnis Mysql deleting vnisMysql totalnon-chunked sql 232245 142034 
 chunked
 in 10020 173714 1731

 I've done about 5 tries to get each number to minimize random floating
 factor (due to swaps, disc or cpu activity or other factors)
 That might be surprising that issuing multiple sql statements instead one
 big is little bit more efficient, so I would appreciate if someone could
 reproduce those numbers.
 Also I'd like to note that part of code that iterates over vnis fetched
 from db is taking 10 seconds both on mysql and postgress and is a part of
 deleting vnis numbers.
 In other words, difference between multiple DELETE sql statements and
 single one is even bigger (in percent) than these numbers show.

 The code which I used to test is here:
 http://paste.openstack.org/show/83298/
 Right now the chunked version is commented out, so to switch between
 versions some lines should be commented and some - uncommented.


 I've taken a look at this, though I'm not at the point where I have
 things set up to run things like this within full context, and I don't know
 that I have any definitive statements to make, but I do have some
 suggestions:

 1. I do tend to chunk things a lot, selects, deletes, inserts, though the
 chunk size I work with is typically more like 1000, rather than 100.   When
 chunking, we're looking to select a size that doesn't tend to overload the
 things that are receiving the data (query buffers, structures internal to
 both SQLAlchemy as well as the DBAPI and the relational database), but at
 the same time doesn't lead to too much repetition on the Python side (where
 of course there's a lot of slowness).

 2. Specifically regarding WHERE x IN (.), I always chunk those.  When
 we use IN with a list of values, we're building an actual SQL string that
 becomes enormous.  This puts strain on the database's query engine that is
 not optimized for SQL strings that are hundreds of thousands of characters
 long, and on some backends this size is limited; on Oracle, there's a limit
 of 1000 items.   So I'd always chunk this kind of thing.

 3. I'm not sure of the broader context of this code, but in fact placing
 a literal list of items in the IN in this case seems unnecessary; the
 vmis_to_remove list itself was just SELECTed two lines above.   There's

[openstack-dev] [swift] [trove] Configuration option descriptions

2014-06-25 Thread Anne Gentle
Hi swift and trove devs,
In working on our automation to document the configuration options across
OpenStack, we uncovered a deficit in both trove and swift configuration
option descriptions.

Here's an example:
http://docs.openstack.org/trunk/config-reference/content/container-sync-realms-configuration.html
You'll notice that many of those options do not have help text. We need
swift developers to fill in those in your code base so we can continue to
generate this document.

Trove devs, we are finding similar gaps in config option information in
your source code as well. Please make it a priority to fill these in before we
automate those options in the docs.
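
For projects using oslo.config (trove does; swift's paste-style configs are
handled separately), filling in a missing description is just a matter of
populating the help kwarg. A purely illustrative example, not an actual trove
option definition:

    from oslo.config import cfg

    opts = [
        cfg.IntOpt('agent_heartbeat_time',
                   default=10,
                   help='Maximum time (in seconds) to wait between guest '
                        'agent heartbeats before considering the guest '
                        'unresponsive.'),
    ]
    cfg.CONF.register_opts(opts)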

If you have any questions, let us know through the openstack-docs mailing
list.
Thanks,
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Token invalidation in deleting role assignments

2014-06-25 Thread Dolph Mathews
This is a known limitation of the token backend and the token revocation
list: we don't index tokens in the backend by roles (and we don't want to
iterate the token table to find matching tokens).

However, if we land support for token revocation events [1] in the
auth_token [2] middleware, we'll be able to deny tokens with invalid roles
as they are presented to other services.

[1]
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-revoke-ext.md
[2] https://launchpad.net/keystonemiddleware
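
For context, the extension in [1] exposes a list of small revocation events that
the middleware can poll and apply locally; an event is just a set of matching
attributes plus a cutoff time, roughly along these lines (values are made up):

    {
        "events": [
            {
                "role_id": "f5b18ccc27bb4f618d8b2b61369462ba",
                "user_id": "2d3f8dd3e1754f8f9fd8b3f5e9a096ca",
                "issued_before": "2014-06-25T09:51:25.000000Z"
            }
        ]
    }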


On Wed, Jun 25, 2014 at 1:19 AM, Takashi Natsume 
natsume.taka...@lab.ntt.co.jp wrote:

 Hi all,

 When deleting role assignments, not only tokens that are related with
 deleted role assignments but also other tokens that the (same) user has are
 invalidated in stable/icehouse(2014.1.1).

 For example,
 A) Role assignment between domain and user by OS-INHERIT(*1)
 1. Assign a role(For example,'Member') between 'Domain1' and 'user' by
 OS-INHERIT
 2. Assign the role('Member') between 'Domain2' and 'user' by OS-INHERIT
 3. Get a token with specifying 'user' and 'Project1'(in 'Domain1')
 4. Get a token with specifying 'user' and 'Project2'(in 'Domain2')
 5. Create resources (for example, cinder volumes) in 'Project1' with the
 token
 that was gotten in 3.
 it is possible to create them.
 6. Create resources in 'Project2' with the token that was gotten in 4.
 it is possible to create them.
 7. Delete the role assignment between 'Domain1' and 'user' (that was added
 in 1.)

 (After validated token cache is expired in cinder, etc.)
 8. Create resources in 'Project1' with the token that was gotten in 3.
 it is not possible to create them. 401 Unauthorized.
 9. Create resources in 'Project2' with the token that was gotten in 4.
 it is not possible to create them. 401 Unauthorized.

 In 9., my expectation is that it is possible to create resources with the
 token that was gotten in 4..

 *1:

 v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_
 to_projects

 B) Role assignment between project and user
 1. Assign a role(For example,'Member') between 'Project1' and 'user'
 2. Assign the role('Member') between 'Project2' and 'user'
 3. Get a token with specifying 'user' and 'Project1'
 4. Get a token with specifying 'user' and 'Project2'
 5. Create resources (for example, cinder volumes) in 'Project1' with the
 token
 that was gotten in 3.
 it is possible to create them.
 6. Create resources in 'Project2' with the token that was gotten in 4.
 it is possible to create them.
 7. Delete the role assignment between 'Project1' and 'user' (that was added
 in 1.)

 (After validated token cache is expired in cinder, etc.)
 8. Create resources in 'Project1' with the token that was gotten in 3.
 it is not possible to create them. 401 Unauthorized.
 9. Create resources in 'Project2' with the token that was gotten in 4.
 it is not possible to create them. 401 Unauthorized.

 In 9., my expectation is that it is possible to create resources with the
 token that was gotten in 4..


 Are these bugs?
 Or are there any reasons for implementing it this way?

 Regards,
 Takashi Natsume
 NTT Software Innovation Center
 Tel: +81-422-59-4399
 E-mail: natsume.taka...@lab.ntt.co.jp




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] 0 byte image is created with instance snapshot if instance is booted using volume

2014-06-25 Thread Agrawal, Ankit
Hi All,

When I boot an instance from a volume and then take a snapshot of that instance, it
creates a volume snapshot and also an image of 0 bytes.
I can boot a new instance using this 0-byte image, but I am not sure why this
image with 0 bytes is created here?

Can someone please help me to understand this.

Thanks,
Ankit Agrawal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [nova] How can I obtain compute_node_id in nova

2014-06-25 Thread Sylvain Bauza
Hi Afe,

On 25/06/2014 12:01, afe.yo...@gmail.com wrote:
 Any help will be greatly appreciated! 

 -- Forwarded message --
 From: afe.yo...@gmail.com
 Date: Wed, Jun 25, 2014 at 5:53 PM
 Subject: [nova] How can I obtain compute_node_id in nova
 To: openst...@lists.openstack.org



 I found a bug recently and reported it
 here https://bugs.launchpad.net/nova/+bug/1333498

 The function  requires compute_node_id as its parameter.  
 I'm trying  to fix this bug. However I fail to find any way to obtain
 the compute_node_id.


Thanks for your bug report. There is already a patch proposed for
removing cn_id in PCITracker [1], so it will close your bug once merged.

-Sylvain

[1] https://review.openstack.org/102298

 Any help will be greatly appreciated! 






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Jenkins faillure

2014-06-25 Thread Édouard Thuleau
Hi,

I got a jenkins failure on that small fix [1] on OpenContrail.
Here the last lines console output:

2014-06-25 07:02:55
RunUnitTest([build/debug/bgp/rtarget/test/rtarget_table_test.log],
[build/debug/bgp/rtarget/test/rtarget_table_test])
2014-06-25 07:02:56
/home/jenkins/workspace/ci-contrail-controller-unittest/repo/build/debug/bgp/rtarget/test/rtarget_table_test
FAIL
2014-06-25 07:02:56 scons: ***
[build/debug/bgp/rtarget/test/rtarget_table_test.log] Error -4
2014-06-25 07:02:56 scons: building terminated because of errors.
2014-06-25 07:02:59 Build step 'Execute shell' marked build as failure
2014-06-25 07:03:00 Finished: FAILURE

I don't think that failure is related to my patch. What can I do?

[1] https://review.opencontrail.org/#/c/526/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins faillure

2014-06-25 Thread Anita Kuno
On 06/25/2014 12:07 PM, Édouard Thuleau wrote:
 Hi,
 
 I got a jenkins failure on that small fix [1] on OpenContrail.
 Here the last lines console output:
 
 2014-06-25 07:02:55
 RunUnitTest([build/debug/bgp/rtarget/test/rtarget_table_test.log],
 [build/debug/bgp/rtarget/test/rtarget_table_test])
 2014-06-25 07:02:56
 /home/jenkins/workspace/ci-contrail-controller-unittest/repo/build/debug/bgp/rtarget/test/rtarget_table_test
 FAIL
 2014-06-25 07:02:56 scons: ***
 [build/debug/bgp/rtarget/test/rtarget_table_test.log] Error -4
 2014-06-25 07:02:56 scons: building terminated because of errors.
 2014-06-25 07:02:59 Build step 'Execute shell' marked build as failure
 2014-06-25 07:03:00 Finished: FAILURE
 
 I don't think that failure is related to my patch. What I can do?
 
 [1] https://review.opencontrail.org/#/c/526/
 
 Regards,
 Édouard.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
This seems to be a gerrit system that is not OpenStack's gerrit system.
Have you tried to evaluate this situation with the maintainers of this
gerrit system?

There are many reasons it might fail, but the maintainers of the gerrit
system you are using are probably the best place to begin.

If this is a system question that relates to a third party ci system
that interacts with OpenStack's gerrit, please post to the infra mailing
list at openstack-in...@lists.openstack.org

Thanks Édouard,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][infra] etherpad on elastic-recheck testing improvements

2014-06-25 Thread Matt Riedemann
Sean asked me to jot some thoughts down on how we can automate some of 
our common review criteria for elastic-recheck queries, so that's here:


https://etherpad.openstack.org/p/elastic-recheck-testing

There is some low-hanging fruit in there I think, but the bigger / more
useful change is actually automating running the proposed query against
ES and validating the results within some defined criteria.
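
For anyone unfamiliar with the repo, a query today is just a small YAML file
named after the bug it tracks, so there is a natural place to hang automated
checks. A hypothetical example (bug number and message made up):

    # queries/1234567.yaml
    query: >
      message:"Details: Request timed out" AND
      filename:"console.html" AND
      build_status:"FAILURE"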


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-25 Thread Giulio Fidente

On 06/16/2014 11:14 PM, Clint Byrum wrote:

Excerpts from Gregory Haynes's message of 2014-06-16 14:04:19 -0700:

Excerpts from Jan Provazník's message of 2014-06-16 20:28:29 +:

Hi,
MariaDB is now included in the Fedora repositories, which makes it an easier to
install and more stable option for Fedora installations. Currently
MariaDB can be used by including the mariadb (use mariadb.org pkgs) or
mariadb-rdo (use Red Hat RDO pkgs) element when building an image. What
do you think about using MariaDB as the default option for Fedora when
running the devtest scripts?
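
(For illustration only: switching the default would mean the Fedora image builds
pull in one of those elements in place of mysql, conceptually something like the
following, where everything except the element names from this thread is made up.)

    disk-image-create -a amd64 -o overcloud-control \
        fedora mariadb-rdo <other elements...>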


(first, I believe Jan means that MariaDB _Galera_ is now in Fedora)


I think so too.


I'd like to give this a try. This does start to change us from being a
deployment of OpenStack to being a deployment per distro, but IMO that's a
reasonable position.

I'd also like to propose that if we decide against doing this then these
elements should not live in tripleo-image-elements.


I'm not so sure I agree. We have lio and tgt because lio is on RHEL but
everywhere else is still using tgt IIRC.

However, I also am not so sure that it is actually a good idea for people
to ship on MariaDB since it is not in the gate. As it diverges from MySQL
(starting in earnest with 10.x), there will undoubtedly be subtle issues
that arise. So I'd say having MariaDB get tested along with Fedora will
actually improve those users' test coverage, which is a good thing.


I am favourable to the idea of switching to mariadb for fedora based 
distros.


Currently the default mysql element seems to be switching [1], yet for 
ubuntu/debian only, from the percona provided binary tarball of mysql to 
the percona provided packaged version of mysql.


In theory we could further update it to use percona-provided packages of
mysql on fedora too, but I'm not sure there is much interest in using
that combination when people get mariadb and galera from the official
repos.


Using different defaults (and even dropping support for one or the other,
depending on the distro) seems to me a better approach in the long term.


Are there contrary opinions?

1. https://review.openstack.org/#/c/90134

--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

2014-06-25 Thread Jorge Miramontes
Hey Andres,

Sorry for the late reply. I was out of town all last week. I would suggest 
continuing the email thread before we put this on a wiki somewhere so others 
can chime in.

Cheers,
--Jorge

From: Buraschi, Andres andres.buras...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, June 16, 2014 10:06 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi Jorge, thanks for your reply! You are right about summarizing too much. The
idea is to identify which kinds of data could be retrieved in a summarized way
without losing detail (e.g. uptime can be better described with start-end
timestamps than with lots of samples with up/down status), or simply to provide
different levels of granularity and let the user decide (yes, that can
sometimes be dangerous).
Having said this, how could we share the current metrics intended to be 
exposed? Is there a document or should I follow the “Requirements around 
statistics and billing” thread?

Thank you!
Andres

From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Thursday, June 12, 2014 6:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hey Andres,

In my experience with usage gathering, consolidating statistics at the root
layer is usually a bad idea. The reason is that you lose potentially useful
information once you consolidate data. When it comes to troubleshooting issues 
(such as billing) this lost information can cause problems since there is no 
way to replay what had actually happened. That said, there is no free lunch 
and keeping track of huge amounts of data can be a huge engineering challenge. 
We have a separate thread on what kinds of metrics we want to expose from the 
LBaaS service so perhaps it would be nice to understand these in more detail.

Cheers,
--Jorge

From: Buraschi, Andres andres.buras...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Tuesday, June 10, 2014 3:34 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][LBaaS] Consolidated metrics proposal

Hi, we have been struggling with getting a meaningful set of metrics from LB 
stats thru ceilometer, and from a discussion about module responsibilities for 
providing data, an interesting idea came up. (Thanks Pradeep!)
The proposal is to consolidate some kinds of metrics, such as pool up time (hours)
and average or historic response times of VIPs and listeners, to avoid having 
ceilometer querying for the state so frequently. There is a trade-off between 
fast response time (high sampling rate) and reasonable* amount of cumulative 
samples.
The next step in order to give more detail to the idea is to work on a use 
cases list to better explain / understand the benefits of this kind of data 
grouping.

What do you think about this?
Do you find it will be useful to have some processed metrics on the 
loadbalancer side instead of the ceilometer side?
Do you identify any measurements about the load balancer that could not be 
obtained/calculated from ceilometer?
Perhaps this could be the base for other stats gathering solutions that may be 
under discussion?

Andres
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed locking

2014-06-25 Thread Joshua Harlow
Could you expand on this and how it would work?

I'm pretty skeptical of new ad-hoc locking implementations, so I just want to
ensure it's fleshed out in detail.

What would the two local locks be, where would they be, and what would the
'conducting' be doing to coordinate?

-Original Message-
From: John Garbutt j...@johngarbutt.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, June 25, 2014 at 1:08 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Distributed locking

So just to keep the ML up with some of the discussion we had in IRC
the other day...

Most resources in Nova are owned by a particular nova-compute. So the
locks on the resources are effectively held by the nova-compute that
owns the resource.

We already effectively have a cross nova-compute lock holding in the
capacity reservations during migrate/resize.

But to cut a long story short, if the image cache is actually just a
copy from one of the nova-compute nodes that already have that image
into the local (shared) folder for another nova-compute, then we can
get away without a global lock, and just have two local locks on
either end and some conducting to co-ordinate things.

Its not perfect, but its an option.

Thanks,
John


On 17 June 2014 18:18, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Matthew Booth's message of 2014-06-17 01:36:11 -0700:
 On 17/06/14 00:28, Joshua Harlow wrote:
  So this is a reader/write lock then?
 
  I have seen https://github.com/python-zk/kazoo/pull/141 come up in
the
  kazoo (zookeeper python library) but there was a lack of a
maintainer for
  that 'recipe', perhaps if we really find this needed we can help get
that
  pull request 'sponsored' so that it can be used for this purpose?
 
 
  As far as resiliency, the thing I was thinking about was how correct do you
  want this lock to be?
 
  If you say go with memcached and a locking mechanism using it, this will not
  be correct but it might work well enough under normal usage. So that's why
  I was wondering about what level of correctness do you want and what
do
  you want to happen if a server that is maintaining the lock record
dies.
  In memcache's case this will literally be 1 server, even if sharding
is
  being used, since a key hashes to one server. So if that one server
goes
  down (or a network split happens) then it is possible for two
entities to
  believe they own the same lock (and if the network split recovers
this
  gets even weirder); so that's what I was wondering about when
mentioning
  resiliency and how much incorrectness you are willing to tolerate.

 From my POV, the most important things are:

 * 2 nodes must never believe they hold the same lock
 * A node must eventually get the lock


 If these are musts, then memcache is a no-go for locking. memcached is
 likely to delete anything it is storing in its RAM, at any time. Also
 if you have several memcache servers, a momentary network blip could
 lead to acquiring the lock erroneously.

 The only thing it is useful for is coalescing, where a broken lock just
 means wasted resources, erroneous errors, etc. If consistency is needed,
 then you need a consistent backend.
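
  For comparison, with a consistent backend the lock itself is simple; a sketch
  using a ZooKeeper lock via kazoo (hosts, paths and the protected function are
  made up):

      from kazoo.client import KazooClient

      zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
      zk.start()

      # Blocks until acquired; the lock is tied to the ZooKeeper session, so a
      # dead holder's lock is released automatically when its session expires.
      lock = zk.Lock('/nova/image-cache/ami-00000001', identifier='compute-01')
      with lock:
          populate_image_cache()  # placeholder for the protected work

      zk.stop()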

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-25 Thread Ade Lee
I think the plan is to create a Dogtag instance so that integration
tests can be run whenever code is checked in (both with and without a
Dogtag backend).

Dogtag isn't that difficult to deploy, but being a Java app, it does
bring in a set of dependencies that developers may not want to deal with
for basic/devstack testing.

So, I agree that a simple OpenSSL CA may be useful at least initially as
a 'dev' plugin.

Ade
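
For the 'barebones' case, the dev plugin would not need much more than the
classic OpenSSL self-signed CA routine. Roughly (illustrative shell commands,
not the proposed plugin code):

    # one time: create the toy CA
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout ca.key -out ca.crt -subj "/CN=Barbican Dev CA"

    # per certificate order: sign the submitted CSR
    openssl x509 -req -in request.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 90 -out cert.pem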

On Wed, 2014-06-25 at 16:31 +, Jarret Raim wrote:
 Rob,
 
 RedHat is working on a backend for Dogtag, which should be capable of
 doing something like that. That's still a bit hard to deploy, so it would
 make sense to extend the 'dev' plugin to include those features.
 
 
 Jarret
 
 
 On 6/24/14, 4:04 PM, Clark, Robert Graham robert.cl...@hp.com wrote:
 
 Yeah pretty much.
 
 That's something I'd be interested to work on, if work isn't ongoing
 already.
 
 -Rob
 
 
 
 
 
 On 24/06/2014 18:57, John Wood john.w...@rackspace.com wrote:
 
 Hello Robert,
 
 I would actually hope we have a self-contained certificate plugin
 implementation that runs 'out of the box' to enable certificate
 generation orders to be evaluated and demo-ed on local boxes.
 
 Is this what you were thinking though?
 
 Thanks,
 John
 
 
 
 
 From: Clark, Robert Graham [robert.cl...@hp.com]
 Sent: Tuesday, June 24, 2014 10:36 AM
 To: OpenStack List
 Subject: [openstack-dev] [Barbican] Barebones CA
 
 Hi all,
 
 I'm sure this has been discussed somewhere and I've just missed it.
 
 Is there any value in creating a basic 'CA' and plugin to satisfy
 tests/integration in Barbican? I'm thinking something that probably
 performs OpenSSL certificate operations itself, ugly but perhaps useful
 for some things?
 
 -Rob
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-25 Thread Carlino, Chuck (OpenStack TripleO, Neutron)
Is $179/day the expected rate?

Thanks,
Chuck

On Jun 25, 2014, at 2:34 AM, Jaromir Coufal jcou...@redhat.com wrote:

 Thanks a lot for your help.
 
 Just a side note - we need to fill in the number of requested rooms, so that 
 we don't get charged for extra cost - we have a group discount price.
 
 So for everybody, please, go forward and book your room here:
 http://tinyurl.com/redhat-marriott
 
 -- Jarda
 
 On 2014/24/06 17:49, Jordan OMara wrote:
 On 24/06/14 10:55 -0400, Jordan OMara wrote:
 On 20/06/14 16:26 -0400, Charles Crouch wrote:
 Any more takers for the tripleo mid-cycle meetup in Raleigh? If so,
 please
 sign up on the etherpad below.
 
 The hotel group room rate will be finalized on Monday Jul 23rd (US
 time), after that time you will be on your own for finding
 accommodation.
 
 Thanks
 Charles
 
 
 Just an update that I've got us a block of rooms reserved at the
 nearest, cheapest hotel (the Marriott in downtown Raleigh, about 200
 yards from the Red Hat office) - I'll have details on how to actually
 book at this rate in just a few minutes.
 
 Please use the following link to reserve at the marriott (it's copied
 on the etherpad)
 
 http://tinyurl.com/redhat-marriott
 
 We have a 24-room block reserved at that rate from SUN-FRI
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-25 Thread Day, Phil
 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 25 June 2014 11:49
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
 nova list/show?
 
 On 06/25/2014 04:28 AM, Belmiro Moreira wrote:
  I like the current behavior of not changing the VM state if
  nova-compute goes down.
 
  The cloud operators can identify the issue in the compute node and try
  to fix it without users noticing. Depending in the problem I can
  inform users if instances are affected and change the state if necessary.
 
  I wouldn't like is to expose any failure in nova-compute to users and
  be contacted because VM state changed.
 
 Agreed. Plus in the perfectly normal case of an upgrade of a compute node,
 it's expected that nova-compute is going to be down for some period of
 time, and it's 100% expected that the VMs remain up and ACTIVE over that
 period.
 
 Setting VMs to ERROR would totally gum that up.
 
+1 that the state shouldn't be changed.

What about if we exposed the last updated time to users and allowed them to
decide if it's significant or not?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-06-25 Thread Jordan OMara

On 25/06/14 18:20 +, Carlino, Chuck (OpenStack TripleO, Neutron) wrote:

Is $179/day the expected rate?

Thanks,
Chuck


Yes, that's the best rate available from both of the downtown
(walkable) hotels.
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


pgpq2YbXSIITm.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-25 Thread Jay Pipes

On 06/24/2014 09:51 PM, Steve Kowalik wrote:

On 25/06/14 07:26, Mark McLoughlin wrote:

There's two sides to this coin - concern about alienating
non-english-as-a-first-language speakers who feel undervalued because
their language is nitpicked to death and concern about alienating
english-as-a-first-language speakers who struggle to understand unclear
or incorrect language.

Obviously there's a balance to be struck there and different people will
judge that differently, but I'm personally far more concerned about the
former rather than the latter case.

I expect many beyond the english-as-a-first-language world are pretty
used to dealing with imperfect language but aren't so delighted with
being constantly reminded that their use of language is imperfect.


Just to throw my two cents into the ring, when I comment about language
use in a review, I will almost always include suggested wording in full.
If I can't come to a decision about whether my wording is better, then I
don't comment on it.


This is exactly my strategy as well.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-25 Thread Joe Gordon
On Wed, Jun 25, 2014 at 11:26 AM, Day, Phil philip@hp.com wrote:

  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 25 June 2014 11:49
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] should we have a stale data
 indication in
  nova list/show?
 
  On 06/25/2014 04:28 AM, Belmiro Moreira wrote:
   I like the current behavior of not changing the VM state if
   nova-compute goes down.
  
   The cloud operators can identify the issue in the compute node and try
   to fix it without users noticing. Depending in the problem I can
   inform users if instances are affected and change the state if
 necessary.
  
   I wouldn't like is to expose any failure in nova-compute to users and
   be contacted because VM state changed.
 
  Agreed. Plus in the perfectly normal case of an upgrade of a compute node
  it's expected that nova-compute is going to be down for some period of
  time, and it's 100% expected that the VMs remain up and ACTIVE over that
  period.
 
  Setting VMs to ERROR would totally gum that up.
 
 +1 that the state shouldn't be changed.

 What about if we exposed the last updated time to users and allowed them
 to decide if its significant or not ?


I have changed my mind on this one. I agree we shouldn't change any state,
and I also do not think we should show the last update time to the user
either. I don't think showing that information would be very helpful to
users, if at all, and would train users to poll nova more.

We don't want folks using nova list/show to check if their instance is
functional or not.  A user should care about whether their instance is operating
as expected; nova misbehaving isn't the only reason an instance may go
haywire (the service they are running inside the instance can crash etc.).
 So we should expect users to be able to monitor the health of their
instance without needing to poll nova on a regular basis.

Thoughts?


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal

2014-06-25 Thread Tripp, Travis S
From: Brian Rosmaita [mailto:brian.rosma...@rackspace.com]
Or you can just use it as the basis of a Cinder property protection config 
file,
because I wonder whether in the general case, you'll always want volume
properties protected exactly the same as image properties.  If not, the new
API call strategy will force you to deal with differences in the code, whereas
the config file strategy would move dealing with differences to setting up the 
config file.

Once an instance is launched, the image properties are treated the same and
lose the distinction of whether they came from an image or a bootable volume in
Nova. I agree with Facundo that maintaining consistency between configuration
files sounds like a configuration-drift risk, opening the opportunity to
bypass protected properties. Also, commands in Cinder like upload-to-image may
fail because a bootable volume is created with image properties that Glance
doesn't actually allow.

Why not have a single source of truth for protected properties coming from 
Glance? A small possible downside I see is that the Glance API will get hit 
more often, but maybe we can optimize that?

This does sound like a good topic for the Glance meeting, but since it is a 
Cinder topic as well, it would be good to get Cinder team feedback.

-Travis

From: Maldonado, Facundo N [mailto:facundo.n.maldon...@intel.com]
Sent: Wednesday, June 25, 2014 7:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][glance] Update volume-image-metadata 
proposal

Thanks for the response, I'll be there this Thursday.

Having the file in more than one place could be a nightmare if we have to
maintain consistency between the copies.
It could be good if we want to protect different properties than Glance.

Thanks,
Facundo

From: Brian Rosmaita [mailto:brian.rosma...@rackspace.com]
Sent: Tuesday, June 24, 2014 7:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][glance] Update volume-image-metadata 
proposal

Hi Facundo,

Can you attend the Glance meeting this week at 20:00 UTC on Thursday in 
#openstack-meeting-alt ?

I may be misunderstanding what's at stake, but it looks like:
- Glance holds the image metadata (some user-modifiable, some not)
- Cinder copies the image metadata to use as volume metadata (none is 
user-modifiable)
- You want to implement user-modifiable metadata in Cinder, but you don't know 
which items should be mutable and which not.
- You propose to add glance API calls to allow you to figure out property 
protections on a per-property basis.

It looks like the only roles for Glance here are (1) as the original source of 
the image metadata, and then (2) as the source of truth for what image 
properties can be modified on the volume metadata.  For (1), you've already got 
an API call.  For (2), why not use the glance property protection configuration 
file directly?  It's going to be deployed somehow to your glance nodes, you can 
deploy it to your cinder nodes at the same time.  Or you can just use it as the 
basis of a Cinder property protection config file, because I wonder whether in 
the general case, you'll always want volume properties protected exactly the 
same as image properties.  If not, the new API call strategy will force you to 
deal with differences in the code, whereas the config file strategy would move 
dealing with differences to setting up the config file.  So I'm not convinced 
that a new API call is the way to go here.
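
For reference, the property protections file is a plain ini-style file keyed by
a property-name regex, so deriving a Cinder-side equivalent (or consuming
Glance's directly) is mostly a parsing exercise. An illustrative fragment with
made-up property names and roles:

    [^x_billing_.*]
    create = admin
    read = admin,member
    update = admin
    delete = admin

    [.*]
    create = admin,member
    read = admin,member
    update = admin,member
    delete = admin,member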

But there may be some nuances I'm missing, so it might be easier to discuss at 
the Glance meeting.  The agenda looks pretty light for Thursday if you want to 
add this topic:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

cheers,
brian

From: Maldonado, Facundo N [facundo.n.maldon...@intel.com]
Sent: Tuesday, June 24, 2014 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder][glance] Update volume-image-metadata proposal
Hi folks,

I started working on this blueprint [1] but the work to be done
is not limited to the cinder python client.
Volume-image-metadata is immutable in Cinder, and Glance has
RBAC image properties but doesn't provide any way to find out which
protected properties those are in advance [2].

I want to share this proposal and get feedback from you.

https://docs.google.com/document/d/1XYEqGOa30viOyZf8AiwkrCiMWGTfBKjgmeYBptaCHlM/


Thanks,
Facundo

[1] 
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata
[2] 
http://openstack.10931.n7.nabble.com/Cinder-Confusion-about-the-respective-use-cases-for-volume-s-admin-metadata-metadata-and-glance-imaga-td39849.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [nova] should we have a stale data indication in nova list/show?

2014-06-25 Thread Ahmed RAHAL

Le 2014-06-25 14:26, Day, Phil a écrit :

-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: 25 June 2014 11:49
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] should we have a stale data indication in
nova list/show?



+1 that the state shouldn't be changed.

What about if we exposed the last updated time to users and allowed them to 
decide if its significant or not ?



This would just indicate the last operation's time stamp.
There already is a field in nova show called 'updated' that has some 
kind of indication. I honestly do not know who updates that field, but 
if anything, this existing field could/should be used.
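
For what it's worth, that field is already visible to users today, e.g. (output
trimmed, values made up):

    $ nova show 8c4a1fca-5bf4-4f8b-9b58-67f0bdbc8875 | grep -E '\| (status|updated)'
    | status  | ACTIVE               |
    | updated | 2014-06-25T18:03:12Z |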



Ahmed.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Upgrade of Hadoop components inside released version

2014-06-25 Thread Erik Bergenholtz
Team -

Please see in-line for my thoughts/opinions on the topic:


 From: Andrew Lazarev alaza...@mirantis.com
 Subject: [openstack-dev] [sahara] Upgrade of Hadoop components inside 
 released version
 Date: June 24, 2014 at 5:20:27 PM EDT
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Reply-To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 
 Hi Team,
 
 I want to raise a topic about upgrading components in a Hadoop version that is
 already supported by a released Sahara plugin. The question is raised because
 of several change requests [1] and [2]. The topic was discussed in Atlanta
 ([3]), but we didn't come to a decision.

Any future policy that is put in place must provide the ability for a plugin to 
move forward in terms of functionality. Each plugin, depending on its 
implementation is going to have limitations, sometimes with backwards 
compatibility. This is not a function of Sahara proper, but possibly of Hadoop 
and or the distribution in question that the plugin implements. Each 
vendor/plugin should be allowed to control what they do or do not support.

With regards to the code submissions that are being delayed by lack of 
backwards compatibility policy ([1] [2]), it is my opinion that they should be 
allowed to move forward as there is no policy in place that is being challenged 
and/or violated. However, these code submission serve as a good vehicle for 
discussing said compatibility policy.

 
 All of us agreed that existing clusters must continue to work after an
 OpenStack upgrade. So if a user creates a cluster with Icehouse Sahara and then
 upgrades OpenStack - everything should continue working as before. The most
 tricky operation is scaling, and it dictates a list of restrictions on a new
 version of a component:
 
 1. the plugin-version pair supported by the plugin must not change
 2. if the component upgrade requires DIB to be involved, then the plugin must work
 with both versions of the image - old and new
 3. a cluster with mixed nodes (created by old code and by new code) should
 still be operational
 
 Given that, we should choose a policy for component upgrades. Here are several
 options:
 
 1. Prohibit component upgrades in released versions of a plugin. Change the plugin
 version even if the hadoop version didn't change. This solves all the listed
 problems but is a little frustrating for users. They will need to recreate
 all the clusters they have and migrate data as if it were a hadoop upgrade. They
 should also consider a Hadoop upgrade so that they only have to migrate once.

Re-creating a cluster just because the version of a plugin (or Sahara) has 
changed is very unlikely to occur in the real world as this could easily 
involve 1,000’s of nodes and many petabytes of data. There must be a more 
compelling reason to recreate a cluster than plugin/sahara has changed. What’s 
more likely is that cluster that is provisioned which is rendered incompatible 
with a future version of a plugin will result in an administrator making use of 
the ‘native’ management capabilities provided by the Hadoop distribution; in 
the case of HDP, this would be Ambari. Clusters can be completely managed 
through Ambari, including migration, scaling etc. It’s only the VM resources 
that are not managed by Ambari, but this is a relatively simple proposition.

 
 2. Disable some operations on clusters created by the previous version. If
 users don't have the option to scale a cluster, there will be no problems with
 mixed nodes. For this option Sahara needs to know whether the cluster was created
 by this version or not.

If for some reason a change is introduced in a plugin that renders it 
incompatible across either Hadoop OR OpenStack versions, it should still be 
possible to make such a change in favor of moving the state of the art forward. 
Such incompatibility may be difficult (read expensive) or impossible to avoid. 
The requirement should be to specify the upgrade/migration support (through 
documentation) specifically with respect to scaling.

 
 3. Require the change author to perform all kinds of tests and prove that a mixed
 cluster works as well as a non-mixed one. In that case we need a list of tests
 that is enough to cover all corner cases.

My opinion is that testing and backwards compatibility is ultimately the 
responsibility of the plugin. As such, the plugin vendor should not be 
restricted in terms of what it needs/must do, but indicate through 
documentation what its capabilities are to set expectations with 
customers/users.

 
 Ideas are welcome.
 
 [1] https://review.openstack.org/#/c/98260/
 [2] https://review.openstack.org/#/c/87723/
 [3] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
 
 Thanks,
 Andrew.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 

Re: [openstack-dev] Jenkins faillure

2014-06-25 Thread Édouard Thuleau
Yes, the wrong mailing list.
Sorry for the noise.

Édouard.


On Wed, Jun 25, 2014 at 6:18 PM, Anita Kuno ante...@anteaya.info wrote:

 On 06/25/2014 12:07 PM, Édouard Thuleau wrote:
  Hi,
 
  I got a jenkins failure on that small fix [1] on OpenContrail.
  Here the last lines console output:
 
  2014-06-25 07:02:55
  RunUnitTest([build/debug/bgp/rtarget/test/rtarget_table_test.log],
  [build/debug/bgp/rtarget/test/rtarget_table_test])
  2014-06-25 07:02:56
 
 /home/jenkins/workspace/ci-contrail-controller-unittest/repo/build/debug/bgp/rtarget/test/rtarget_table_test
  FAIL
  2014-06-25 07:02:56 scons: ***
  [build/debug/bgp/rtarget/test/rtarget_table_test.log] Error -4
  2014-06-25 07:02:56 scons: building terminated because of errors.
  2014-06-25 07:02:59 Build step 'Execute shell' marked build as failure
  2014-06-25 07:03:00 Finished: FAILURE
 
  I don't think that failure is related to my patch. What I can do?
 
  [1] https://review.opencontrail.org/#/c/526/
 
  Regards,
  Édouard.
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 This seems to be a gerrit system that is not OpenStack's gerrit system.
 Have you tried to evaluate this situation with the maintainers of this
 gerrit system?

 There are many reasons it might fail but the maintainers of the gerrit
 system you are using is probably the best place to begin.

 If this is a system question that relates to a third party ci system
 that interacts with OpenStack's gerrit, please post to the infra mailing
 list at openstack-in...@lists.openstack.org

 Thanks Édouard,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Should TLS settings for listener be set through separate API/model?

2014-06-25 Thread Brandon Logan
Hi Stephen, 

The entityentityassociations table name is consistent with the rest
of neutron's table names, as is not breaking the table name words up by
an underscore.  I think this stems from the sqlalchemy models getting
the table name for free because of inheriting from a base model that
derives the table name based on the model's class name.

However, with markmcclain's blessing the new loadbalancing tables will
be prefixed with lbaas_, but the model names will be LoadBalancer,
Listener, etc.

I would agree though that since sni will not be a separate table, it
will be a bit odd to have an association table's name implying a
join of a table that doesn't exist.

Thanks,
Brandon

On Wed, 2014-06-25 at 09:55 -0700, Stephen Balukoff wrote:
 What's the point of putting off a potential name change to the actual
 code (where you're going to see more friction because names in the
 code do not match names in the spec, and this becomes a point where
 confusion can happen). I understand the idea that code may not exactly
 match the spec, but when it's obvious that it should, why use the
 wrong name in the spec?
 
 
 Isn't it more confusing when the API does not match database object
 names when it's clear the API is specifically meant to manipulate
 those database objects?
 
 
 Is that naming convention actually documented anywhere? And why are
 you calling it a 'listenersniassociations'? There is no SNI object
 in the database. (IMO, this is a terrible name that needs to be
 re-read three times just to pick out where the word breaks should be!
 As written it looks like 'Listeners NI Associations', and what the heck is
 an 'NI'?)
 
 
 They say that there are two hard problems in Computer Science:
 * Cache invalidation
 * Naming things
 * Off-by-one errors
 
 
 And far be it from me to pick nits about a name (OK, I guess it
 isn't that far-fetched for me to pick nits. :P ), but it's hard for me
 to imagine a worse name than 'listenersniassociations' being
 considered. :P
 
 
 Stephen
 
 
 
 
 On Wed, Jun 25, 2014 at 2:05 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 Hi folks
 
  
 
 Regarding names, there are two types of them: new API
 attributes for the REST call, and new column and table names
 for the database.
 
 When creating a listener, 2 new attributes will be added to the
 REST call API: 
 
 1.  default_tls_container_id - Barbican TLS container uuid
 
 2.  sni_container_ids (I removed the “_list” part to make
 it shorter) – ordered list of Barbican TLS container uuids
 
 For the database, these will be translated to:
 
 1.  default_tls_container_id - new column for the listeners
 table
 
 2.  listenersniassociations (changed from
 vipsniassociations, which was a mistake) – new associations
 table, holding: id (generated), listener_id, TLS_container_id,
 and position (for ordering)
 
 This kind of name is meant to comply with the current neutron table
 name convention, like poolmonitorassociation or
 providerresourceassociation.
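
 To make that concrete, a sketch of what the association model might look like
 (base classes and column types are assumptions, not final code):

     import sqlalchemy as sa

     from neutron.db import model_base
     from neutron.db import models_v2


     class ListenerSNIAssociation(model_base.BASEV2, models_v2.HasId):
         """Maps a listener to an ordered list of Barbican TLS containers."""

         __tablename__ = 'listenersniassociations'

         listener_id = sa.Column(sa.String(36),
                                 sa.ForeignKey('listeners.id'),
                                 nullable=False)
         tls_container_id = sa.Column(sa.String(36), nullable=False)
         position = sa.Column(sa.Integer)  # preserves SNI ordering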
 
  
 
 I think names may always be an issue for the actual code
 review; the document is just a functional specification.
 
 Since the new object model code has not landed yet, naming
 conventions may be changed while implementing this spec.
 
 I will commit the document with all comments addressed and
 the names mentioned above.
 
 Please review it and give your feedback, I think we are close
 to completing this one :)
 
  
 
 Thanks,
 
 Evg
 
  
 
  
 
  
 
 From: Vijay Venkatachalam
 [mailto:vijay.venkatacha...@citrix.com] 
 Sent: Wednesday, June 25, 2014 8:34 AM
 
 
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS
 settings for listener be set through separate API/model?
  
 
 Thanks for the details Evg!
 
  
 
 I understand there was no TLS settings API originally planned.
 
  
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Wednesday, June 25, 2014 5:46 AM
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Should TLS
 settings for listener be set through separate API/model?
 
 
  
 
 Evgeny--
 
  
 
 
 Two minor nits:
 
 
  
 
 
 * Your spec lists the new SNI related 

[openstack-dev] DVR and FWaaS integration

2014-06-25 Thread Yi Sun
All,
During last summit, we were talking about the integration issues between
DVR and FWaaS. After the summit, I had one IRC meeting with DVR team. But
after that meeting I was tight up with my work and did not get time to
continue to follow up the issue. To not slow down the discussion, I'm
forwarding out the email that I sent out as the follow up to the IRC
meeting here, so that whoever may be interested on the topic can continue
to discuss about it.

First some background about the issue:
In the normal case, FW and router are running together inside the same box
so that FW can get route and NAT information from the router component. And
in order to have FW to function correctly, FW needs to see the both
directions of the traffic.
DVR is designed in an asymmetric way that each DVR only sees one leg of the
traffic. If we build FW on top of DVR, then FW functionality will be
broken. We need to find a good method to have FW to work with DVR.

---forwarding email---
 During the IRC meeting, we thought that we could force the traffic to the FW
before DVR. Vivek had more detail; he thinks that since the br-int knows
whether a packet is routed or switched, it is possible for the br-int to
forward traffic to the FW before it forwards to DVR. The whole forwarding
process can be operated as part of service-chain operation. And there could
be a FWaaS driver that understands the DVR configuration to set up OVS flows
on the br-int.
The concern is that normally the firewall and router are integrated together so
that the firewall can make the right decision based on the routing result. But what
we are suggesting is to split the firewall and router into two separate
components, hence there could be issues. For example, the FW will not be able
to get enough information to set up zones. Normally a zone contains a group of
interfaces that can be used in the firewall policy to enforce the direction
of the policy. If we forward traffic to the firewall before DVR, then we can
only create policy based on subnets, not on interfaces.
Also, I’m not sure if we have ever planned to support SNAT on the DVR, but
if we do, then depending on at which point we forward traffic to the FW,
the subnet may not even work for us anymore (even DNAT could have problems
too).
Another thing that I need to get more detail on is how we handle
overlapping subnets; it seems that new namespaces are required.

--- end of forwarding 

YI
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] tempest failure on image creation time out

2014-06-25 Thread Manickam, Kanagaraj

While building the patch in Jenkins, following exception reported in tempest.



2014-06-25 19:09:19.009 |
2014-06-25 19:09:19.010 | setUpClass (tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
2014-06-25 19:09:19.010 | ---
2014-06-25 19:09:19.010 |
2014-06-25 19:09:19.010 | Captured traceback:
2014-06-25 19:09:19.010 | ~~~
2014-06-25 19:09:19.010 | Traceback (most recent call last):
2014-06-25 19:09:19.010 |   File tempest/api/compute/images/test_list_image_filters.py, line 52, in setUpClass
2014-06-25 19:09:19.010 |     cls.server2['id'], wait_until='ACTIVE')
2014-06-25 19:09:19.010 |   File tempest/api/compute/base.py, line 326, in create_image_from_server
2014-06-25 19:09:19.010 |     kwargs['wait_until'])
2014-06-25 19:09:19.010 |   File tempest/services/compute/xml/images_client.py, line 140, in wait_for_image_status
2014-06-25 19:09:19.011 |     waiters.wait_for_image_status(self, image_id, status)
2014-06-25 19:09:19.011 |   File tempest/common/waiters.py, line 143, in wait_for_image_status
2014-06-25 19:09:19.011 |     raise exceptions.TimeoutException(message)
2014-06-25 19:09:19.011 | TimeoutException: Request timed out
2014-06-25 19:09:19.011 | Details: (ListImageFiltersTestXML:setUpClass) Image 49211f6e-8771-4e07-a581-901cda3ad45b failed to reach ACTIVE status within the required time (196 s). Current status: SAVING.


Anyone facing this issue?

Ref: 
http://logs.openstack.org/82/92782/17/check/check-tempest-dsvm-full/8aa24c6/console.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] tempest failure on image creation time out

2014-06-25 Thread David Shrewsbury
On Wed, Jun 25, 2014 at 4:16 PM, Manickam, Kanagaraj 
kanagaraj.manic...@hp.com wrote:



 While building the patch in Jenkins, following exception reported in
 tempest.





 2014-06-25 19:09:19.009 |
 2014-06-25 19:09:19.010 | setUpClass (tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML)
 2014-06-25 19:09:19.010 | ---
 2014-06-25 19:09:19.010 |
 2014-06-25 19:09:19.010 | Captured traceback:
 2014-06-25 19:09:19.010 | ~~~
 2014-06-25 19:09:19.010 | Traceback (most recent call last):
 2014-06-25 19:09:19.010 |   File tempest/api/compute/images/test_list_image_filters.py, line 52, in setUpClass
 2014-06-25 19:09:19.010 |     cls.server2['id'], wait_until='ACTIVE')
 2014-06-25 19:09:19.010 |   File tempest/api/compute/base.py, line 326, in create_image_from_server
 2014-06-25 19:09:19.010 |     kwargs['wait_until'])
 2014-06-25 19:09:19.010 |   File tempest/services/compute/xml/images_client.py, line 140, in wait_for_image_status
 2014-06-25 19:09:19.011 |     waiters.wait_for_image_status(self, image_id, status)
 2014-06-25 19:09:19.011 |   File tempest/common/waiters.py, line 143, in wait_for_image_status
 2014-06-25 19:09:19.011 |     raise exceptions.TimeoutException(message)
 2014-06-25 19:09:19.011 | TimeoutException: Request timed out
 2014-06-25 19:09:19.011 | Details: (ListImageFiltersTestXML:setUpClass) Image 49211f6e-8771-4e07-a581-901cda3ad45b failed to reach ACTIVE status within the required time (196 s). Current status: SAVING.





 Any one facing this issue?



 Ref:
 http://logs.openstack.org/82/92782/17/check/check-tempest-dsvm-full/8aa24c6/console.html



https://bugs.launchpad.net/nova/+bug/1320617

It's also sometimes helpful to check the elastic recheck status page for
common bugs/issues
that people are hitting. This one happens to appear near the top:

http://status.openstack.org/elastic-recheck/

--
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] 0 byte image is created with instance snapshot if instance is booted using volume

2014-06-25 Thread Michael Still
This sounds like something which should be reported as a bug. You do
that at https://bugs.launchpad.net/nova/+filebug

Cheers,
Michael

On Thu, Jun 26, 2014 at 1:43 AM, Agrawal, Ankit
ankit11.agra...@nttdata.com wrote:
 Hi All,



 When I boot an instance from a volume and then take a snapshot of that instance,
 it creates a volume snapshot and also a 0-byte image.

 I can boot a new instance using this 0-byte image, but I am not sure why
 this 0-byte image is created here.



 Can someone please help me to understand this.



 Thanks,

 Ankit Agrawal


 __
 Disclaimer:This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][Sahara] Question about PanelGroups and Panels in subdirectories of a given dashboard

2014-06-25 Thread Chad Roberts
Here is the scenario:  We are looking to merge the Sahara (Data Processing) 
dashboard into horizon.  The 9 panels will live in a PanelGroup under Project 
called Data Processing.  In the spirit of code organization, it was suggested 
that I put all 9 of the data processing panels into a subdirectory under 
project.  The following patch shows you what we currently have 
(https://review.openstack.org/#/c/91118/ ).

Organizing the code this way has led me to file a couple of bugs.
1) https://bugs.launchpad.net/horizon/+bug/1329050  (The PanelGroup shows up as 
Other rather than Data Processing)
2) https://bugs.launchpad.net/horizon/+bug/1333739  (The panels within the 
group show up in a random order each time I launch horizon)

A bit more looking around led me to discover that if I change the slugs of 
each of the panels to data_processing.panel name, I wind up 
eliminating the symptoms of both bugs described above.  That's a good 
thing, right? (But is it the intent of the current code to work this way?)
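Roughly, the slug change I'm describing looks like this on one of the panels
(just a sketch to show the idea, not the exact patch; the class and display
names here are illustrative):

# openstack_dashboard/dashboards/project/data_processing/plugins/panel.py
from django.utils.translation import ugettext_lazy as _

import horizon

from openstack_dashboard.dashboards.project import dashboard


class Plugins(horizon.Panel):
    name = _("Plugins")
    # Prefixing the slug with the subdirectory name is the change that
    # made the grouping and ordering behave for me.
    slug = "data_processing.plugins"


dashboard.Project.register(Plugins)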

It turns out that while the panels show up in the correct order in the Data 
Processing panel group, none of the templates were found.  Doh!  That's a 
problem.  So, I did some digging there as well and here is what I 
found... spoiler alert: I came up with a workaround, but I'm afraid it might be 
viewed as ugly.

A little background:
When we register a panel, the following code gets called (in 
horizon/base.py)

    def register(cls, panel):
        """Registers a :class:`~horizon.Panel` with this dashboard."""
        panel_class = Horizon.register_panel(cls, panel)
        # Support template loading from panel template directories.
        panel_mod = import_module(panel.__module__)
        panel_dir = os.path.dirname(panel_mod.__file__)
        template_dir = os.path.join(panel_dir, 'templates')
        if os.path.exists(template_dir):
            key = os.path.join(cls.slug, panel.slug)
            loaders.panel_template_dirs[key] = template_dir
        return panel_class

That sets up our loader.panel_template_dirs with the following key/value pair 
(using my data_processing.plugins panel as the example here).
key:  project/data_processing.plugins
value: 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/plugins/templates

When we go to render the template, we eventually call the following (from 
horizon/loaders.py)...
def get_template_sources(self, template_name):
    bits = template_name.split(os.path.sep, 2)
    if len(bits) == 3:
        dash_name, panel_name, remainder = bits
        key = os.path.join(dash_name, panel_name)
        if key in panel_template_dirs:
            template_dir = panel_template_dirs[key]
            try:
                yield safe_join(template_dir, panel_name, remainder)


In order for that function to be able to find the correct value that we stored 
in loader.panel_template_dirs earlier, we need to reference our template like 
this (from my panel's views.py)...
template_name = 'project/data_processing.plugins/plugins.html'

That gives us (inside of get_template_sources)...
dash_name = project
panel_name = data_processing.plugins
remainder = plugins.html

Nothing too bad so far really.
The slightly ugly part (to me) is when we eventually join template_dir (which 
is the value from loader.panel_template_dirs) with panel_name and remainder.
Doing so, gives us a template location of 
/home/croberts/src/horizon/openstack_dashboard/dashboards/project/data_processing/plugins/templates/data_processing.plugins/plugins.html

The part that seems ugly to me is that we need a data_processing.plugins 
subdirectory rather than just plugins.  Is that really the desired directory 
name, or should some changes be made to eliminate the data_processing. from 
that directory name [ie:  the directory structure would look like every other 
panel that is defined (none of them are in a subdirectory like this one)]

Please give me your thoughts on this.  

Are there one or more bugs in play here?
Is there something that needs enhancement in order to work better with nested 
panel groups/panels?
Should I just work within the current code and create my 
template/data_processing.panel_name template directories?

Thanks,
Chad

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Kevin Benton
I'm not sure what you mean about it being 'truly committed' even though the
transaction has ended. When would it be committed? There are no more calls
to sqlalchemy after that. The only way it wouldn't be committed is if
another transaction was started before calling create_port.

Regardless, a db lock wait exception does not indicate anything about the
state of a particular transaction. Transactions do not lock tables on their
own. A lock has to be explicitly set with a with_lockmode('update') on a
query.
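For example, if you actually needed a row locked for update, you would have to
ask for it explicitly, roughly like this (just a sketch to illustrate the
explicit lock; the attribute being changed is a placeholder):

with session.begin(subtransactions=True):
    binding = (session.query(models.PortBinding).
               filter_by(port_id=port_id).
               with_lockmode('update').
               one())
    # the row stays locked until this transaction commits
    binding.vif_type = new_vif_type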

I think you will probably need to share some sample code with the mailing
list demonstrating what you are trying to do.

--
Kevin Benton


On Wed, Jun 25, 2014 at 3:21 AM, Li Ma skywalker.n...@gmail.com wrote:

 Hi Kevin,

 Thanks for your reply. Actually, it is not that straightforward.
 Even if postcommit is outside the 'with' statement, the transaction is not
 'truly' committed immediately. Because when I put my db code (reading and
 writing ml2-related tables) in postcommit, db lock wait exception is still
 thrown.

 Li Ma

 - Original Message -
 From: Kevin Benton blak...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, June 25, 2014, 4:59:26 PM
 Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when
 developing new mechanism driver



 The post_commit methods occur outside of the transactions. You should be
 able to perform the necessary database calls there.


 If you look at the code snippet in the email you provided, you can see
 that the 'try' block surrounding the postcommit method is at the same
 indentation-level as the 'with' statement for the transaction so it will be
 closed at that point.


 Cheers,
 Kevin Benton


 --
 Kevin Benton



 On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote:


 Hi all,

 I'm developing a new mechanism driver. I'd like to access ml2-related
 tables in create_port_precommit and create_port_postcommit. However I find
 it hard to do that because the two functions are both inside an existed
 database transaction defined in create_port function of ml2/plugin.py.

 The related code is as follows:

 def create_port(self, context, port):
 ...
 session = context.session
 with session.begin(subtransactions=True):
 ...
 self.mechanism_manager.create_port_precommit(mech_context)
 try:
 self.mechanism_manager.create_port_postcommit(mech_context)
 ...
 ...
 return result

 As a result, I need to carefully deal with the database nested transaction
 issue to prevent from db lock when I develop my own mechanism driver. Right
 now, I'm trying to get the idea behind the scene. Is it possible to
 refactor it in order to make precommit and postcommit out of the db
 transaction? I think it is perfect for those who develop mechanism driver
 and do not know well about the functioning context of the whole ML2 plugin.

 Thanks,
 Li Ma

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Kevin Benton
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-25 Thread Clark, Robert Graham

Ok, I’ll hack together a dev plugin over the next week or so, other work
notwithstanding. Where possible I’ll probably borrow from the Dogtag
plugin as I’ve not looked closely at the plugin infrastructure in Barbican
recently.

Is this something you’d like a blueprint for first?

-Rob




On 25/06/2014 18:30, Ade Lee a...@redhat.com wrote:

I think the plan is to create a Dogtag instance so that integration
tests can be run whenever code is checked in (both with and without a
Dogtag backend).

Dogtag isn't that difficult to deploy, but being a Java app, it does
bring in a set of dependencies that developers may not want to deal with
for basic/ devstack testing.

So, I agree that a simple OpenSSL CA may be useful at least initially as
a 'dev' plugin.
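Even something that just shells out to the openssl CLI would probably be enough
to exercise the certificate order flow, e.g. (a rough sketch only, not tied to
the real plugin interface; file names and subjects are made up):

import subprocess

def make_dev_ca(ca_key='ca.key', ca_cert='ca.crt'):
    # one-time self-signed CA key and certificate
    subprocess.check_call(['openssl', 'genrsa', '-out', ca_key, '2048'])
    subprocess.check_call(['openssl', 'req', '-x509', '-new', '-nodes',
                           '-key', ca_key, '-days', '365',
                           '-subj', '/CN=Barbican Dev CA',
                           '-out', ca_cert])

def sign_csr(csr_file, ca_key='ca.key', ca_cert='ca.crt',
             out_cert='server.crt'):
    # sign an incoming CSR with the dev CA
    subprocess.check_call(['openssl', 'x509', '-req', '-in', csr_file,
                           '-CA', ca_cert, '-CAkey', ca_key,
                           '-CAcreateserial', '-days', '90',
                           '-out', out_cert])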

Ade

On Wed, 2014-06-25 at 16:31 +, Jarret Raim wrote:
 Rob,
 
 RedHat is working on a backend for Dogtag, which should be capable of
 doing something like that. That's still a bit hard to deploy, so it
would
 make sense to extend the 'dev' plugin to include those features.
 
 
 Jarret
 
 
 On 6/24/14, 4:04 PM, Clark, Robert Graham robert.cl...@hp.com wrote:
 
 Yeah pretty much.
 
 That's something I'd be interested to work on, if work isn't ongoing
 already.
 
 -Rob
 
 
 
 
 
 On 24/06/2014 18:57, John Wood john.w...@rackspace.com wrote:
 
 Hello Robert,
 
 I would actually hope we have a self-contained certificate plugin
 implementation that runs 'out of the box' to enable certificate
 generation orders to be evaluated and demo-ed on local boxes.
 
 Is this what you were thinking though?
 
 Thanks,
 John
 
 
 
 
 From: Clark, Robert Graham [robert.cl...@hp.com]
 Sent: Tuesday, June 24, 2014 10:36 AM
 To: OpenStack List
 Subject: [openstack-dev] [Barbican] Barebones CA
 
 Hi all,
 
 I'm sure this has been discussed somewhere and I've just missed it.
 
 Is there any value in creating a basic 'CA' and plugin to satisfy
 tests/integration in Barbican? I'm thinking something that probably
 performs OpenSSL certificate operations itself, ugly but perhaps
useful
 for some things?
 
 -Rob
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spec Review Day Today!

2014-06-25 Thread Russell Bryant
On 06/25/2014 05:23 AM, John Garbutt wrote:
 As previously (quietly) announced, today we are trying to do a push on
 nova-specs reviews.
 
 https://review.openstack.org/#/q/status:open+project:openstack/nova-specs,n,z
 
 The hope is we get through some of the backlog, with some interactive
 chat on IRC in #openstack-nova
 
 If someone has better stats on our nova-spec reviews, do respond with
 a link, and that would be appreciated. We need to track reviews that
 need reviewer attention vs submitter attention.

It's the end of my day at least (there's still some time left for those
in the western US).

The majority of specs are waiting on an update from the submitter.  I
didn't grab these stats before today, but I believe we made some good
progress.

Using: $ openreviews -u russellb -p projects/nova-specs.json

 Projects: [u'nova-specs']
-- Total Open Reviews: 111
-- Waiting on Submitter: 87
-- Waiting on Reviewer: 24

The wait times still aren't amazing though.  There's a pretty big
increase in wait time as you look at older ones.  It seems there's a set
we've all been avoiding for one reason or another that deserves some sort
of answer, even if the answer is that we're just not interested enough.

-- Stats since the latest revision:
 Average wait time: 12 days, 11 hours, 30 minutes
 1st quartile wait time: 2 days, 9 hours, 5 minutes
 Median wait time: 8 days, 6 hours, 6 minutes
 3rd quartile wait time: 21 days, 1 hours, 21 minutes
 Number waiting more than 7 days: 12
-- Stats since the last revision without -1 or -2 :
 Average wait time: 13 days, 1 hours, 8 minutes
 1st quartile wait time: 2 days, 9 hours, 5 minutes
 Median wait time: 9 days, 2 hours, 32 minutes
 3rd quartile wait time: 21 days, 1 hours, 21 minutes
-- Stats since the first revision (total age):
 Average wait time: 42 days, 18 hours, 44 minutes
 1st quartile wait time: 14 days, 0 hours, 27 minutes
 Median wait time: 50 days, 12 hours, 9 minutes
 3rd quartile wait time: 75 days, 6 hours, 25 minutes

Another stat we can look at is how many specs we merged:

https://review.openstack.org/#/q/project:openstack/nova-specs+status:merged,n,z

It looks like we've merged 11 specs in the last 24 hours or so, and only
about 5 more if we look at the last week.

Finally, we can look at review stats:  Here's review numbers for the
last week:

Using: $ reviewers -u russellb -p projects/nova-specs.json -d 7

Reviews for the last 7 days in nova-specs
** -- nova-specs-core team member
+-+---++
|   Reviewer  | Reviews   -2  -1  +1  +2  +A+/- % | Disagreements* |
+-+---++
|   jogo **   |  393  21   0  15   838.5% |    2 (  5.1%)  |
|   danms **  |  240  22   1   1   0 8.3% |    0 (  0.0%)  |
| russellb ** |  201   8   0  11   355.0% |    1 (  5.0%)  |
|  philip-day |  180  10   8   0   044.4% |    4 ( 22.2%)  |
|johngarbutt **   |  160  10   0   6   237.5% |    2 ( 12.5%)  |
|   jaypipes  |  140   6   8   0   057.1% |    2 ( 14.3%)  |
|  alaski **  |  120   5   1   6   258.3% |    1 (  8.3%)  |
|sdague   |   90   7   2   0   022.2% |    0 (  0.0%)  |
|   berrange  |   70   4   0   3   042.9% |    0 (  0.0%)  |
|sbauza   |   70   5   2   0   028.6% |    2 ( 28.6%)  |
|   jichenjc  |   60   1   5   0   083.3% |    3 ( 50.0%)  |
| xuhj|   50   3   2   0   040.0% |    1 ( 20.0%)  |
|mikalstill **|   50   2   0   3   160.0% |    0 (  0.0%)  |
|   sgordon   |   30   0   3   0   0   100.0% |    1 ( 33.3%)  |
|kiwik|   30   2   1   0   033.3% |    1 ( 33.3%)  |
|   oomichi   |   30   3   0   0   0 0.0% |    0 (  0.0%)  |
|dpamio   |   30   0   3   0   0   100.0% |    0 (  0.0%)  |
|  gd-mdorman |   20   0   2   0   0   100.0% |    1 ( 50.0%)  |
|   cyeoh-0   |   20   0   2   0   0   100.0% |    0 (  0.0%)  |
|   cbehrens  |   20   1   1   0   050.0% |    0 (  0.0%)  |
|lxsli|   20   0   2   0   0   100.0% |    1 ( 50.0%)  |
|  geekinutah |   20   1   1   0   050.0% |    0 (  0.0%)  |
|   pczesno   |   20   0   2   0   0   100.0% |    1 ( 50.0%)  |
|santibaldassin   |   10   0   1   0   0   100.0% |    0 (  0.0%)  |
| ptm |   10   1   0   0   0 0.0% |    0 (  

Re: [openstack-dev] [QA] Questions about test policy for scenario test

2014-06-25 Thread Fei Long Wang
Good to know. I think it's a good idea to implement a common compute
verifier that runs after instances are booted. Maybe we can define different
checking levels so that it can be leveraged by different test cases. I will see
what I can do.
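As a starting point I'm picturing something very small, e.g. checking that an
instance actually answers over SSH once it goes ACTIVE. A sketch (using
paramiko directly here just for illustration; host, user and key are
placeholders):

import paramiko

def instance_is_reachable(ip, username, private_key_file, timeout=60):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(ip, username=username,
                       key_filename=private_key_file, timeout=timeout)
        stdin, stdout, stderr = client.exec_command('hostname')
        return stdout.channel.recv_exit_status() == 0
    except Exception:
        return False
    finally:
        client.close()

A stricter checking level could then add things like metadata access or volume
visibility on top of this basic reachability test.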

On 24/06/14 22:27, Sean Dague wrote:
 On 06/24/2014 01:29 AM, Fei Long Wang wrote:
 Greetings,

 We're leveraging the scenario test of Tempest to do the end-to-end
 functional test to make sure everything work great after upgrade,
 patching, etc. And We're happy to fill the gaps we found. However, I'm a
 little bit confused about the test policy from the scenario test
 perspective, especially comparing with the API test. IMHO, scenario test
 will cover some typical work flows of one specific service or mixed
 services, and it would be nice to make sure the function is really
 working instead of just checking the object status from OpenStack
 perspective. Is that correct?

 For example, live migration of Nova, it has been covered in API test of
 Tempest (see
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/test_live_block_migration.py).
 But as you see, it just checks if the instance is Active or not instead
 of checking if the instance can be login/ssh successfully. Obviously,
 from an real world view, we'd like to check if it's working indeed. So
 the question is, should this be improved? If so, the enhanced code
 should be in API test, scenario test or any other places? Thanks you.
 The fact that computes aren't verified fully during the API testing is
 mostly historical. I think they should be. The run_ssh flag used to be
 used for this, however because of some long standing race conditions in
 the networking stack, that wasn't able to be turned on in upstream
 testing. My guess is that it's rotted now.

 We've had some conversations in the QA team about a compute verifier
 that would be run after any of the compute jobs to make sure they booted
 correctly, and more importantly, did a very consistent set of debug
 capture when they didn't. Would be great if that's something you'd like
 to help out with.

   -Sean



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers  Best regards,
Fei Long Wang (???)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] [trove] Configuration option descriptions

2014-06-25 Thread Nikhil Manchanda

Hi Anne:

Thanks for bringing this to our attention! This is definitely something
that we need to fix.

I've filed https://bugs.launchpad.net/trove/+bug/1334465 to track this
issue, and I'm hoping we'll be able to get to it during juno-2 (or
beginning of juno-3 at the latest).

Once we've got a patch up, I'll probably run it by some doc folks to
make sure that the help text for the options reads okay. Would
appreciate the feedback here.

Thanks,
Nikhil

Anne Gentle writes:

 Hi swift and trove devs,
 In working on our automation to document the configuration options across
 OpenStack, we uncovered a deficit in both trove and swift configuration
 option descriptions.

 Here's an example:
 http://docs.openstack.org/trunk/config-reference/content/container-sync-realms-configuration.html
 You'll notice that many of those options do not have help text. We need
 swift developers to fill in those in your code base so we can continue to
 generate this document.

 Trove devs, we are finding similar gaps in config option information in
 your source code as well. Please make it a priority prior to our automating
 those in the docs.

 If you have any questions, let us know through the openstack-docs mailing
 list.
 Thanks,
 Anne
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread loy wolfe
On Wed, Jun 25, 2014 at 10:29 PM, McCann, Jack jack.mcc...@hp.com wrote:

   If every compute node is
   assigned a public ip, is it technically able to improve SNAT packets
   w/o going through the network node ?

 It is technically possible to implement default SNAT at the compute node.

 One approach would be to use a single IP address per compute node as a
 default SNAT address shared by all VMs on that compute node.  While this
 optimizes for number of external IPs consumed per compute node, the
 downside
 is having VMs from different tenants sharing the same default SNAT IP
 address
 and conntrack table.  That downside may be acceptable for some deployments,
 but it is not acceptable in others.

 In fact, it is only acceptable in some very special cases.




 Another approach would be to use a single IP address per router per compute
 node.  This avoids the multi-tenant issue mentioned above, at the cost of
 consuming more IP addresses, potentially one default SNAT IP address for
 each
 VM on the compute server (which is the case when every VM on the compute
 node
 is from a different tenant and/or using a different router).  At that point
 you might as well give each VM a floating IP.

 Hence the approach taken with the initial DVR implementation is to keep
 default SNAT as a centralized service.


In contrast to moving services out to distributed compute nodes, we should take
care to keep some of them centralized, especially FIP and FW. I know a lot of
customers prefer using dedicated servers to act as network nodes, which
have more NICs (for external connections) than compute nodes; in these cases
FIP must be centralized instead of distributed. As for FW, if we want
stateful ACLs then DVR can do nothing, unless we consider security groups
to already be some kind of FW.




 - Jack

  -Original Message-
  From: Zang MingJie [mailto:zealot0...@gmail.com]
  Sent: Wednesday, June 25, 2014 6:34 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut
 
  On Wed, Jun 25, 2014 at 5:42 PM, Yongsheng Gong gong...@unitedstack.com
 wrote:
   Hi,
   for each compute node to have SNAT to Internet, I think we have the
   drawbacks:
   1. SNAT is done in router, so each router will have to consume one
 public IP
   on each compute node, which is money.
 
  SNAT can save more ips than wasted on floating ips
 
   2. for each compute node to go out to Internet, the compute node will
 have
   one more NIC, which connect to physical switch, which is money too
  
 
  Floating ip also need a public NIC on br-ex. Also we can use a
  separate vlan to handle the network, so this is not a problem
 
   So personally, I like the design:
floating IPs and 1:N SNAT still use current network nodes, which will
 have
   HA solution enabled and we can have many l3 agents to host routers. but
   normal east/west traffic across compute nodes can use DVR.
 
  BTW, does HA implementation still active ? I haven't seen it has been
  touched for a while
 
  
   yong sheng gong
  
  
   On Wed, Jun 25, 2014 at 4:30 PM, Zang MingJie zealot0...@gmail.com
 wrote:
  
   Hi:
  
   In current DVR design, SNAT is north/south direction, but packets have
   to go west/east through the network node. If every compute node is
   assigned a public ip, is it technically able to improve SNAT packets
   w/o going through the network node ?
  
   SNAT versus floating ips, can save tons of public ips, in trade of
   introducing a single failure point, and limiting the bandwidth of the
   network node. If the SNAT performance problem can be solved, I'll
   encourage people to use SNAT over floating ips. unless the VM is
   serving a public service
  
   --
   Zang MingJie
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
Here's a code sample which can raise a db lock wait exception:

def create_port_postcommit(self, context):

    port_id = ...
    with session.begin(subtransactions=True):
        try:
            binding = (session.query(models.PortBinding).
                       filter(models.PortBinding.port_id.startswith(port_id)).
                       one())
            # Here I modify some attributes if the port binding already exists
            session.merge(query)
        except exc.NoResultFound:
            # Here I insert a new port binding record to initialize some attributes
        except ...
            LOG.error('error happened')

The exception is as follows:
2014-06-25 10:05:17.195 9915 ERROR neutron.plugins.ml2.managers 
[req-961680da-ce69-43c6-974c-57132def411d None] Mechanism driver 'hello' failed 
in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last):
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/managers.py, line 158, 
in _call_on_drivers
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/mech_hello.py, 
line 95, in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers {'port_id': 
port_id})
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 402, in __exit__
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.commit()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 314, in commit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self._prepare_impl()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 298, in _prepare_impl
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.session.flush()

...

2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') 'INSERT INTO ml2_port_bindings (port_id, host, 
vnic_type, profile, vif_type, vif_details, driver, segment) VALUES (%s, %s, %s, 
%s, %s, %s, %s, %s)' (...)

It seems that the transaction in the postcommit cannot be committed.

Thanks a lot,
Li Ma

- Original Message -
From: Li Ma skywalker.n...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, June 25, 2014, 6:21:10 PM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver

Hi Kevin,

Thanks for your reply. Actually, it is not that straightforward.
Even if postcommit is outside the 'with' statement, the transaction is not 
'truly' committed immediately. Because when I put my db code (reading and 
writing ml2-related tables) in postcommit, db lock wait exception is still 
thrown.

Li Ma

- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, June 25, 2014, 4:59:26 PM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver



The post_commit methods occur outside of the transactions. You should be able 
to perform the necessary database calls there. 


If you look at the code snippet in the email you provided, you can see that the 
'try' block surrounding the postcommit method is at the same indentation-level 
as the 'with' statement for the transaction so it will be closed at that point. 


Cheers, 
Kevin Benton 


-- 
Kevin Benton 



On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Hi all, 

I'm developing a new mechanism driver. I'd like to access ml2-related tables in 
create_port_precommit and create_port_postcommit. However I find it hard to do 
that because the two functions are both inside an existed database transaction 
defined in create_port function of ml2/plugin.py. 

The related code is as follows: 

def create_port(self, context, port): 
... 
session = context.session 
with session.begin(subtransactions=True): 
... 
self.mechanism_manager.create_port_precommit(mech_context) 
try: 
self.mechanism_manager.create_port_postcommit(mech_context) 
... 
... 
return result 

As a result, I need to carefully deal with the database nested transaction 
issue to prevent from db lock when I develop my own mechanism driver. Right 
now, I'm trying to get the idea behind the scene. Is it possible to refactor it 
in order to make 

[openstack-dev] Fwd: [Openstack] Glance - and the use of the project_id:%(project_id) rule

2014-06-25 Thread Scott Devoid
?

-- Forwarded message --
From: Michael Hearn mrhe...@gmail.com
Date: Fri, May 2, 2014 at 9:21 AM
Subject: [Openstack] Glance - and the use of the project_id:%(project_id)
rule
To: openst...@lists.openstack.org openst...@lists.openstack.org


Having played with the policies and rules within glance's policy.json file,
I have not had any success using the rule project_id:%(project_id) to
restrict API usage.
Without changing user/role/tenant, I have had success using
project_id:%(project_id) with cinder.
I cannot find anything to suggest glance's policy engine cannot parse the
rule, but would like confirmation.
Can anyone verify this?
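For reference, the kind of entries I'm trying to use look like this
(illustrative only, not my full policy.json; the action names are just
examples):

{
    "add_image": "project_id:%(project_id)s",
    "get_image": "project_id:%(project_id)s"
}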

This is using icehouse, glance 0.12.0

~Mike



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Solum][Mistral] New class of requirements for Stackforge projects

2014-06-25 Thread Angus Salkeld
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 25/06/14 15:13, Clark Boylan wrote:
 On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto adrian.o...@rackspace.com 
 wrote:
 Hello,

 Solum has run into a constraint with the current scheme for requirements 
 management within the OpenStack CI system. We have a proposal for dealing 
 with this constraint that involves making a contribution to openstack-infra. 
 This message explains the constraint, and our proposal for addressing it.

 == Background ==

 OpenStack uses a list of global requirements in the requirements repo[1], 
 and each project has it’s own requirements.txt and test-requirements.txt 
 files. The requirements are satisfied by gate jobs using pip configured to 
 use the pypi.openstack.org mirror, which is periodically updated with new 
 content from pypi.python.org. One motivation for doing this is that 
 pypi.python.org may not be as fast or as reliable as a local mirror. The 
 gate/check jobs for the projects use the OpenStack internal pypi mirror to 
 ensure stability.

 The OpenStack CI system will sync up the requirements across all the 
 official projects and will create reviews in the participating projects for 
 any mis-matches. Solum is one of these projects, and enjoys this feature.

 Another motivation is so that users of OpenStack will have one single set of 
 python package requirements/dependencies to install and run the individual 
 OpenStack components.

 == Problem ==

 Stackforge projects listed in openstack/requirements/projects.txt that 
 decide to depend on each other (for example, Solum wanting to list 
 mistralclient as a requirement) are unable to, because they are not yet 
 integrated, and are not listed in 
 openstack/requirements/global-requirements.txt yet. This means that in order 
 to depend on each other, a project must withdraw from projects.txt and begin 
 using pip with pypi.poython.org to satisfy all of their requirements.I 
 strongly dislike this option.

 Mistral is still evolving rapidly, and we don’t think it makes sense for 
 them to pursue integration right now. The upstream distributions who include 
 packages to support OpenStack will also prefer not to deal with a 
 requirement that will be cutting a new version every week or two in order to 
 satisfy evolving needs as Solum and other consumers of Mistral help refine 
 how it works.

 == Proposal ==

 We want the best of both worlds. We want the freedom to innovate and use new 
 software for a limited selection of stackforge projects, and still use the 
 OpenStack pypi server to satisfy my regular requirements. We want the speed 
 and reliability of using our local mirror, and users of Solum to use a 
 matching set of requirements for all the things that we use, and integrated 
 projects use. We want to continue getting the reviews that bring us up to 
 date with new requirements versions.

 We propose that we submit an enhancement to the gate/check job setup that 
 will:

 1) Begin (as it does today) by satisfying global-requirements.txt and my 
 local project’s requirements.txt and test-requirements.txt using the local 
 OpenStack pypi mirror.
 2) After all requirements are satisfied, check the name of my project. If it 
 begins with ‘stackforge/‘ then look for a stackforge-requirements.txt file. 
 If one exists, reconfigure pip to switch to use pypi.python.org, and satisfy 
 the requirements listed in the file. We will list mistralclient there, and 
 get the latest tagged/released version of that.

 I am reasonably sure that if you remove yourself from the
 openstack/requirements project list this is basically how it will
 work. Pip is configured to use the OpenStack mirror and fall back on
 pypi.python.org for packages not available on the OpenStack mirror
 [2]. So I don't think there is any work to do here with additional
 requirements files. It should just work. Adding a new requirements
 file will just make things more confusing for packagers and consumers
 of your software.

Adrian, I know this is not the optimal solution, but I think it is
the most pragmatic one (esp. given we need to progress and not be held
up by this); most stackforge projects are in the same boat as us.
As for pypi breakages, most are easily fixable by restricting the
package versions if we get an issue with a new release
of *random-api-breaking-package*.
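i.e. something along these lines in requirements.txt (version numbers invented
for illustration):

python-mistralclient>=0.0.4,<0.1   # pin below the next API-breaking release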



 == Call To Action ==

 What do you think of this approach to satisfy a balance of interests? 
 Everything remains the same for OpenStack projects, and Stackforge projects 
 get a new feature that allows them to require software that has not yet been 
 integrated. Are there even better options that we should consider?

 Thanks,

 Adrian Otto


 References:
 [1] https://review.openstack.org/openstack/requirements
 
 For what it is worth the Infra team has also been looking at
 potentially using something like bandersnatch to mirror all of pypi
 which is now a possibility because OpenStack doesn't depend on
 

Re: [openstack-dev] Fwd: [nova] How can I obtain compute_node_id in nova

2014-06-25 Thread afe.yo...@gmail.com
Thanks! I will pay close attention to this patch.


On Wed, Jun 25, 2014 at 11:59 PM, Sylvain Bauza sba...@redhat.com wrote:

  Hi Afe,

 Le 25/06/2014 12:01, afe.yo...@gmail.com a écrit :

 Any help will be greatly appreciated!

 -- Forwarded message --
 From: afe.yo...@gmail.com afe.yo...@gmail.com
 Date: Wed, Jun 25, 2014 at 5:53 PM
 Subject: [nova] How can I obtain compute_node_id in nova
 To: openst...@lists.openstack.org



 I found a bug recently and reported it here
 https://bugs.launchpad.net/nova/+bug/1333498

  The function  requires compute_node_id as its parameter.
 I'm trying  to fix this bug. However I fail to find any way to obtain the
 compute_node_id.


 Thanks for your bug report. There is already a patch proposed for removing
 cn_id in PCITracker [1], so it will close your bug once merged.

 -Sylvain

 [1] https://review.openstack.org/102298

   Any help will be greatly appreciated!






 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Kevin Benton
What is in the variable named 'query' that you are trying to merge into the
session? Can you include the full create_port_postcommit method? The line
raising the exception ends with {'port_id': port_id}) and that doesn't
match anything included in your sample.


On Wed, Jun 25, 2014 at 6:53 PM, Li Ma skywalker.n...@gmail.com wrote:

 Here's a code sample which can raise db lock wait exception:

 def create_port_postcommit(self, context):

 port_id = ...
 with session.begin(subtransactions=True):
 try:
 binding = (session.query(models.PortBinding).
   filter(models.PortBinding.port_id.startswith(port_id)).
   one())
 # Here I modify some attributes if port binding is existed
 session.merge(query)
 except exc.NoResultFound:
 # Here I insert new port binding record to initialize some
 attributes
 except ...
 LOG.error(error happened)

 The exception is as follows:
 2014-06-25 10:05:17.195 9915 ERROR neutron.plugins.ml2.managers
 [req-961680da-ce69-43c6-974c-57132def411d None] Mechanism driver 'hello'
 failed in create_port_postcommit
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers Traceback
 (most recent call last):
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File
 /usr/lib/python2.6/site-packages/neutron/plugins/ml2/managers.py, line
 158, in _call_on_drivers
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 getattr(driver.obj, method_name)(context)
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File
 /usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/mech_hello.py,
 line 95, in create_port_postcommit
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 {'port_id': port_id})
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File
 /usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 402, in __exit__
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 self.commit()
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File
 /usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 314, in commit
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 self._prepare_impl()
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File
 /usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 298, in _prepare_impl
 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 self.session.flush()

 ...

 2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers
 OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded;
 try restarting transaction') 'INSERT INTO ml2_port_bindings (port_id, host,
 vnic_type, profile, vif_type, vif_details, driver, segment) VALUES (%s, %s,
 %s, %s, %s, %s, %s, %s)' (...)

 It seems that the transaction in the postcommit cannot be committed.

 Thanks a lot,
 Li Ma

 - Original Message -
 From: Li Ma skywalker.n...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, June 25, 2014, 6:21:10 PM
 Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when
 developing new mechanism driver

 Hi Kevin,

 Thanks for your reply. Actually, it is not that straightforward.
 Even if postcommit is outside the 'with' statement, the transaction is not
 'truly' committed immediately. Because when I put my db code (reading and
 writing ml2-related tables) in postcommit, db lock wait exception is still
 thrown.

 Li Ma

 - Original Message -
 From: Kevin Benton blak...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, June 25, 2014, 4:59:26 PM
 Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when
 developing new mechanism driver



 The post_commit methods occur outside of the transactions. You should be
 able to perform the necessary database calls there.


 If you look at the code snippet in the email you provided, you can see
 that the 'try' block surrounding the postcommit method is at the same
 indentation-level as the 'with' statement for the transaction so it will be
 closed at that point.


 Cheers,
 Kevin Benton


 --
 Kevin Benton



 On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote:


 Hi all,

 I'm developing a new mechanism driver. I'd like to access ml2-related
 tables in create_port_precommit and create_port_postcommit. However I find
 it hard to do that because the two functions are both inside an existed
 database transaction defined in create_port function of ml2/plugin.py.

 The related code is as follows:

 def create_port(self, context, port):
 ...
 session = context.session
 with session.begin(subtransactions=True):
 ...
 

Re: [openstack-dev] [swift] [trove] Configuration option descriptions

2014-06-25 Thread Anne Gentle
On Wed, Jun 25, 2014 at 7:32 PM, Nikhil Manchanda nik...@manchanda.me
wrote:


 Hi Anne:

 Thanks for bringing this to our attention! This is definitely something
 that we need to fix.

 I've filed https://bugs.launchpad.net/trove/+bug/1334465 to track this
 issue, and I'm hoping we'll be able to get to it during juno-2 (or
 beginning of juno-3 at the latest).

 Once we've got a patch up, I'll probably run it by some doc folks to
 make sure that the help text for the options reads okay. Would
 appreciate the feedback here.


Thanks! I forgot to mention, we do have a style guide for writing config
options:
http://docs.openstack.org/developer/oslo.config/styleguide.html
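For example, what we're looking for is simply a meaningful help string on each
option, along these lines (option names and text invented for illustration):

from oslo.config import cfg

backup_opts = [
    cfg.StrOpt('backup_strategy',
               default='InnoBackupEx',
               help='Strategy to use when creating a backup of a '
                    'database instance.'),
    cfg.IntOpt('backup_chunk_size',
               default=65536,
               help='Chunk size, in bytes, to stream to the object '
                    'store when saving a backup.'),
]

cfg.CONF.register_opts(backup_opts)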

Also I failed to mention that the swift options descriptions are pulled
from the example config files, so those are what need updating.

Thanks,
Anne



 Thanks,
 Nikhil

 Anne Gentle writes:

  Hi swift and trove devs,
  In working on our automation to document the configuration options across
  OpenStack, we uncovered a deficit in both trove and swift configuration
  option descriptions.
 
  Here's an example:
 
 http://docs.openstack.org/trunk/config-reference/content/container-sync-realms-configuration.html
  You'll notice that many of those options do not have help text. We need
  swift developers to fill in those in your code base so we can continue to
  generate this document.
 
  Trove devs, we are finding similar gaps in config option information in
  your source code as well. Please make it a priority prior to our
 automating
  those in the docs.
 
  If you have any questions, let us know through the openstack-docs mailing
  list.
  Thanks,
  Anne
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Barebones CA

2014-06-25 Thread Nathan Kinder


On 06/25/2014 02:42 PM, Clark, Robert Graham wrote:
 
 Ok, I’ll hack together a dev plugin over the next week or so, other work
 notwithstanding. Where possible I’ll probably borrow from the Dogtag
 plugin as I’ve not looked closely at the plugin infrastructure in Barbican
 recently.

My understanding is that Barbican's plugin interface is currently in the
midst of a redesign, so be careful not to copy something that will be
changing shortly.

-NGK

 
 Is this something you’d like a blueprint for first?
 
 -Rob
 
 
 
 
 On 25/06/2014 18:30, Ade Lee a...@redhat.com wrote:
 
 I think the plan is to create a Dogtag instance so that integration
 tests can be run whenever code is checked in (both with and without a
 Dogtag backend).

 Dogtag isn't that difficult to deploy, but being a Java app, it does
 bring in a set of dependencies that developers may not want to deal with
 for basic/ devstack testing.

 So, I agree that a simple OpenSSL CA may be useful at least initially as
 a 'dev' plugin.

 Ade

 On Wed, 2014-06-25 at 16:31 +, Jarret Raim wrote:
 Rob,

 RedHat is working on a backend for Dogtag, which should be capable of
 doing something like that. That's still a bit hard to deploy, so it
 would
 make sense to extend the 'dev' plugin to include those features.


 Jarret


 On 6/24/14, 4:04 PM, Clark, Robert Graham robert.cl...@hp.com wrote:

 Yeah pretty much.

 That's something I'd be interested to work on, if work isn't ongoing
 already.

 -Rob





 On 24/06/2014 18:57, John Wood john.w...@rackspace.com wrote:

 Hello Robert,

 I would actually hope we have a self-contained certificate plugin
 implementation that runs 'out of the box' to enable certificate
 generation orders to be evaluated and demo-ed on local boxes.

 Is this what you were thinking though?

 Thanks,
 John



 
 From: Clark, Robert Graham [robert.cl...@hp.com]
 Sent: Tuesday, June 24, 2014 10:36 AM
 To: OpenStack List
 Subject: [openstack-dev] [Barbican] Barebones CA

 Hi all,

 I'm sure this has been discussed somewhere and I've just missed it.

 Is there any value in creating a basic 'CA' and plugin to satisfy
 tests/integration in Barbican? I'm thinking something that probably
 performs OpenSSL certificate operations itself, ugly but perhaps
 useful
 for some things?

 -Rob

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Solum][Mistral] New class of requirements for Stackforge projects

2014-06-25 Thread Matthew Oliver
On Jun 26, 2014 12:12 PM, Angus Salkeld angus.salk...@rackspace.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 25/06/14 15:13, Clark Boylan wrote:
  On Tue, Jun 24, 2014 at 9:54 PM, Adrian Otto adrian.o...@rackspace.com
wrote:
  Hello,
 
  Solum has run into a constraint with the current scheme for
requirements management within the OpenStack CI system. We have a proposal
for dealing with this constraint that involves making a contribution to
openstack-infra. This message explains the constraint, and our proposal for
addressing it.
 
  == Background ==
 
  OpenStack uses a list of global requirements in the requirements
repo[1], and each project has it’s own requirements.txt and
test-requirements.txt files. The requirements are satisfied by gate jobs
using pip configured to use the pypi.openstack.org mirror, which is
periodically updated with new content from pypi.python.org. One motivation
for doing this is that pypi.python.org may not be as fast or as reliable as
a local mirror. The gate/check jobs for the projects use the OpenStack
internal pypi mirror to ensure stability.
 
  The OpenStack CI system will sync up the requirements across all the
official projects and will create reviews in the participating projects for
any mis-matches. Solum is one of these projects, and enjoys this feature.
 
  Another motivation is so that users of OpenStack will have one single
set of python package requirements/dependencies to install and run the
individual OpenStack components.
 
  == Problem ==
 
  Stackforge projects listed in openstack/requirements/projects.txt that
decide to depend on each other (for example, Solum wanting to list
mistralclient as a requirement) are unable to, because they are not yet
integrated, and are not listed in
openstack/requirements/global-requirements.txt yet. This means that in
order to depend on each other, a project must withdraw from projects.txt
 and begin using pip with pypi.python.org to satisfy all of their
 requirements. I strongly dislike this option.
 
  Mistral is still evolving rapidly, and we don’t think it makes sense
 for them to pursue integration right now. The upstream distributions who
include packages to support OpenStack will also prefer not to deal with a
requirement that will be cutting a new version every week or two in order
to satisfy evolving needs as Solum and other consumers of Mistral help
refine how it works.
 
  == Proposal ==
 
  We want the best of both worlds. We want the freedom to innovate and
use new software for a limited selection of stackforge projects, and still
use the OpenStack pypi server to satisfy my regular requirements. We want
the speed and reliability of using our local mirror, and users of Solum to
use a matching set of requirements for all the things that we use, and
integrated projects use. We want to continue getting the reviews that bring
us up to date with new requirements versions.
 
  We propose that we submit an enhancement to the gate/check job setup
that will:
 
  1) Begin (as it does today) by satisfying global-requirements.txt and
my local project’s requirements.txt and test-requirements.txt using the
local OpenStack pypi mirror.
  2) After all requirements are satisfied, check the name of my project.
If it begins with ‘stackforge/‘ then look for a stackforge-requirements.txt
file. If one exists, reconfigure pip to switch to use pypi.python.org, and
satisfy the requirements listed in the file. We will list mistralclient
there, and get the latest tagged/released version of that.
 
  I am reasonably sure that if you remove yourself from the
  openstack/requirements project list this is basically how it will
  work. Pip is configured to use the OpenStack mirror and fall back on
  pypi.python.org for packages not available on the OpenStack mirror
  [2]. So I don't think there is any work to do here with additional
  requirements files. It should just work. Adding a new requirements
  file will just make things more confusing for packagers and consumers
  of your software.

 Adrian I know this is not the optimal solution, but I think this is
 the most pragmatic solution (esp. given we need to progress and not be
held
 up by this), most stackforge projects are in the same boat as us.
 As far as pypi breakages (most are easily fixable by restricting the
 package versions if we get an issue with a new release
 of *random-api-breaking-package*).


I've looked through the infra choose-mirror code, and Clark is correct. If
the project is in the projects.txt file it will only have access to
pypi.openstack.org; however, if removed, it will first check
pypi.openstack.org and then fall back to pypi.python.org. I think the
only real solution is what Angus mentioned: remove yourself from
projects.txt, at least until all your dependencies can be provided by
pypi.openstack.org or another solution is put into place. In the meantime
you can at least progress and continue development.

If your code requires a direct 

Re: [openstack-dev] Virtual Interface creation failed

2014-06-25 Thread Edgar Magana Perdomo (eperdomo)
The link does not work for me!

Edgar

From: tfre...@redhat.com
Organization: Red Hat
Reply-To: tfre...@redhat.com, OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Wednesday, June 25, 2014 at 6:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Virtual Interface creation failed


Hello,

During the tests of Multiple RPC, I've encountered a problem creating VMs.
Creation of 180 VMs succeeded.

But when I tried to create 200 VMs, some of the VMs failed with a resource
problem (VCPU limitation), and the others failed with the following error:
vm failed -  {message: Virtual Interface creation failed, code: 500, 
created: 2014-06-25T10:22:35Z} | | flavor | nano (10)

We can see from the Neutron server and Nova API logs, that Neutron got the Nova 
request and responded to it, but this connection fails somewhere between Nova 
API and Nova Compute.

Please see the exact logs: http://pastebin.test.redhat.com/217653


Tested with latest Icehouse version on RHEL 7.
Controller + Compute Node

All Nova and Neutron logs are attached.

Is this a known issue?

--
Thanks,
Toni
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Creating new python-new_project_nameclient

2014-06-25 Thread Aaron Rosen
Hi,

I'm looking at creating a new python-new_project_nameclient and I was
wondering if there was any ongoing effort to share code between the
clients or not? I've looked at the code in python-novaclient and
python-neutronclient and both of them seem to have their own homegrown
HTTPClient and keystone integration. Figured I'd ping the mailing list here
before I go on and make my own homegrown HTTPClient as well.

Thanks!

Aaron
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Creating new python-new_project_nameclient

2014-06-25 Thread Brian Curtin
On Wed, Jun 25, 2014 at 10:18 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 I'm looking at creating a new python-new_project_nameclient and I was
 wondering if there was any ongoing effort to share code between the
 clients or not? I've looked at the code in python-novaclient and
 python-neutronclient and both of them seem to have their own homegrown
 HTTPClient and keystone integration. Figured I'd ping the mailing list here
 before I go on and make my own homegrown HTTPClient as well.


It's still quite early, but this is one of the things we're working on with
python-openstacksdk[0][1]. As of today[2] there are reviews for the start
of how Neutron and Glance clients would work out, and I'm working on the
start of a Swift one myself. However, if you need your client written
today, this project isn't there yet.

[0] https://github.com/stackforge/python-openstacksdk
[1] https://wiki.openstack.org/wiki/PythonOpenStackSDK
[2]
https://review.openstack.org/#/q/status:open+project:stackforge/python-openstacksdk,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Creating new python-new_project_nameclient

2014-06-25 Thread Dean Troyer
On Wed, Jun 25, 2014 at 10:18 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 I'm looking at creating a new python-new_project_nameclient and I was
 wondering if there was any ongoing effort to share code between the
 clients or not? I've looked at the code in python-novaclient and
 python-neutronclient and both of them seem to have their own homegrown
 HTTPClient and keystone integration. Figured I'd ping the mailing list here
 before I go on and make my own homegrown HTTPClient as well.


For things at the library level of a client, please consider using
keystoneclient's fairly new session layer as the basis of your HTTP layer.
That will also give you access to the new-style auth plugins, assuming you
want to do Keystone auth with this client.

I'm not sure if Jamie has any examples of using this without leaning on the
backward-compatibility bits that the existing clients need.

The Python SDK is being built on a similar Session layer (without the
backward-compat bits).
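
As a rough, untested sketch of that approach (assuming python-keystoneclient's
session module and v3 auth plugins; the URL and credentials are illustrative):

from keystoneclient import session
from keystoneclient.auth.identity import v3

# The auth plugin plus session own token handling, the service catalog
# lookup and the actual HTTP requests.
auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# A new client then only issues requests against its own service type,
# instead of growing yet another homegrown HTTPClient.
resp = sess.get('/v2.0/networks',
                endpoint_filter={'service_type': 'network',
                                 'interface': 'public'})
print(resp.json())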

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
Sorry, I thought the code was straightforward to understand. I can
explain what I want to do.
I am trying to use postcommit to dynamically set a specific port attribute in
the port binding when a certain port is created.
So, I wrote an example to see whether it works or not. But:

# Excerpt from the mechanism driver; assumes the usual ML2 imports
# (neutron.db.api as db_api, neutron.plugins.ml2 models, portbindings,
# sqlalchemy.orm exc) and a module-level LOG.
def create_port_postcommit(self, context):
    port_id = context._port['id']
    # A brand new session, separate from the one the ML2 plugin used
    # for the create_port transaction.
    session = db_api.get_session()

    # insert some attr in the binding profile
    profile = {'type': 1, 'priority': 2}

    try:
        binding = (session.query(models.PortBinding).
                   filter(models.PortBinding.port_id.startswith(port_id)).
                   one())
        binding.profile = str(profile)
        session.merge(binding)
    except exc.NoResultFound:
        binding = models.PortBinding(
            port_id=port_id,
            vif_type=portbindings.VIF_TYPE_UNBOUND,
            profile=str(profile))
        session.add(binding)
    except Exception:
        LOG.error(_("Error with port %(port_id)s"),
                  {'port_id': port_id})  # -- line 95 in the traceback below

Message is as follows:

2014-06-25 10:05:17.195 9915 ERROR neutron.plugins.ml2.managers 
[req-961680da-ce69-43c6-974c-57132def411d None] Mechanism driver 'hello' failed 
in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last):
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/managers.py, line 158, 
in _call_on_drivers
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/mech_hello.py, 
line 95, in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers {'port_id': 
port_id})
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 402, in __exit__
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.commit()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 314, in commit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self._prepare_impl()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 298, in _prepare_impl
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.session.flush()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/openstack/common/db/sqlalchemy/session.py,
 line 597, in _wrap
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers return 
f(*args, **kwargs)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/openstack/common/db/sqlalchemy/session.py,
 line 836, in flush
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers return 
super(Session, self).flush(*args, **kwargs)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 1583, in flush
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self._flush(objects)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 1654, in _flush
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
flush_context.execute()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py,
 line 331, in execute
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
rec.execute(self)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py,
 line 475, in execute
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers uow
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py,
 line 64, in save_obj
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers table, 
insert)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py,
 line 530, in _emit_insert_statements
2014-06-25 10:05:17.195 9915 TRACE 
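
One way to avoid contending with the transaction the ML2 plugin itself holds
is to do the write in precommit and reuse the session the plugin already has
open, rather than opening a second session that may block on the same rows.
A minimal, untested sketch along those lines (same assumed imports as above;
the profile values are just placeholders):

def create_port_precommit(self, context):
    # Runs inside the plugin's ongoing transaction, so reuse its session
    # instead of creating a new one with db_api.get_session().
    session = context._plugin_context.session
    port_id = context.current['id']

    binding = (session.query(models.PortBinding).
               filter_by(port_id=port_id).
               first())
    if binding:
        # ML2 normally creates the binding row before precommit is called,
        # so updating its profile here is usually enough.
        binding.profile = str({'type': 1, 'priority': 2})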

Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-06-25 Thread Yi Sun



Another approach would be to use a single IP address per router
per compute
node.  This avoids the multi-tenant issue mentioned above, at the
cost of
consuming more IP addresses, potentially one default SNAT IP
address for each
VM on the compute server (which is the case when every VM on the
compute node
is from a different tenant and/or using a different router).  At
that point
you might as well give each VM a floating IP.

Hence the approach taken with the initial DVR implementation is to
keep
default SNAT as a centralized service.


 In contrast to moving services to distributed compute nodes, we should take
 care to keep some of them centralized, especially FIP and FW. I know a lot
 of customers prefer using dedicated servers as network nodes, which have
 more NICs (for external connections) than compute nodes; in these cases FIP
 must be centralized instead of distributed. As for FW, if we want stateful
 ACLs then DVR can do nothing, unless we consider security groups to already
 be some kind of FW.


+1, I had another email discussing FW (FWaaS) and DVR integration.
Traditionally, we run the firewall with the router so that the firewall can
use route and NAT info from the router. Since DVR is asymmetric when handling
traffic, it is hard to run a stateful firewall on top of DVR the way a
traditional firewall does. When NAT is in the picture, the situation can be
even worse.

Yi




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting June 26 1800 UTC

2014-06-25 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140626T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] How can I enable operation for non-admin user

2014-06-25 Thread Scott Devoid
Hi Chen,


 I’m not an experienced developer, so, could you explain more about
  “Perhaps the live_migrate task is passing the incorrect context in for
 this database query?”?

Sorry, I should have clarified that the question was for the developers
*out there* (cc'ing the dev list now). I'm not really a developer either, so
we will have to see what they say. ;-)




 Here is what I understand.

 The issue is basically caused by  @require_admin_context for
 db.service_get_by_compute_host().

Yes, the request is failing because @require_admin_context only checks for
the admin role in the context. It's somewhat of a holdover from when
there was just admin and everything else.


 Then, should this be a bug?


Possibly. I can see why db.service_get_by_compute_host() should be an
admin-only call, but I am assuming that there must be a way for nova to
switch the running context to itself once it has authorized the
live-migrate task.

But I suspect few people have tried to allow non-admins to live-migrate,
and this is just a bug from that.
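
For illustration, the kind of change being suggested might look roughly like
this in the conductor task (untested fragment; it leans on nova's
RequestContext.elevated() helper):

# nova/conductor/tasks/live_migrate.py -- illustrative fragment only
def _check_host_is_up(self, host):
    # Do the admin-only DB lookup under an elevated copy of the request
    # context once the live-migrate request itself has passed the policy
    # check, instead of requiring the caller to carry the admin role.
    service = db.service_get_by_compute_host(self.context.elevated(), host)
    if not self.servicegroup_api.service_is_up(service):
        raise exception.ComputeServiceUnavailable(host=host)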

Why does the “nova migrate” command not need to check the compute host?


Sorry, this is a bit fastidious, but I think nova live-migrate is what
you mean here. nova migrate, I think, is still a completely separate
code-path. live-migrate needs to talk to both the source and destination
nova-compute services to coordinate and confirm the migration.






 Thanks.

 -chen



 *From:* Scott Devoid [mailto:dev...@anl.gov]
 *Sent:* Thursday, June 26, 2014 9:34 AM
 *To:* Li, Chen
 *Cc:* Sushma Korati; openst...@lists.openstack.org
 *Subject:* Re: [Openstack] How can I enable operation for non-admin user



 Hi Li,



 The problem here is that db.service_get_by_compute_host() requires admin
 context. [1] The live_migrate command needs to check that both hosts have a
 running nova-compute service before it begins migration. Perhaps the
 live_migrate task is passing the incorrect context in for this database
 query? [2] I would think that conductor should be running under its own
 context and not the caller's context? (Devs?)



 And before someone comments that migration should always be *admin-only*,
 I'll point out that there are legitimate reasons an operator might want to
 give someone migrate permissions and not all admin permissions.



 ~ Scott



 [1]
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L485

 [2]
 https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L87



 On Tue, Jun 24, 2014 at 9:11 PM, Li, Chen chen...@intel.com wrote:

 Hi Sushma,



 Thanks for the reply.



 Well, editing /etc/nova/policy.json does work for the “nova migrate” command.



 But when I run the “nova live-migration” command, I still get errors in
  /var/log/nova/conductor.log:





 2014-06-25 02:07:23.897 115385 INFO oslo.messaging._drivers.impl_qpid [-]
 Connected to AMQP server on 192.168.40.122:5672

 2014-06-25 02:08:59.221 115395 ERROR nova.conductor.manager
 [req-63f0a004-ef69-47f4-aefb-e0fa194d99b9 fa970646fa92442fa14b2b759cf381a6
 2eb6bd3a69ad454a90489dd12b9cdf3b] Migration of instance
 446d96d7-2073-46ac-b40c-0f167fbd04b2 to host None unexpectedly failed.

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager Traceback
 (most recent call last):

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/conductor/manager.py, line 757, in
 _live_migrate

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager
 block_migration, disk_over_commit)

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py,
 line 191, in execute

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager return
 task.execute()

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py,
 line 56, in execute

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager
 self._check_host_is_up(self.source)

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py,
 line 87, in _check_host_is_up

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager service =
 db.service_get_by_compute_host(self.context, host)

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/db/api.py, line 129, in
 service_get_by_compute_host

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager return
 IMPL.service_get_by_compute_host(context, host)

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py, line 145, in
 wrapper

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager
 nova.context.require_admin_context(args[0])

 2014-06-25 02:08:59.221 115395 TRACE nova.conductor.manager   File
 /usr/lib/python2.6/site-packages/nova/context.py, 

Re: [openstack-dev] [Keystone] Token invalidation in deleting role assignments

2014-06-25 Thread Takashi Natsume
Dolph,

Thank you so much.
I understand that using the OS-REVOKE extension will solve these issues.

Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.taka...@lab.ntt.co.jp

From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: Thursday, June 26, 2014 12:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone] Token invalidation in deleting role 
assignments

This is a known limitation of the token backend and the token revocation list: 
we don't index tokens in the backend by roles (and we don't want to iterate the 
token table to find matching tokens).

However, if we land support for token revocation events [1] in the auth_token 
[2] middleware, we'll be able to deny tokens with invalid roles as they are 
presented to other services.

[1] 
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-revoke-ext.md
[2] https://launchpad.net/keystonemiddleware
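
Roughly speaking, a revocation event only constrains the attributes it names,
so a token survives unless every named attribute matches. An illustrative
sketch (not keystone code) of why this avoids the over-invalidation described
above:

def is_revoked(token, events):
    # A token is revoked only if some event postdates it and every attribute
    # the event names (user, project, role, ...) matches the token.
    for event in events:
        if token['issued_at'] >= event['revoked_at']:
            continue
        if event.get('user_id') and event['user_id'] != token['user_id']:
            continue
        if event.get('project_id') and event['project_id'] != token.get('project_id'):
            continue
        if event.get('role_id') and event['role_id'] not in token.get('role_ids', []):
            continue
        return True
    return False

# Removing the role on Project1 yields an event naming Project1, so a token
# scoped to Project2 keeps working.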

On Wed, Jun 25, 2014 at 1:19 AM, Takashi Natsume 
natsume.taka...@lab.ntt.co.jp wrote:
Hi all,

When deleting role assignments, not only tokens that are related with
deleted role assignments but also other tokens that the(same) user has are
invalidated in stable/icehouse(2014.1.1).

For example,
A) Role assignment between domain and user by OS-INHERIT(*1)
1. Assign a role(For example,'Member') between 'Domain1' and 'user' by
OS-INHERIT
2. Assign the role('Member') between 'Domain2' and 'user' by OS-INHERIT
3. Get a token with specifying 'user' and 'Project1'(in 'Domain1')
4. Get a token with specifying 'user' and 'Project2'(in 'Domain2')
5. Create reources(For example, cinder volumes) in 'Project1' with the token
that was gotten in 3.
it is possible to create them.
6. Create reources in 'Project2' with the token that was gotten in 4.
it is possible to create them.
7. Delete the role assignment between 'Domain1' and 'user' (that was added
in 1.)

(After validated token cache is expired in cinder, etc.)
8. Create reources in 'Project1' with the token that was gotten in 3.
it is not possible to create them. 401 Unauthorized.
9. Create reources in 'Project2' with the token that was gotten in 4.
it is not possible to create them. 401 Unauthorized.

In 9., my expectation is that it is possible to create resources with the
token that was gotten in 4..

*1:
v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_
to_projects

B) Role assignment between project and user
1. Assign a role(For example,'Member') between 'Project1' and 'user'
2. Assign the role('Member') between 'Project2' and 'user'
3. Get a token with specifying 'user' and 'Project1'
4. Get a token with specifying 'user' and 'Project2'
5. Create reources(For example, cinder volumes) in 'Project1' with the token
that was gotten in 3.
it is possible to create them.
6. Create reources in 'Project2' with the token that was gotten in 4.
it is possible to create them.
7. Delete the role assignment between 'Project1' and 'user' (that was added
in 1.)

(After validated token cache is expired in cinder, etc.)
8. Create reources in 'Project1' with the token that was gotten in 3.
it is not possible to create them. 401 Unauthorized.
9. Create reources in 'Project2' with the token that was gotten in 4.
it is not possible to create them. 401 Unauthorized.

In 9., my expectation is that it is possible to create resources with the
token that was gotten in 4..


Are these bugs?
Or are there any reasons to implement these ways?

Regards,
Takashi Natsume
NTT Software Innovation Center
Tel: +81-422-59-4399
E-mail: natsume.taka...@lab.ntt.co.jp




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Questions about test policy for scenario test

2014-06-25 Thread Frittoli, Andrea (HP Cloud)
There's a spec in progress related to this; I'd love to see your comments
there:

https://review.openstack.org/#/c/94741/

Andrea

Sent from my tiny device


 Daryl Walleck wrote 

I really like this option, especially if it leaves a generic hook available for
validation. This could allow different types of compute validators, such as
hypervisor-specific or third-party compute validators, to be implemented.

From: Fei Long Wang feil...@catalyst.net.nz
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, June 25, 2014 at 5:06 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [QA] Questions about test policy for scenario test

Good to know. I think it's a good idea to implement a common compute verifier
that runs after instances boot. Maybe we can define different checking levels
so that it can be leveraged by different test cases. I will see what I can do.
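
A minimal sketch of the kind of post-boot check being discussed, using
paramiko directly for illustration (in Tempest this would more naturally sit
behind the existing RemoteClient/validation hooks):

import time
import paramiko

def wait_for_ssh(ip, username, key_file, timeout=300):
    # Poll until the guest actually answers over ssh, rather than trusting
    # the ACTIVE status the API reports after boot or live migration.
    deadline = time.time() + timeout
    while time.time() < deadline:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(ip, username=username, key_filename=key_file,
                           timeout=10)
            _, stdout, _ = client.exec_command('uname -a')
            return stdout.read()
        except Exception:
            time.sleep(5)
        finally:
            client.close()
    raise RuntimeError('instance %s never became reachable over ssh' % ip)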

On 24/06/14 22:27, Sean Dague wrote:

On 06/24/2014 01:29 AM, Fei Long Wang wrote:


Greetings,

We're leveraging the scenario tests of Tempest to do end-to-end
functional testing, to make sure everything works after upgrades,
patching, etc., and we're happy to fill any gaps we find. However, I'm a
little bit confused about the test policy from the scenario test
perspective, especially compared with the API tests. IMHO, a scenario test
covers typical workflows of one specific service or of mixed
services, and it would be nice to make sure the function is really
working instead of just checking the object status from the OpenStack
perspective. Is that correct?

For example, live migration in Nova has been covered in the API tests of
Tempest (see
https://github.com/openstack/tempest/blob/master/tempest/api/compute/test_live_block_migration.py).
But as you can see, it only checks whether the instance is Active instead
of checking whether the instance can be logged into via ssh. Obviously,
from a real-world view, we'd like to check whether it is actually working. So
the question is, should this be improved? If so, should the enhanced code
live in the API tests, the scenario tests, or somewhere else? Thank you.


The fact that computes aren't verified fully during the API testing is
mostly historical. I think they should be. The run_ssh flag used to be
used for this; however, because of some long-standing race conditions in
the networking stack, it couldn't be turned on in upstream
testing. My guess is that it has rotted by now.

We've had some conversations in the QA team about a compute verifier
that would be run after any of the compute jobs to make sure they booted
correctly, and more importantly, did a very consistent set of debug
capture when they didn't. Would be great if that's something you'd like
to help out with.

-Sean





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >