Re: [openstack-dev] [keystone] Output on stderr

2015-03-04 Thread Dolph Mathews
On Wednesday, March 4, 2015, David Stanek dsta...@dstanek.com wrote:


 On Wed, Mar 4, 2015 at 6:50 AM, Abhishek Talwar/HYD/TCS
 abhishek.tal...@tcs.com wrote:

 While working on a bug for keystoneclient I have replaced sys.exit with
 return. However, the code reviewers want the output to go to stderr (as
 sys.exit does). So how can we get the output on stderr?


 The print function allows you to specify a file:

  from __future__ import print_function
  import sys
  print('something to stderr', file=sys.stderr)

 The __future__ import is needed for Python 2.6/2.7 because print was
 changed to a function in Python 3.
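
A minimal sketch of the overall pattern under discussion, combining the
print-to-stderr idiom above with a returned exit code in place of sys.exit
(the function and message are illustrative, not taken from the actual
keystoneclient patch):

    from __future__ import print_function
    import sys

    def do_command(args):
        if not args:
            # report the problem on stderr, but return instead of sys.exit()
            print('error: no arguments given', file=sys.stderr)
            return 1
        return 0

    if __name__ == '__main__':
        # the caller stays in charge of turning the result into an exit code
        sys.exit(do_command(sys.argv[1:]))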


I hope that answers your question. Can we have a link to the bug and/or
code review?




 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com



[openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-04 Thread Assaf Muller
Hello everyone,

I'd like to highlight an issue with:
https://review.openstack.org/#/c/154670/

According to my understanding, most deployments upgrade the controllers first
and compute/network nodes later. During that time period, all agents will
fail to report state as they're sending the report_state message outside
of any namespace while the server is expecting that message in a namespace.
This is a show stopper as the Neutron server will think all of its agents are 
dead.

I think the obvious solution is to modify the Neutron server code so that
it accepts the report_state method both in and outside of the report_state
RPC namespace, and chuck that code away in L (assuming we support rolling
upgrades only from version N to N+1, which, while unfortunate, is the
behavior I've seen in multiple places in the code).
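
A rough sketch of what such a dual registration could look like with
oslo.messaging, where an endpoint without a namespace is dispatched for
messages sent outside any namespace (the class names, namespace string and
topic below are illustrative assumptions, not the actual Neutron code):

    from oslo_config import cfg
    import oslo_messaging as messaging

    class PluginReportStateAPI(object):
        # upgraded agents send report_state inside a namespace
        target = messaging.Target(namespace='agent', version='1.0')

        def report_state(self, context, agent_state, time):
            pass  # record the agent heartbeat here

    class LegacyPluginReportStateAPI(PluginReportStateAPI):
        # the same handler, re-exposed without a namespace for Juno agents
        target = messaging.Target(version='1.0')

    transport = messaging.get_transport(cfg.CONF)
    server_target = messaging.Target(topic='q-reports-plugin',
                                     server='server-1')
    endpoints = [PluginReportStateAPI(), LegacyPluginReportStateAPI()]
    server = messaging.get_rpc_server(transport, server_target, endpoints)

Once all agents are upgraded (L in the proposal above), the legacy endpoint
could simply be deleted.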

Finally, are there additional similar issues for other RPC methods placed in
a namespace this cycle?


Assaf Muller, Cloud Networking Engineer
Red Hat



[openstack-dev] [nova] Plans to fix numa_topology related issues with migration/resize/evacuate

2015-03-04 Thread Wensley, Barton
Hi,

I have been exercising the numa topology related features in kilo (cpu 
pinning, numa topology, huge pages) and have seen that there are issues
when an operation moves an instance between compute nodes. In summary,
the numa_topology is not recalculated for the destination node, which 
results in the instance running with the wrong topology (or even 
failing to run if the topology isn't supported on the destination). 
This impacts live migration, cold migration, resize and evacuate.

I have spent some time over the last couple weeks and have a working 
fix for these issues that I would like to push upstream. The fix for
cold migration and resize is the most straightforward, so I plan to
start there.

At a high level, here is what I have done to fix cold migrate and 
resize:
- Add the source_numa_topology and dest_numa_topology to the migration 
  object and migrations table (see the sketch after this list).
- When a resize_claim is done, store the claimed numa topology in the
  dest_numa_topology in the migration record. Also store the current 
  numa topology as the source_numa_topology in the migration record.
- Use the source_numa_topology and dest_numa_topology from the 
  migration record in the resource accounting when referencing 
  migration claims as appropriate. This is done for claims, dropped 
  claims and the resource audit.
- Set the numa_topology in the instance after the cold migration/resize
  is finished to the dest_numa_topology from the migration object - 
  done in finish_resize RPC on the destination compute to match where 
  the rest of the resources for the instance are updated (there is a 
  call to _set_instance_info here that sets the memory, vcpus, disk 
  space, etc... for the migrated instance).
- Set the numa_topology in the instance if the cold migration/resize is 
  reverted to the source_numa_topology from the migration object - 
  done in finish_revert_resize RPC on the source compute.
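
As a sketch of the first bullet, the new fields on the versioned object
might look roughly like this (the field names are from the proposal above,
but the surrounding code is an illustrative assumption, not the actual
patch):

    # hypothetical additions to nova/objects/migration.py
    from nova.objects import base
    from nova.objects import fields

    class Migration(base.NovaPersistentObject, base.NovaObject):
        fields = {
            # ... existing fields (source_compute, dest_compute, ...) elided
            'source_numa_topology': fields.ObjectField('InstanceNUMATopology',
                                                       nullable=True),
            'dest_numa_topology': fields.ObjectField('InstanceNUMATopology',
                                                     nullable=True),
        }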

I would appreciate any comments on my approach. I plan to start
submitting the code for this against bug 1417667 - I will split it
into several chunks to make it easier to review.

Fixing live migration was significantly more effort - I'll start a
different thread on that once I have feedback on the above approach.

Thanks,

Bart Wensley, Member of Technical Staff, Wind River




Re: [openstack-dev] [Policy][Group-based-policy] SFC USe Case

2015-03-04 Thread Yapeng Wu
Venkat,

Service Function chaining (redirect action) is not supported right now.

Yapeng


From: groupbasedpolicy-dev-boun...@lists.opendaylight.org 
[mailto:groupbasedpolicy-dev-boun...@lists.opendaylight.org] On Behalf Of 
Venkatrangan G - ERS, HCL Tech
Sent: Wednesday, March 04, 2015 4:38 AM
To: openstack-dev@lists.openstack.org
Cc: groupbasedpolicy-...@lists.opendaylight.org
Subject: [groupbasedpolicy-dev] [Policy][Group-based-policy] SFC USe Case

Hi,
 I am trying out the procedure listed on this page:
https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallODLIntegrationDevstack.
I was looking for a use case with Service Function chaining, i.e. to use 
redirect with the gbp command.
Can you please point me to some link that can help me try the same with 
GBP?

Regards,
Venkat G.






Re: [openstack-dev] [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode

2015-03-04 Thread Daniel P. Berrange
On Wed, Mar 04, 2015 at 02:52:06PM +0000, Jiang, Yunhong wrote:
 Hi, Daniel
   I'm a bit confused about the None/'none' for CONF.libvirt.cpu_mode. Per my
 understanding, None means no configuration was provided and libvirt will
 select the default value based on the virt_type, while 'none' means no
 cpu_mode information should be provided for the guest. Am I right?
 
   In _get_guest_cpu_model_config() in virt/libvirt/driver.py,
 if mode is 'none', the kvm/qemu virt_type will return a
 vconfig.LibvirtConfigGuestCPU() while other virt types will return None.
 What's the reason for this difference in return values?

The LibvirtConfigGuestCPU object is used for more than just configuring
the CPU model. It is also used for expressing CPU topology (sockets, cores,
threads) and NUMA topology. So even if the cpu model is None, we still need
that object in the kvm case.
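
For illustration, the kind of guest <cpu> element that object can still
produce when no model is set, carrying topology only (a hand-written
example, not actual driver output):

    <cpu>
      <topology sockets='1' cores='2' threads='2'/>
    </cpu>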

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



[openstack-dev] [sahara] team meeting Mar 5 1800 UTC

2015-03-04 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting in #openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20150305T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-04 Thread Miguel Ángel Ajo
I agree with Assaf: this is an issue across updates, and
we may want (if that’s technically possible) to provide
access to those functions with/without namespace.

Or otherwise think about reverting for now until we find a
migration strategy:

https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z


Best regards,
Miguel Ángel Ajo


On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:

 Hello everyone,
  
 I'd like to highlight an issue with:
 https://review.openstack.org/#/c/154670/
  
 According to my understanding, most deployments upgrade the controllers first
 and compute/network nodes later. During that time period, all agents will
 fail to report state as they're sending the report_state message outside
 of any namespace while the server is expecting that message in a namespace.
 This is a show stopper as the Neutron server will think all of its agents are 
 dead.
  
 I think the obvious solution is to modify the Neutron server code so that
 it accepts the report_state method both in and outside of the report_state
 RPC namespace, and chuck that code away in L (assuming we support rolling
 upgrades only from version N to N+1, which, while unfortunate, is the
 behavior I've seen in multiple places in the code).
  
 Finally, are there additional similar issues for other RPC methods placed
 in a namespace this cycle?
  
  
 Assaf Muller, Cloud Networking Engineer
 Red Hat
  
  
  




Re: [openstack-dev] Python 3 is dead, long live Python 3

2015-03-04 Thread Ihar Hrachyshka

On 03/03/2015 03:37 PM, Doug Hellmann wrote:
 
 
 On Tue, Mar 3, 2015, at 07:56 AM, Ihar Hrachyshka wrote:
 On 02/02/2015 05:15 PM, Jeremy Stanley wrote:
 After a long wait and much testing, we've merged a change[1]
 which moves the remainder of Python 3.3 based jobs to Python
 3.4. This is primarily in service of getting rid of the
 custom workers we implemented to perform 3.3 testing more
 than a year ago, since we can now run 3.4 tests on normal
 Ubuntu Trusty workers (with the exception of a couple
 bugs[2][3] which have caused us to temporarily suspend[4]
 Py3K jobs for oslo.messaging and oslo.rootwrap).
 
 I've personally tested `tox -e py34` on every project hosted
 in our infrastructure which was gating on Python 3.3 jobs and
 they all still work, so you shouldn't see any issues arise
 from this change. If you do, however, please let the
 Infrastructure team know about it as soon as possible.
 Thanks!
 
 [1] https://review.openstack.org/151713
 [2] https://launchpad.net/bugs/1367907
 [3] https://launchpad.net/bugs/1382607
 [4] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html



 
 The switch broke the Icehouse stable branch for oslo-incubator [1] since
 those jobs run on Precise and not Trusty.
 
 Does anyone have ideas how to fix it?
 
 The incubator python 3 job was added to help us port incubated
 code to python 3 before graduating it. We won't be graduating
 modules from the stable branch, so as long as none of the
 consuming projects have python 3 jobs on their stable branches we
 can just drop the icehouse python 3 job for oslo-incubator.
 

Got it: https://review.openstack.org/161303

/Ihar



[openstack-dev] [Cinder] scheduler_default_filters are assumed to be work with 'create' only?

2015-03-04 Thread Pradip Mukhopadhyay
Hello Experts,



What we're trying to do is pass the volume-type in get_pools() and
filter out the pools matching (well, string matching) the extra-specs
k,v pairs that may be present in the volume-type.

So we were looking into the create_volume flow. Inside the
filter_scheduler's *schedule_create_volume* method, the *request_specs* are
pumped into the *filter_properties*. Later on, during
*get_weighted_candidates*, the pumped-up *filter_properties* are
passed to *get_filtered_hosts* of HostManager. This, in turn, calls the
three filters (availability-zone, capacity and capability) one by one.

Inside the capabilities filter, it looks like the code *assumes* a fixed
structure for the *filter_properties*. Now the questions are:


1. What is the difference b/w request-spec and the filter-properties? For
create, we merely pass the vol-type and the size. How are request-spec and
filter-properties linked to the given inputs to volume create?

2. One can potentially define her own filters or weighers. Does the fixed
structure of the filter-properties, pumped up with request-specs, have to be
assumed? (See the sketch below.)

3. Is there a way, available or possible, to adapt the volume-type (the
output of *db.volume_type_extra_specs_get(vol-id)*) into exactly what the
filters (specifically the capability filter) expect?
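
To make question 2 concrete, a minimal sketch of a custom host filter
reading that assumed structure (the key names follow the in-tree
capabilities filter as of Kilo, but treat them, and the base class path, as
assumptions to verify against the code):

    # sketch of a custom filter; the base class path may differ by release
    from cinder.openstack.common.scheduler import filters

    class ExtraSpecsMatchFilter(filters.BaseHostFilter):
        """Pass only hosts whose capabilities match the volume type."""

        def host_passes(self, host_state, filter_properties):
            # the scheduler copies request_spec['volume_type'] into
            # filter_properties['resource_type']
            resource_type = filter_properties.get('resource_type') or {}
            extra_specs = resource_type.get('extra_specs') or {}
            for key, required in extra_specs.items():
                if host_state.capabilities.get(key) != required:
                    return False
            return True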



Thanks a lot in advance,
Pradip  (IRC nick: pradipm)


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-04 Thread Doug Hellmann


On Wed, Mar 4, 2015, at 07:24 AM, Alexis Lee wrote:
 John Griffith john.griffi...@gmail.com writes:
  Should we just rename this thread to Sensitivity training for
  contributors?
 
 Culture plays a role in shaping one's expectations here. I was lucky
 enough to grow up in open source culture, so I can identify an automated
 response easily and I don't take it too seriously. Not everyone has the
 same culture and although I agree we need to confront these gaps when
 they impede us, it's more constructive to reach out and bridge the gap
 than blame the outsider.
 
 
 James E. Blair said on Tue, Mar 03, 2015 at 07:49:03AM -0800:
  If I had to deprioritize something I was working on and it was
  auto-abandoned, I would not find out.
 
 You should receive messages from Gerrit about this. If you've made the
 choice to disable or delete those messages, you should expect not to be
 notified. The review dropping out of your personal dashboard active
 review queue is a problem though - an email can be forgotten.

I used to use email to track such things, but I have reached the point
where keeping up with the push notifications from gerrit would consume
all of my waking time. I currently have 2931 unread messages in my
filtered mailbox representing comments only on changesets I have
submitted or reviewed, which does not include changes in repositories I
am interested in or where I am a core reviewer. I'm sure that's a small
number compared to some of our other, more active developers.

We all have different workflows and ways to keep up. We can't assume
that the solution any one of us uses will work for everyone involved in
the project -- we work differently and we have different scopes in terms
of number of repos we watch, or even the areas within a repository that
we care about.

Jim's proposal to provide a variation on one tool based on gerrit
queries is a small change, and seems more reasonable than what appears
socially as throwing away someone's work. Based on his arguments in this
thread, I am going to stop abandoning patches in Oslo repositories (I
was doing it by hand, rather than with a script) and start leaving
comments suggesting that authors may want to update or abandon their
patch instead. We have a good dashboard [1] set up thanks to Sean's
dashboard creator tool [2], and I use it with reasonably good results
when I sit down to do reviews.

[1] https://wiki.openstack.org/wiki/Oslo#Review_Links
[2] http://git.openstack.org/cgit/stackforge/gerrit-dash-creator

 
 For what little it's worth, I think having a community-wide definition
 of inactive and flagging that in the system makes sense. This helps us
 maintain a clear and consistent policy in our tools. However, I've come
 around to see that abandon semantics don't fit that flag. We need to
 narrow in on what inactive really means and what effects the flag should
 have. We may need two flags, one for 'needs submitter attention' and one
 for 'needs review attention'.

As Jim and others have pointed out, we can identify those changesets
using existing attributes rather than having to add a new flag. For
example, that Oslo dashboard includes a section for patches that haven't
been reviewed in the last 2 days and another for changes that were
submitted in the last 5 days and have no reviews at all. All of the
queries filter out patches failing Jenkins tests [3], so it's possible
for us to ignore those easily. Gerrit's query language is a bit clunky,
but it is quite powerful for building these sorts of useful views.

[3]
http://git.openstack.org/cgit/stackforge/gerrit-dash-creator/tree/dashboards/oslo-program.dash
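
For flavor, a hand-written fragment in the .dash format that tool consumes
(the title, project and operators below are illustrative, not copied from
oslo-program.dash):

    [dashboard]
    title = Example review inbox
    foreach = status:open project:openstack/oslo.log

    [section "Failing Jenkins (usually skip these)"]
    query = label:Verified<=-1,jenkins

    [section "No reviews at all yet"]
    query = NOT is:reviewed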

Doug

 
 
 Alexis
 -- 
 Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
 



[openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-04 Thread ozamiatin

Hi,

By this e-mail I'd like to start a discussion about the current zmq driver's
internal design problems I've found.
I wish to collect here all proposals and known issues. I hope this
discussion will be continued at the Liberty design summit,
and that it will drive our further zmq driver development efforts.

ZMQ driver issues list (issues are numbered with # and references are
in []):


1. ZMQContext per socket (blocker: neutron's improper use of messaging
via fork) [3]


2. Too many different contexts.
We have InternalContext used for ZmqProxy, RPCContext used in
ZmqReactor, and ZmqListener.
There is also zmq.Context, which is a zmq API entity. We need to
consider unifying their usage via inheritance (maybe
stick to RPCContext)
or hiding them as internal entities in their modules (see
refactoring #6).



3. Topic-related code everywhere. We have no topic entity; it is all
string operations.
We need some topic-management entity, and the topic itself as an entity
(not a string).

It causes issues like [4], [5]. (I'm already working on it.)
There was a related spec [7].


4. Manual implementation of messaging patterns.
   Now we can observe poor usage of zmq features in the zmq driver. Almost
everything is implemented over PUSH/PULL.


4.1 Manual polling - use zmq.Poller instead (listening and replying on
multiple sockets; see the sketch below)

4.2 Manual request/reply implementation for call [1].
Using REQ/REP (ROUTER/DEALER) sockets solves many issues. A
lot of code may be reduced.

4.3 Manual waiting for timeouts
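
For reference, the kind of loop #4.1 points at: a minimal pyzmq sketch that
serves two sockets from a single poller (the endpoints are placeholders,
not the driver's real addresses):

    import zmq

    ctx = zmq.Context.instance()
    puller = ctx.socket(zmq.PULL)
    puller.bind('tcp://*:5557')        # placeholder endpoint
    replier = ctx.socket(zmq.REP)
    replier.bind('tcp://*:5558')       # placeholder endpoint

    poller = zmq.Poller()
    poller.register(puller, zmq.POLLIN)
    poller.register(replier, zmq.POLLIN)

    while True:
        # one poll call multiplexes every registered socket
        for sock, _event in poller.poll(timeout=1000):
            msg = sock.recv()
            if sock is replier:
                sock.send(b'ack')      # REP sockets must answer each request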


5. Add the possibility to work without eventlet [2]. #4.1 is also related
here; we can reuse many of the implemented solutions,
   like zmq.Poller over asynchronous sockets in one separate thread
(instead of spawning on each new socket).

   I will update the spec [2] on that.


6. Put all zmq-driver-related stuff (matchmakers, most classes from
zmq_impl) into a separate package.
   Don't keep all classes (ZmqClient, ZmqProxy, Topics management,
ZmqListener, ZmqSocket, ZmqReactor)
   in one impl_zmq.py module.

_drivers (package)
+-- impl_rabbit.py
+-- impl_zmq.py - leave only ZmqDriver class here
+-- zmq_driver (package)
|+--- matchmaker.py
|+--- matchmaker_ring.py
|+--- matchmaker_redis.py
|+--- matchmaker_.py
...
|+--- client.py
|+--- reactor.py
|+--- proxy.py
|+--- topic.py
...

7. Need more technical documentation on the driver, like [6].
   I'm willing to prepare a current driver architecture overview with
some graphical UML charts, and to continue discussing the driver architecture.


Please feel free to add to or argue about any issue; I'd like to have
your feedback on these issues.

Thanks.

Oleksii Zamiatin


References:

[1] https://review.openstack.org/#/c/154094/
[2] https://review.openstack.org/#/c/151185/
[3] https://review.openstack.org/#/c/150735/
[4] https://bugs.launchpad.net/oslo.messaging/+bug/1282297
[5] https://bugs.launchpad.net/oslo.messaging/+bug/1381972
[6] https://review.openstack.org/#/c/130943/8
[7] https://review.openstack.org/#/c/144149/1



Re: [openstack-dev] [nova] Plans to fix numa_topology related issues with migration/resize/evacuate

2015-03-04 Thread Nikola Đipanov
On 03/04/2015 03:17 PM, Wensley, Barton wrote:
 Hi,
 
 I have been exercising the numa topology related features in kilo (cpu 
 pinning, numa topology, huge pages) and have seen that there are issues
 when an operation moves an instance between compute nodes. In summary,
 the numa_topology is not recalculated for the destination node, which 
 results in the instance running with the wrong topology (or even 
 failing to run if the topology isn't supported on the destination). 
 This impacts live migration, cold migration, resize and evacuate.
 
 I have spent some time over the last couple weeks and have a working 
 fix for these issues that I would like to push upstream. The fix for
 cold migration and resize is the most straightforward, so I plan to
 start there.
 

First of all, thanks for all the hard work on this. Some comments on the
proposed changes below - but as usual it's best to see the code :)

 At a high level, here is what I have done to fix cold migrate and 
 resize:
 - Add the source_numa_topology and dest_numa_topology to the migration 
   object and migrations table.

Migration has access to the instance, and thus access to the current
topology. Also it seems that we actually always load the instance when
we query for migrations in the resource tracker.

Also - it might be better to have something akin to the 'new_' flavor for
the new topology so we can store both in the instance_extra table, which
would be slightly more consistent.

Again - best to see the code first.

 - When a resize_claim is done, store the claimed numa topology in the
   dest_numa_topology in the migration record. Also store the current 
   numa topology as the source_numa_topology in the migration record.
 - Use the source_numa_topology and dest_numa_topology from the 
   migration record in the resource accounting when referencing 
   migration claims as appropriate. This is done for claims, dropped 
   claims and the resource audit.
 - Set the numa_topology in the instance after the cold migration/resize
   is finished to the dest_numa_topology from the migration object - 
   done in finish_resize RPC on the destination compute to match where 
   the rest of the resources for the instance are updated (there is a 
   call to _set_instance_info here that sets the memory, vcpus, disk 
   space, etc... for the migrated instance).
 - Set the numa_topology in the instance if the cold migration/resize is 
   reverted to the source_numa_topology from the migration object - 
   done in finish_revert_resize RPC on the source compute.
 
 I would appreciate any comments on my approach. I plan to start
 submitting the code for this against bug 1417667 - I will split it
 into several chunks to make it easier to review.
 

All of the above sounds relatively reasonable overall.

I'd like to hear from Jay, Sylvain and other scheduler devs on how they
see this impacting some of the planned blueprints like the RequestSpec
one [1]

Also note that this will require completely fixing the NUMA filter as
well; I've proposed a way to do it here [2].

N.

[1] https://blueprints.launchpad.net/nova/+spec/request-spec-object
[2] https://review.openstack.org/160484

 Fixing live migration was significantly more effort - I'll start a
 different thread on that once I have feedback on the above approach.
 
 Thanks,
 
 Bart Wensley, Member of Technical Staff, Wind River
 
 
 




Re: [openstack-dev] [TC][Magnum][Containers] Magnum - OpenStack Containers Service

2015-03-04 Thread Adrian Otto
+TC topic to subject line in accordance with astute suggestions.

On Mar 4, 2015, at 8:53 AM, Adrian Otto adrian.o...@rackspace.com wrote:

 OpenStack Devs,
 
 There is an item [1] on the 2015-03-10 TC meeting agenda [2] that I would 
 like to highlight so discussion can commence prior to voting. Magnum’s 
 contributors have indicated our intent to join the OpenStack namespace in 
 accordance with our new governance requirements [3]. I will attend the 
 upcoming TC meeting, and will be available to answer any questions leading up 
 to the meeting by email or on IRC with our team [4].
 
 Thanks,
 
 Adrian Otto
 
 [1] https://review.openstack.org/161080
 [2] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
 [3] 
 https://git.openstack.org/cgit/openstack/governance/tree/reference/new-projects-requirements.rst
 [4] I am adrian_otto in #openstack-containers on Freenode.




[openstack-dev] [Congress] murano-driver review

2015-03-04 Thread Tran, Steven
Hi core reviewers,
   I have a review for a murano-driver improvement that's waiting for approval 
from core reviewers.  Please take a look at your convenience.  I just want to 
send a reminder so it won't slip your attention.
https://review.openstack.org/#/c/158924/

Thanks,
-Steven


Re: [openstack-dev] [openstackclient] doodle for meeting time selection

2015-03-04 Thread Steve Martinelli
+1, let's do this!

Lin Hua Cheng os.lch...@gmail.com wrote on 03/04/2015 12:13:22 PM:

 From: Lin Hua Cheng os.lch...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/04/2015 12:21 PM
 Subject: Re: [openstack-dev] [openstackclient] doodle for meeting
 time selection
 
 +1
 
 Thanks Doug for setting up the Doodle.
 
 On Wed, Mar 4, 2015 at 8:55 AM, Ian Cordasco ian.corda...@rackspace.com
 wrote:
 +1
 
 On 3/4/15, 08:01, Doug Hellmann d...@doughellmann.com wrote:
 
 On Tue, Mar 3, 2015, at 06:17 PM, Dean Troyer wrote:
  On Thu, Feb 26, 2015 at 3:32 PM, Doug Hellmann d...@doughellmann.com
  wrote:
 
   As we discussed in the meeting today, I’ve created a Doodle to
   coordinate a good day and time for future meetings. I picked a bunch
   of options based on when it looked like there were IRC rooms obviously
   available. If none of these options suit us, I can dig harder to find
   other open times.
  
   http://doodle.com/4uy5w2ehn8y2eayh
 
  Thanks Doug.
 
  At this point two times are at the top of the list.  Since one of them
  is one hour later than the time we originally proposed and the other is
  the same time on Monday, I propose we declare our first choice to move
  the meeting one hour later to 19:00 UTC on Thursdays in
  #openstack-meeting.
 
 +1
 
  dt
 
  --
  Dean Troyer
  dtro...@gmail.com


Re: [openstack-dev] [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode

2015-03-04 Thread Daniel P. Berrange
On Wed, Mar 04, 2015 at 05:24:53PM +, Jiang, Yunhong wrote:
 Daniel, thanks for your clarification.
 
 Another related question is, what will the guest's real cpu model be
 if the cpu_model is None? This is about a reported regression at

The guest CPU will be unspecified - it will be some arbitrary
hypervisor decided default which nova cannot know.

 https://bugs.launchpad.net/nova/+bug/1082414 . When the
 instance.vcpu_model.mode is None, we should compare the source/target
 cpu models, as per the suggestion from Tony, am I right?

If the CPU model is none, the best we can do is compare the *host* CPUs of
the two hosts to make sure the destination doesn't lose any features, as
we have no way of knowing what features the guest is relying on.
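
For illustration, one way to do that host-to-host comparison by hand with
virsh (the xmllint extraction and the exact output line are approximations,
not commands verified in this thread):

    # on the source host: capture the host CPU description
    $ virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > src-cpu.xml

    # on the destination host: check it against the local host CPU
    $ virsh cpu-compare src-cpu.xml
    Host CPU is a superset of CPU described in src-cpu.xml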

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [openstackclient] doodle for meeting time selection

2015-03-04 Thread Ian Cordasco
+1

On 3/4/15, 08:01, Doug Hellmann d...@doughellmann.com wrote:



On Tue, Mar 3, 2015, at 06:17 PM, Dean Troyer wrote:
 On Thu, Feb 26, 2015 at 3:32 PM, Doug Hellmann d...@doughellmann.com
 wrote:
 
   As we discussed in the meeting today, I’ve created a Doodle to
   coordinate a good day and time for future meetings. I picked a bunch
   of options based on when it looked like there were IRC rooms obviously
   available. If none of these options suit us, I can dig harder to find
   other open times.
 
  http://doodle.com/4uy5w2ehn8y2eayh
 
 
 Thanks Doug.
 
 At this point two times are at the top of the list.  Since one of them is
 one hour later than the time we originally proposed and the other is the
 same time on Monday, I propose we declare our first choice to move the
 meeting one hour later to 19:00 UTC on Thursdays in #openstack-meeting.

+1

 
 dt
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 




Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Gregory Farnum
On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk ch...@redhat.com wrote:


 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: Csaba Henk ch...@redhat.com, OpenStack Development Mailing List 
 (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: ceph-de...@vger.kernel.org
 Sent: Wednesday, March 4, 2015 3:26:52 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 Am 04.03.2015 um 15:12 schrieb Csaba Henk:
  - Original Message -
  From: Danny Al-Gaaf danny.al-g...@bisect.de To: OpenStack
  Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
  Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
  [openstack-dev] [Manila] Ceph native driver for manila
  ...
  For us security is very critical, as the performance is too. The
  first solution via ganesha is not what we prefer (to use CephFS
  via p9 and NFS would not perform that well I guess). The second
  solution, to use
 
  Can you please explain that why does the Ganesha based stack
  involve 9p? (Maybe I miss something basic, but I don't know.)

 Sorry, it seems that I mixed it up with the p9 case. But the performance
 may still be an issue if you use NFS on top of CephFS (incl. all the
 VM layers involved in this setup).

 For me the question with all these NFS setups is: why should I use NFS
 on top of CephFS? What justifies CephFS's existence in this case? I
 would like to use CephFS directly or via filesystem passthrough.

 That's a good question. Or indeed, two questions:

 1. Why to use NFS?
 2. Why does the NFS export of Ceph need to involve CephFS?

 1.

 As for why NFS -- it's probably a good selling point that it's a
 standard filesystem export technology and the tenants can remain
 backend-unaware as long as the backend provides NFS export.

 We are working on the Ganesha library --

 https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha

 with the aim of making it easy to create Ganesha-based drivers. If you
 already have an FSAL, you can get an NFS-exporting driver almost for free
 (with a modest amount of glue code). So you could consider making such a
 driver for Ceph, to satisfy customers who demand NFS access, even if there
 is a native driver which gets the limelight.

 (See commits implementing this under Work Items of the BP -- one is the
 actual Ganesha library and the other two show how it can be hooked in, by the
 example of the Gluster driver. At the moment flat network (share-server-less)
 drivers are supported.)

 2.

 As for why CephFS was the technology chosen for implementing the Ceph FSAL
 for Ganesha, that's something I'd also like to know. I have the following
 naive question in mind: Would it not have been better to implement the Ceph
 FSAL with something »closer to« Ceph?, and I have three actual questions
 about it:

 - does this question make sense in this form, and if not, how to amend?
 - I'm asking the question itself, or the amended version of it.
 - If the answer is yes, is there a chance someone would create an alternative
   Ceph FSAL on that assumed closer-to-Ceph technology?

I don't understand. What closer-to-Ceph technology do you want than
native use of the libcephfs library? Are you saying to use raw RADOS
to provide storage instead of CephFS?

In that case, it doesn't make a lot of sense: CephFS is how you
provide a real filesystem in the Ceph ecosystem. I suppose if you
wanted to create a lighter-weight pseudo-filesystem you could do so
(somebody is building a RadosFS, I think from CERN?) but then it's
not interoperable with other stuff.
-Greg



Re: [openstack-dev] [devstack]Specific Juno version

2015-03-04 Thread Stefano Maffulli
On Tue, 2015-03-03 at 17:42 +0200, Eduard Matei wrote:

 Is there a way to specify the Juno version to be installed using
 devstack?

Let's please reserve this list for discussions about the *future* of
OpenStack development, not questions about using its tools. 

Please, everybody, help us stay on topic.

 This email and any files transmitted with it are confidential and
 intended solely for the use of the individual or entity to whom they
 are addressed.

No, they're not: you sent this message to a public mailing list,
archived publicly with no barriers for any archiver, spider, spammer and
casual human reader. The email you just sent is intended for public use
and consumption. 

/stef




[openstack-dev] [Magnum][Containers] Magnum - OpenStack Containers Service

2015-03-04 Thread Adrian Otto
OpenStack Devs,

There is an item [1] on the 2015-03-10 TC meeting agenda [2] that I would like 
to highlight so discussion can commence prior to voting. Magnum’s contributors 
have indicated our intent to join the OpenStack namespace in accordance with 
our new governance requirements [3]. I will attend the upcoming TC meeting, and 
will be available to answer any questions leading up to the meeting by email or 
on IRC with our team [4].

Thanks,

Adrian Otto

[1] https://review.openstack.org/161080
[2] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
[3] 
https://git.openstack.org/cgit/openstack/governance/tree/reference/new-projects-requirements.rst
[4] I am adrian_otto in #openstack-containers on Freenode.


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-04 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2015-03-04 02:19:48 -0800:
 James Bottomley wrote:
  On Tue, 2015-03-03 at 11:59 +0100, Thierry Carrez wrote:
  James Bottomley wrote:
   Actually, this is possible: look at Linux, it freezes for 10 weeks of a
   12 week release cycle (or 6 weeks of an 8 week one).  More on this
  below.
 
  I'd be careful with comparisons with the Linux kernel. First it's a
  single bit of software, not a collection of interconnected projects.
  
   Well, we do have interconnection: the kernel on its own doesn't do
  anything without a userspace.  The theory was that we didn't have to be
  like BSD (coupled user space and kernel) and we could rely on others
  (principally the GNU project in the early days) to provide the userspace
  and that we could decouple kernel development from the userspace
  releases.  Threading models were, I think, the biggest challenges to
  this assumption, but we survived.
 
 Right. My point there is that you only release one thing. We release a
 lot more pieces. There is (was?) downstream value in coordinating those
 releases, which is a factor in our ability to do it more often than
 twice a year.
 

I think the value of coordinated releases has been agreed upon for a
long time. This thread is more about the cost, don't you think?

  Second it's at a very different evolution/maturity point (20 years old
  vs. 0-4 years old for OpenStack projects).
  
  Yes, but I thought I covered this in the email: you can see that at the
  4 year point in its lifecycle, the kernel was behaving very differently
  (and in fact more similar to OpenStack).  The question I thought was
  still valid is whether anything was learnable from the way the kernel
  evolved later.  I think the key issue, which you seem to have in
  OpenStack is that the separate develop/stabilise phases caused
  frustration to build up in our system which (nine years later) led the
  kernel to adopt the main branch stabilisation with overlapping subsystem
  development cycle.
 
 I agree with you: the evolution the kernel went through is almost a
 natural law, and I know we won't stay in the current model forever. I'm
 just not sure we have reached the level of general stability that makes
 it possible to change *just now*. I welcome brainstorming and discussion
 on future evolutions, though, and intend to lead a cross-project session
 discussion on that in Vancouver.
 

I don't believe that the kernel reached maturity as a point of
eventuality. Just like humans aren't going to jump across the Grand
Canyon no matter how strong they get, it will take a concerted effort
that may put other goals on hold to build a bridge. With the kernel
there was a clear moment where leadership had tried a few things and
then just had to make it clear that all the code goes in one place, but
instability would not be tolerated. They crossed that chasm, and while
there have been chaotic branches and ruffled feathers, once everybody
got over the paradox, it's been business as usual since then with the
model James describes.

I think the less mature a project is, the wider that chasm is, but I
don't think it's ever going to be an easy thing to do. Since we don't
have a dictator to force us to cross the chasm, we should really think
about planning for the crossing ASAP.

   Finally it sits at a
  different layer, so there is less need for documentation/translations to
  be shipped with the software release.
  
  It's certainly a lot less than you, but we have the entire system call
  man pages.  It's an official project of the kernel:
  
  https://www.kernel.org/doc/man-pages/
  
  And we maintain translations for it
  
  https://www.kernel.org/doc/man-pages/translations.html
 
 By translations I meant strings in the software itself, not doc
 translations. We don't translate docs upstream either :) I guess we
 could drop those (and/or downstream them in a way) if that was the last
 thing holding up adding more agility.
 
 So in summary, yes we can (and do) learn from kernel history, but those
 projects are sufficiently different that the precise timeframes and
 numbers can't really be compared. Apples and oranges are both fruits
 which mature (and rot if left unchecked), but they evolve at different
 speeds :)
 

I'm not super excited about being an apple or an orange, since neither
are sentient and thus cannot collaborate on a better existence than
rotting.



Re: [openstack-dev] [neutron][vpnaas] devstack vpnaas and l3 router

2015-03-04 Thread Sridhar Ramaswamy
Hi Andreas,

vpnaas (i.e. its agent) is a superset of l3-router and includes the
functionality of the latter. Hence you just need to configure either one of
them: q-l3 if you don't need vpnaas, and q-vpn if you need both vpnaas and
l3.

Hope this helps.
- Sridhar
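
For example, a minimal local.conf fragment under that assumption (service
names as in stock devstack; treat the exact list as a sketch to adapt to
your topology):

    [[local|localrc]]
    # q-vpn's agent includes the l3 agent's functionality,
    # so q-l3 is deliberately not enabled alongside it
    enable_service neutron q-svc q-agt q-dhcp q-meta q-vpn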


On Wed, Mar 4, 2015 at 9:18 AM, Andreas Scheuring 
scheu...@linux.vnet.ibm.com wrote:

 Hello everybody,
 is there a reason why devstack is only able to either deploy vpnaas or
 l3-router?

  Nevertheless both services are enabled in the local.conf on the
  HowToInstall page [1]. But the code says it's an either-or [2] (line
  725).


 if is_service_enabled q-vpn; then
run_process q-vpn $AGENT_VPN_BINARY $(determine_config_files
 neutron-vpn-agent)
 else
run_process q-l3 python $AGENT_L3_BINARY $(determine_config_files
 neutron-l3-agent)
 fi


 What's the reason for having both configured, but only one run?


 [1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
 [2] https://github.com/openstack-dev/devstack/blob/master/lib/neutron


 --
 Andreas
 (irc: scheuran)







[openstack-dev] [neutron][vpnaas] devstack vpnaas and l3 router

2015-03-04 Thread Andreas Scheuring
Hello everybody, 
is there a reason why devstack is only able to either deploy vpnaas or
l3-router? 

Nevertheless both services are enabled in the local.conf on the
HowToInstall page [1]. But the code says it's an either-or [2] (line
725).


if is_service_enabled q-vpn; then
   run_process q-vpn $AGENT_VPN_BINARY $(determine_config_files
neutron-vpn-agent)
else
   run_process q-l3 python $AGENT_L3_BINARY $(determine_config_files
neutron-l3-agent)
fi


What's the reason for having both configured, but only one run?


[1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
[2] https://github.com/openstack-dev/devstack/blob/master/lib/neutron


-- 
Andreas 
(irc: scheuran)






Re: [openstack-dev] [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode

2015-03-04 Thread Jiang, Yunhong
Daniel, thanks for your clarification.

Another related question is, what will the guest's real cpu model be if the 
cpu_model is None? This is about a reported regression at 
https://bugs.launchpad.net/nova/+bug/1082414 . When the 
instance.vcpu_model.mode is None, we should compare the source/target cpu 
models, as per the suggestion from Tony, am I right?

Thanks
--jyh

 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: Wednesday, March 4, 2015 6:56 AM
 To: Jiang, Yunhong
 Cc: openstack-dev@lists.openstack.org; Xu, Hejie
 Subject: Re: [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode
 
  On Wed, Mar 04, 2015 at 02:52:06PM +0000, Jiang, Yunhong wrote:
   Hi, Daniel
   I'm a bit confused about the None/'none' for CONF.libvirt.cpu_mode.
  Per my understanding, None means no configuration was provided and
  libvirt will select the default value based on the virt_type, while 'none'
  means no cpu_mode information should be provided for the guest. Am I right?
  
   In _get_guest_cpu_model_config() in virt/libvirt/driver.py,
   if mode is 'none', the kvm/qemu virt_type will return a
   vconfig.LibvirtConfigGuestCPU() while other virt types will return None.
   What's the reason for this difference in return values?
 
  The LibvirtConfigGuestCPU object is used for more than just configuring
  the CPU model. It is also used for expressing CPU topology (sockets, cores,
  threads) and NUMA topology. So even if the cpu model is None, we still need
  that object in the kvm case.
 
 Regards,
 Daniel
 --
  |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
  |: http://libvirt.org -o- http://virt-manager.org :|
  |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
  |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] [cinder]Request to to revisit this patch

2015-03-04 Thread Walter A. Boring IV
Since the Nova side isn't in, and won't land for Kilo, there is no 
reason for Cinder to have it for Kilo, as it will simply not work.


We can revisit this for the L release if you like.

Also, make sure you have 3rd-party CI set up for this driver, or it won't 
be accepted in the L release either.


$0.02
Walt


Hi Mike, Jay, and Walter,

Please revisit this patch https://review.openstack.org/#/c/151959/ and 
don’t revert it, thank you very much!


I think it’s appropriate to merge the SDSHypervisor driver in cinder 
first, and then to request that nova add a new libvirt volume driver.


Meanwhile the nova side always asks whether the driver is merged into 
Cinder; please see my comments in the nova spec 
https://review.openstack.org/#/c/130919/, thank you very much!


Best regards

ZhangNi







Re: [openstack-dev] [Neutron] Issue when upgrading from Juno to Kilo due to agent report_state RPC namespace patch

2015-03-04 Thread Salvatore Orlando
To put it another way, I think we might say that change 154670 broke
backward compatibility on the RPC interface.
To be fair, this probably happened because the RPC interfaces were organised
in a way such that this kind of breakage was unavoidable.

I think the strategy proposed by Assaf is a viable one. The point about
being able to do rolling upgrades only from version N to N+1 is a sensible
one, but it has more to do with general backward compatibility rules for RPC
interfaces.

In the meanwhile this is breaking a typical upgrade scenario. If a fix
allowing agent state updates both namespaced and not is available today or
tomorrow, that's fine. Otherwise I'd revert just to be safe.

By the way, we were supposed to have already removed all server rpc
callbacks in the appropriate package... did we forget about this one or is
there a reason it's still in neutron.db?

Salvatore

On 4 March 2015 at 17:23, Miguel Ángel Ajo majop...@redhat.com wrote:

  I agree with Assaf, this is an issue across updates, and
 we may want (if that’s technically possible) to provide
 access to those functions with/without namespace.

 Or otherwise think about reverting for now until we find a
 migration strategy


 https://review.openstack.org/#/q/status:merged+project:openstack/neutron+branch:master+topic:bp/rpc-docs-and-namespaces,n,z


 Best regards,
 Miguel Ángel Ajo

  On Wednesday, 4 March 2015 at 17:00, Assaf Muller wrote:

 Hello everyone,

 I'd like to highlight an issue with:
 https://review.openstack.org/#/c/154670/

 According to my understanding, most deployments upgrade the controllers
 first
 and compute/network nodes later. During that time period, all agents will
 fail to report state as they're sending the report_state message outside
 of any namespace while the server is expecting that message in a namespace.
 This is a show stopper as the Neutron server will think all of its agents
 are dead.

 I think the obvious solution is to modify the Neutron server code so that
 it accepts the report_state method both in and outside of the report_state
 RPC namespace, and chuck that code away in L (assuming we support rolling
 upgrades only from version N to N+1, which, while unfortunate, is the
 behavior I've seen in multiple places in the code).

 Finally, are there additional similar issues for other RPC methods placed
 in a namespace this cycle?


 Assaf Muller, Cloud Networking Engineer
 Red Hat








Re: [openstack-dev] [openstackclient] doodle for meeting time selection

2015-03-04 Thread Lin Hua Cheng
+1

Thanks Doug for setting up the Doodle.

On Wed, Mar 4, 2015 at 8:55 AM, Ian Cordasco ian.corda...@rackspace.com
wrote:

 +1

 On 3/4/15, 08:01, Doug Hellmann d...@doughellmann.com wrote:

 
 
 On Tue, Mar 3, 2015, at 06:17 PM, Dean Troyer wrote:
  On Thu, Feb 26, 2015 at 3:32 PM, Doug Hellmann d...@doughellmann.com
  wrote:
 
   As we discussed in the meeting today, I’ve created a Doodle to
 coordinate
   a good day and time for future meetings. I picked a bunch of options
 based
   on when it looked like there were IRC rooms obviously available. If
 none of
   these options suit us, I can dig harder to find other open times.
  
   http://doodle.com/4uy5w2ehn8y2eayh
 
 
  Thanks Doug.
 
  At this point two times are at the top of the list.  Since one of them
 is
  one hour later than the time we originally proposed and the other is the
  same time on Monday, I propose we declare our first choice to move the
  meeting one hour later to 19:00 UTC on Thursdays in #openstack-meeting.
 
 +1
 
 
  dt
 
  --
 
  Dean Troyer
  dtro...@gmail.com
 




Re: [openstack-dev] [Glance] Core nominations.

2015-03-04 Thread Bhandaru, Malini K
Flavio, I concur: for a lively committee we need active core reviewers. Core 
status is an honor and a responsibility.
I agree it’s a good idea to replace inactive cores; no offense, but the 
priorities and focus of developers change, and should they want to return, 
they can be fast-pathed then.
Regards
Malini

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Wednesday, March 04, 2015 4:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.

On 03/03/15 16:10 +, Nikhil Komawar wrote:
If it was not clear in my previous message, I would like to again 
emphasize that I truly appreciate the vigor and intent behind Flavio's 
proposal. We need to be proactive and keep making the community better in such 
regards.


However, at the same time we need to act fairly, with patience and have 
a friendly strategy for doing the same (thus maintaining a good balance 
in our progress). I should probably respond to another thread on ML 
mentioning my opinion that the community's success depends on trust 
and empathy and everyone's intent as well as effort in maintaining 
these principles. Without them, it will not take very long to make the 
situation chaotic.

I'm sorry but no. I don't think there's anything that requires more patience 
than 2 (or even more) cycles without providing reviews or any kind of 
active contribution.

I personally don't think adding new cores without cleaning up that list is 
something healthy for our community, which is what we're trying to improve 
here. Therefore I'm still -2-W on adding new folks without removing non-active 
core members.

The questions I posed are still unanswered:

There are a few members who have been relatively inactive this cycle in 
terms of reviews and have been missed in Flavio's list (that list is 
not comprehensive). On what basis have some of them been left out, and 
if we do not have a strong reason, are we being fair? Again, I would like 
to emphasize that cleaning the list in such proportions at this 
point in time does NOT look like an OK strategy to me.

The list contains the names of people who have not provided *any* kind of review 
in the last 2 cycles. If there are folks in that list that you think shouldn't 
be there, please bring them up now. If there are folks you think *should* be 
in that list, please bring them up now.

There's nothing impolite in what's being discussed here. The proposal is based 
on the fact that these folks seem to be focused on different things now, and 
that's perfectly fine.

As I mentioned in my first email, we're not questioning their knowledge but 
their focus and they are more than welcome to join again.

I do not think *counting* the stats of everyone makes sense here; we're not 
competing on who reviews more patches. That's nonsense.
We're just trying to keep the list of folks who will have the power to approve 
patches short.

To answer your concerns: (Why was this not proposed earlier in the 
cycle?)

[snip]

The essence of the matter is:

We need to change the dynamics slowly and with patience to maintain 
a good balance.

As I mentioned above, I don't think we're being impatient. As a matter of fact, 
some of these folks haven't been around in *years* so, pardon my stubbornness, 
but I believe we have been way too patient and I'd have loved for these folks 
to step down themselves.

I infinitely thank these folks for their past work and efforts (and hopefully 
future work too) but I think it's time for us to have a clearer view of who's 
working on the project.

As a last note, it's really important to have the list of members updated; some 
folks rely on it to know who the contacts are for each project.

Flavio

Best,
-Nikhil
________________________________

From: Kuvaja, Erno kuv...@hp.com
Sent: Tuesday, March 3, 2015 9:48 AM
To: OpenStack Development Mailing List (not for usage questions); Daniel P.
Berrange
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.
 

Nikhil,

 

If I recall correctly this matter was discussed last time at the start 
of the L-cycle, and at that time we agreed to see if there was a change of 
pattern later in the cycle. There has not been one, and I do not see a 
reason to postpone this again, just for the courtesy of it in the hope that 
some of our older cores happen to make a review or two.

 

I think Flavio’s proposal combined with the new members would be the 
right way to reinforce the momentum we’ve gained in Glance over the past few 
months. I think it’s also the right message to send out to the new 
cores (including you and myself ;) ) that activity is the key to maintaining 
such status.

 

-  Erno

 

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 03 March 2015 04:47
To: Daniel P. Berrange; OpenStack Development Mailing List (not for 
usage
questions)
Cc: krag...@gmail.com
Subject: 

[openstack-dev] [all] oslo.config 1.9.0 release

2015-03-04 Thread Doug Hellmann
We are pumped to announce the release of:

oslo.config 1.9.0: Oslo Configuration API

For more details, please see the git log history below and:

http://launchpad.net/oslo.config/+milestone/1.9.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

Changes in /home/dhellmann/repos/openstack/oslo.config 1.8.0..1.9.0
---

dacc035 Add ability to deprecate opts for removal
8b4d75b Typo in StrOpt docstring: Integer to String
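
The headline change here is dacc035; in use, the new deprecation flag looks
roughly like this (the option name below is made up for illustration):

    from oslo_config import cfg

    opts = [
        cfg.StrOpt('legacy_endpoint',
                   deprecated_for_removal=True,
                   help='Scheduled for removal; oslo.config logs a '
                        'warning when a deployment still sets it.'),
    ]
    cfg.CONF.register_opts(opts)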

Diffstat (except docs and test files)
-

oslo_config/cfg.py| 20 +---
2 files changed, 58 insertions(+), 3 deletions(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] checking on odds that spec will be approved for K

2015-03-04 Thread Chris K
Hello Ceilometer,

I would like to check on the chances that spec
https://review.openstack.org/#/c/130359 will land in the K cycle. We in
Ironic have specs that are dependent on the above spec and are checking to
see if we need to bump them to L. Any information regarding this would be
helpful.

Thank you in advance
Chris Krelle
-NobodyCam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [API] #openstack-api created

2015-03-04 Thread Ian Cordasco
At (or maybe after) the last API-WG meeting, the idea to create an API-WG
specific channel came up again and there was consensus that it would be
beneficial to have. I registered the channel and have submitted the
appropriate changes to system-config and project-config to enable logging
of the channel and notifications from openstack/api-wg to be displayed.

To join the channel, merely log onto irc.freenode.net and /join
#openstack-api.

The reviews are linked below:

https://review.openstack.org/#/c/161330/ will enable logging

https://review.openstack.org/#/c/161337/ will enable notifications

Cheers,
Ian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] request to backport the fix for bug 1333852 to juno

2015-03-04 Thread Amrith Kumar
There has been a request to backport to the Juno release the fix for bug 1333852 
(https://bugs.launchpad.net/trove/+bug/1333852), which was fixed in Kilo.

The change includes a database change and a small change to the Trove API. The 
change also requires a change to the trove client and the trove controller code 
(trove-api). It is arguable whether this is a backport or a new feature; I'm 
inclined to think it is more of an extension of an existing feature than a new 
feature.

As such, I don't believe that this change should be considered a good candidate 
for backport to Juno but I'm going to see whether there is sufficient interest 
in this, to consider this change to be an exception.

Thanks,

-amrith

--

Amrith Kumar, CTO Tesora (www.tesora.com)

Twitter: @amrithkumar
IRC: amrith @freenode


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Kyle Mestery
I'd like to propose that we add Ihar Hrachyshka to the Neutron core
reviewer team. Ihar has been doing a great job reviewing in Neutron as
evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, he's been
doing a great job keeping Neutron current there. He's already a critical
reviewer for all the Neutron repositories. In addition, he's a stable
maintainer. Ihar makes himself available in IRC, and has done a great job
working with the entire Neutron team. His reviews are thoughtful and he
really takes time to work with code submitters to ensure his feedback is
addressed.

I'd also like to again remind everyone that reviewing code is a
responsibility, in Neutron the same as other projects. And core reviewers
are especially beholden to this responsibility. I'd also like to point out
and reinforce that +1/-1 reviews are super useful, and I encourage everyone
to continue reviewing code across Neutron as well as the other OpenStack
projects, regardless of your status as a core reviewer on these projects.

Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to
the core reviewer team.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/90
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Output on stderr

2015-03-04 Thread Lin Hua Cheng
Here's the link to the code review: https://review.openstack.org/#/c/147399/

On Wed, Mar 4, 2015 at 7:17 AM, Dolph Mathews dolph.math...@gmail.com
wrote:



 On Wednesday, March 4, 2015, David Stanek dsta...@dstanek.com wrote:


 On Wed, Mar 4, 2015 at 6:50 AM, Abhishek Talwar/HYD/TCS 
 abhishek.tal...@tcs.com wrote:

 While working on a bug for keystoneclient I have replaced sys.exit with
 return. However, the code reviewers want the output to be on
 stderr (as sys.exit does). So how can we get the output on stderr?


 The print function allows you to specify a file:

  from __future__ import print_function
  import sys
  print('something to', file=sys.stderr)

 The __future__ import is needed for Python 2.6/2.7 because print was
 changed to a function in Python 3.
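
 A hedged sketch of the full pattern the reviewers asked for -- report the
 error on stderr and hand a status code back for the caller to exit with
 (function and message names are made up, not the actual keystoneclient
 change):

     from __future__ import print_function

     import sys

     def check_tenant(tenant_id):
         # report on stderr and return, instead of calling sys.exit()
         # deep inside the library code
         if not tenant_id:
             print('Unable to find tenant', file=sys.stderr)
             return 1
         return 0

     if __name__ == '__main__':
         sys.exit(check_tenant(sys.argv[1] if len(sys.argv) > 1 else None))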


 I hope that answers your question. Can we have a link to the bug and/or
 code review?




 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Carl Baldwin
+1
On Mar 4, 2015 12:44 PM, Kyle Mestery mest...@mestery.com wrote:

 I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.

 I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.

 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to
 the core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Salvatore Orlando
Ihar has proved in several circumstances that he knows the neutron source trees
way better than me; his reviews are more frequent, thorough, and useful
than those of the average core team member. Summarising, in my opinion it
is nonsense that I am part of this team and he's not.

So I am obviously happy to have him in the team.

Salvatore

On 4 March 2015 at 20:42, Kyle Mestery mest...@mestery.com wrote:

 I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.

 I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.

 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to
 the core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [Third-party-announce] Cinder Merged patch broke HDS driver

2015-03-04 Thread Marcus Vinícius Ramires do Nascimento
Hi folks,

This weekend, the patch "Snapshot and volume objects" (
https://review.openstack.org/#/c/133566) was merged, and it broke our
HDS HBSD driver and the respective CI.

When the CI tries to run tempest.api.volume.admin.test_snapshots_actions, the
following error is shown:

2015-03-04 14:00:34.368 ERROR oslo_messaging.rpc.dispatcher
[req-c941792b-963f-4a7d-a6ac-9f1d9f823fd1 915289d113dd4f9db2f2a792c18b3564
984bc8d228c8497689dde60dc2b8f300] Exception during message
handling: <class 'cinder.objects.snapshot.Snapshot'> object has no
attribute 'snapshot_metadata'
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher Traceback (most
recent call last):
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
line 142, in _dispatch_and_reply
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
executor_callback))
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
line 186, in _dispatch
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
executor_callback)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py,
line 130, in _do_dispatch
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher result =
func(ctxt, **new_args)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105,
in wrapper
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
f(*args, **kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/manager.py, line 156, in lso_inner1
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
lso_inner2(inst, context, snapshot, **kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py,
line 431, in inner
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
f(*args, **kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/manager.py, line 155, in lso_inner2
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
f(*_args, **_kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/manager.py, line 635, in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
snapshot.save()
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 82,
in __exit__
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
six.reraise(self.type_, self.value, self.tb)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/manager.py, line 625, in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
self.driver.delete_snapshot(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py, line 105,
in wrapper
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
f(*args, **kwargs)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_iscsi.py, line 314,
in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
self.common.delete_snapshot(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py, line 635,
in delete_snapshot
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher is_vvol =
self.get_snapshot_is_vvol(snapshot)
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py, line 189,
in get_snapshot_is_vvol
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
self.get_is_vvol(snapshot, 'snapshot_metadata')
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py, line 183,
in get_is_vvol
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher return
self.get_value(obj, name, 'type') == 'V-VOL'
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/volume/drivers/hitachi/hbsd_common.py, line 176,
in get_value
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher if
obj.get(name):
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher   File
/opt/stack/cinder/cinder/objects/base.py, line 615, in get
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher
self.__class__, key))
2015-03-04 14:00:34.368 TRACE oslo_messaging.rpc.dispatcher AttributeError:
<class 'cinder.objects.snapshot.Snapshot'> object has 
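
For driver authors hitting the same break, a defensive accessor that
tolerates both the legacy dict-style snapshots and the new object model
might look like the following; this is a hypothetical sketch mirroring the
get_value helper named in the traceback, not the actual HBSD fix:

    def get_value(self, obj, name, key):
        # legacy path: metadata arrives as a list of {'key': ..., 'value': ...}
        # rows under a name such as 'snapshot_metadata'
        if isinstance(obj, dict):
            for row in obj.get(name) or []:
                if row['key'] == key:
                    return row['value']
            return None
        # object path: cinder.objects expose a plain 'metadata' dict instead
        return (getattr(obj, 'metadata', None) or {}).get(key)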

Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Kevin Benton
+1
On Mar 4, 2015 12:25 PM, Maru Newby ma...@redhat.com wrote:

 +1 from me, Ihar has been doing great work and it will be great to have
 him finally able to merge!

  On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:
 
  I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.
 
  I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.
 
  Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar
 to the core reviewer team.
 
  Thanks!
  Kyle
 
  [1] http://stackalytics.com/report/contribution/neutron-group/90
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Testing NUMA, CPU pinning and large pages

2015-03-04 Thread Steve Gordon
- Original Message -
 From: Adrian Hoban adrian.ho...@intel.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
  -Original Message-
  From: Steve Gordon [mailto:sgor...@redhat.com]
  Sent: Wednesday, February 11, 2015 8:49 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Znoinski, Waldemar
  Subject: Re: [openstack-dev] Testing NUMA, CPU pinning and large
  pages
  
  - Original Message -
   From: Adrian Hoban adrian.ho...@intel.com
  
   Hi Folks,
  
   I just wanted to share some details on the Intel CI testing
   strategy for NFV.
  
   You will see two Intel CIs commenting:
   #1: Intel-PCI-CI
   - Yongli He and Shane Wang are leading this effort for us.
   - The focus in this environment is on PCIe and SR-IOV specific
   testing.
   - Commenting back to review.openstack.org has started.
  
  With regards to SR-IOV / PCI specifically it seemed based on
  https://review.openstack.org/#/c/139000/ and
  https://review.openstack.org/#/c/141270/ that there was still some
  confusion as to where the tests should actually live (and I expect
  the same is
  true for the NUMA, Large Pages, etc. tests). Is this resolved or
  are there still
  open questions?
  
  Thanks,
  
  Steve
  
 
 Hi Steve,
 
 The PCIe test code is being put on github at:
 https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases
 We would readily welcome some feedback if folks think it should go
 elsewhere, but for now the tests are publically available and we can
 continue to make progress.
 
 Regards,
 Adrian

Thanks Adrian, Vladik had created a branch of functional tests here:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:numa_functional_tests,n,z

...but looks like we need some more iteration.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Mike Bayer


Attila Fazekas afaze...@redhat.com wrote:

 Hi,
 
 I wonder what is the planned future of the scheduling.
 
 The scheduler does a lot of high-field-count queries,
 which are CPU expensive when you are using sqlalchemy-orm.
 Has anyone tried to switch those operations to sqlalchemy-core ?

An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
overhead from the query side of SQLAlchemy ORM by caching all the work done
up until the SQL is emitted, including all the function overhead of building
up the Query object, producing a core select() object internally from the
Query, working out a large part of the object fetch strategies, and finally
the string compilation of the select() into a string as well as organizing
the typing information for result columns. With a query that is constructed
using the “Baked” feature, all of these steps are cached in memory and held
persistently; the same query can then be re-used at which point all of these
steps are skipped. The system produces the cache key based on the in-place
construction of the Query using lambdas so no major changes to code
structure are needed; just the way the Query modifications are performed
needs to be preceded with “lambda q:”, essentially.
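
Roughly, based on the description above, the baked extension is used like
this (the feature was unreleased at the time, so exact names may shift; the
sketch assumes SQLAlchemy 1.0's sqlalchemy.ext.baked):

    from sqlalchemy import Column, Integer, String, bindparam, create_engine
    from sqlalchemy.ext import baked
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    bakery = baked.bakery()

    def lookup_user(session, user_id):
        # each lambda's work is performed once and cached; later calls
        # skip straight to the cached compiled statement
        baked_query = bakery(lambda s: s.query(User))
        baked_query += lambda q: q.filter(User.id == bindparam('id'))
        return baked_query(session).params(id=user_id).first()

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = Session(engine)
    session.add(User(id=1, name='spam'))
    session.commit()
    print(lookup_user(session, 1).name)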

With this approach, the traditional session.query(Model) approach can go
from start to SQL being emitted with an order of magnitude less function
calls. On the fetch side, fetching individual columns instead of full
entities has always been an option with ORM and is about the same speed as a
Core fetch of rows. So using ORM with minimal changes to existing ORM code
you can get performance even better than you’d get using Core directly,
since caching of the string compilation is also added.

On the persist side, the new bulk insert / update features provide a bridge
from ORM-mapped objects to bulk inserts/updates without any unit of work
sorting going on. ORM mapped objects are still more expensive to use in that
instantiation and state change is still more expensive, but bulk
insert/update accepts dictionaries as well, which again is competitive with
a straight Core insert.
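
A similarly hedged sketch of the dictionary-based bulk API described above,
reusing the User model and session from the previous sketch:

    # plain dictionaries skip per-object unit-of-work bookkeeping
    session.bulk_insert_mappings(
        User,
        [{'id': i, 'name': 'user%d' % i} for i in range(2, 1002)])
    session.commit()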

Both of these features are completed in the master branch, the “baked query”
feature just needs documentation, and I’m basically two or three tickets
away from beta releases of 1.0. The “Baked” feature itself lives as an
extension and if we really wanted, I could backport it into oslo.db as well
so that it works against 0.9.

So I’d ask that folks please hold off on any kind of migration from ORM to
Core for performance reasons. I’ve spent the past several months adding
features directly to SQLAlchemy that allow an ORM-based app to have routes
to operations that perform just as fast as that of Core without a rewrite of
code.

 The scheduler does a lot of things in the application, like filtering,
 that could be done on the DB level more efficiently. Why is it not done
 on the DB side ?

 There are use cases where the scheduler would need to know even more data.
 Is there a plan for keeping `everything` in every scheduler process's memory
 up-to-date ?
 (Maybe zookeeper)
 
 The opposite way would be to move most operations into the DB side,
 since the DB already knows everything. 
 (stored procedures ?)
 
 Best Regards,
 Attila
 
 
 - Original Message -
 From: Rui Chen chenrui.m...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 4, 2015 4:51:07 AM
 Subject: [openstack-dev] [nova] blueprint about multiple workers supported   
 in nova-scheduler
 
 Hi all,
 
 I want to make it easy to launch a bunch of scheduler processes on a host;
 multiple scheduler workers will make use of the host's multiple processors and
 enhance the performance of nova-scheduler.
 
 I had registered a blueprint and commit a patch to implement it.
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
 This patch has been applied in our performance environment and passes some
 test cases, like concurrently booting multiple instances; so far we didn't
 find any inconsistency issues.
 
 IMO, nova-scheduler should be scalable horizontally in an easy way;
 multiple workers should be supported as an out-of-the-box feature.
 
 Please feel free to discuss this feature, thanks.
 
 Best Regards
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Armando M.
+1!

On 4 March 2015 at 22:29, Kevin Benton blak...@gmail.com wrote:

 +1
 On Mar 4, 2015 12:25 PM, Maru Newby ma...@redhat.com wrote:

 +1 from me, Ihar has been doing great work and it will be great to have
 him finally able to merge!

  On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:
 
  I'd like to propose that we add Ihar Hrachyshka to the Neutron core
 reviewer team. Ihar has been doing a great job reviewing in Neutron as
 evidenced by his stats [1]. Ihar is the Oslo liaison for Neutron, he's been
 doing a great job keeping Neutron current there. He's already a critical
 reviewer for all the Neutron repositories. In addition, he's a stable
 maintainer. Ihar makes himself available in IRC, and has done a great job
 working with the entire Neutron team. His reviews are thoughtful and he
 really takes time to work with code submitters to ensure his feedback is
 addressed.
 
  I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 and reinforce that +1/-1 reviews are super useful, and I encourage everyone
 to continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.
 
  Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar
 to the core reviewer team.
 
  Thanks!
  Kyle
 
  [1] http://stackalytics.com/report/contribution/neutron-group/90
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] reminder of feature freeze on 12 Mar

2015-03-04 Thread Doug Hellmann
We’re about 1 week out from the Oslo feature freeze, which is 1 week earlier 
than the app freeze at the third milestone. If you have features you were 
expecting to land for this cycle, please make sure the core review teams are 
aware so we can prioritize those reviews.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-ansible-deployment] Addition of channel logging

2015-03-04 Thread Ian Cordasco
Hey everyone,

I just wanted to make the list aware that the
stackforge/os-ansible-deployment project has submitted a change to start
logging its channel (#openstack-ansible). Please vote on the proposal
https://review.openstack.org/#/c/161412/

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Henry Gessau
+1 !!!

On Wed, Mar 04, 2015, Kyle Mestery mest...@mestery.com wrote:
 I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer
 team. Ihar has been doing a great job reviewing in Neutron as evidenced by his
 stats [1]. Ihar is the Oslo liaison for Neutron, he's been doing a great job
 keeping Neutron current there. He's already a critical reviewer for all the
 Neutron repositories. In addition, he's a stable maintainer. Ihar makes
 himself available in IRC, and has done a great job working with the entire
 Neutron team. His reviews are thoughtful and he really takes time to work with
 code submitters to ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a
 responsibility, in Neutron the same as other projects. And core reviewers are
 especially beholden to this responsibility. I'd also like to point out and
 reinforce that +1/-1 reviews are super useful, and I encourage everyone to
 continue reviewing code across Neutron as well as the other OpenStack
 projects, regardless of your status as a core reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the
 core reviewer team.
 
 Thanks!
 Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Doug Wiegley
+1

 On Mar 4, 2015, at 12:42 PM, Kyle Mestery mest...@mestery.com wrote:
 
 I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer 
 team. Ihar has been doing a great job reviewing in Neutron as evidenced by his 
 stats [1]. Ihar is the Oslo liaison for Neutron, he's been doing a great job 
 keeping Neutron current there. He's already a critical reviewer for all the 
 Neutron repositories. In addition, he's a stable maintainer. Ihar makes 
 himself available in IRC, and has done a great job working with the entire 
 Neutron team. His reviews are thoughtful and he really takes time to work 
 with code submitters to ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a 
 responsibility, in Neutron the same as other projects. And core reviewers are 
 especially beholden to this responsibility. I'd also like to point out and 
 reinforce that +1/-1 reviews are super useful, and I encourage everyone to 
 continue reviewing code across Neutron as well as the other OpenStack 
 projects, regardless of your status as a core reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the 
 core reviewer team.
 
 Thanks!
 Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90 
 http://stackalytics.com/report/contribution/neutron-group/90
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Jay Pipes

On 03/04/2015 01:51 AM, Attila Fazekas wrote:

Hi,

I wonder what is the planned future of the scheduling.

The scheduler does a lot of high-field-count queries,
which are CPU expensive when you are using sqlalchemy-orm.
Has anyone tried to switch those operations to sqlalchemy-core ?


Actually, the scheduler does virtually no SQLAlchemy ORM queries. Almost 
all database access is serialized from the nova-scheduler through the 
nova-conductor service via the nova.objects remoting framework.



The scheduler does a lot of things in the application, like filtering,
that could be done on the DB level more efficiently. Why is it not done
on the DB side ?


That's a pretty big generalization. Many filters (check out NUMA 
configuration, host aggregate extra_specs matching, any of the JSON 
filters, etc) don't lend themselves to SQL column-based sorting and 
filtering.



There are use cases where the scheduler would need to know even more data.
Is there a plan for keeping `everything` in every scheduler process's memory
up-to-date ?
(Maybe zookeeper)


Zookeeper has nothing to do with scheduling decisions -- only with whether 
a compute node's service descriptor is active. The end goal 
(after splitting the Nova scheduler out into Gantt hopefully at the 
start of the L release cycle) is to have the Gantt database be more 
optimized to contain the resource usage amounts of all resources 
consumed in the entire cloud, and to use partitioning/sharding to scale 
the scheduler subsystem, instead of having each scheduler process handle 
requests for all resources in the cloud (or cell...)



The opposite way would be to move most operations into the DB side,
since the DB already knows everything.
(stored procedures ?)


See above. This assumes that the data the scheduler is iterating over is 
well-structured and consistent, and that is a false assumption.


Best,
-jay


Best Regards,
Attila


- Original Message -

From: Rui Chen chenrui.m...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, March 4, 2015 4:51:07 AM
Subject: [openstack-dev] [nova] blueprint about multiple workers supported  
in nova-scheduler

Hi all,

I want to make it easy to launch a bunch of scheduler processes on a host,
multiple scheduler workers will make use of multiple processors of host and
enhance the performance of nova-scheduler.

I had registered a blueprint and commit a patch to implement it.
https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support

This patch had applied in our performance environment and pass some test
cases, like: concurrent booting multiple instances, currently we didn't find
inconsistent issue.

IMO, nova-scheduler should been scaled horizontally on easily way, the
multiple workers should been supported as an out of box feature.

Please feel free to discuss this feature, thanks.

Best Regards


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Maru Newby
+1 from me, Ihar has been doing great work and it will be great to have him 
finally able to merge!

 On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:
 
 I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer 
 team. Ihar has been doing a great job reviewing in Neutron as evidenced by his 
 stats [1]. Ihar is the Oslo liaison for Neutron, he's been doing a great job 
 keeping Neutron current there. He's already a critical reviewer for all the 
 Neutron repositories. In addition, he's a stable maintainer. Ihar makes 
 himself available in IRC, and has done a great job working with the entire 
 Neutron team. His reviews are thoughtful and he really takes time to work 
 with code submitters to ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a 
 responsibility, in Neutron the same as other projects. And core reviewers are 
 especially beholden to this responsibility. I'd also like to point out and 
 reinforce that +1/-1 reviews are super useful, and I encourage everyone to 
 continue reviewing code across Neutron as well as the other OpenStack 
 projects, regardless of your status as a core reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the 
 core reviewer team.
 
 Thanks!
 Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStackClient] Weekly meeting time change

2015-03-04 Thread Dean Troyer
Per the results from the Doodle set up by dhellmann [0] we are moving the
scheduled OpenStackClient team meeting back to 1900 UTC starting tomorrow,
05 Mar 2015, still in freenode #openstack-meeting.

The agenda for the meeting can be found at
https://wiki.openstack.org/wiki/Meetings/OpenStackClient#Next_Meeting_Agenda.
Anyone is welcome to add things to the agenda, please indicate your IRC
handle and if you are unable to attend the meeting sending
background/summary to me or someone who will attend is appreciated.

A timezone helper is at
http://www.timeanddate.com/worldclock/fixedtime.html?hour=19&min=0&sec=0:

04:00 JST
05:30 ACDT
20:00 CET
14:00 EST
13:00 CST
11:00 PST

dt

[0] http://doodle.com/4uy5w2ehn8y2eayh

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-04 Thread Doug Hellmann
Thanks for pulling this list together, Oleksii. More comments inline. -
Doug

On Wed, Mar 4, 2015, at 12:10 PM, ozamiatin wrote:
 Hi,
 
 By this e-mail I'd like to start a discussion about the current zmq driver's
 internal design problems I've found.
 I wish to collect here all proposals and known issues. I hope this
 discussion will be continued at the Liberty design summit,
 and I hope it will drive our further zmq driver development efforts.
 
 ZMQ Driver issues list (I address all issues with # and references are 
 in []):
 
 1. ZMQContext per socket (blocker is neutron's improper usage of messaging 
 via fork) [3]

It looks like I had a question about managing backwards-compatibility on
that, and Mehdi responded that he thinks things are broken enough that ZMQ
can't actually be used in production now. If that's true, then I agree
we don't need to be concerned about upgrades. Can you add a comment to
the review with your impression of the current version's suitability for
production use?

 
 2. Too many different contexts.
  We have InternalContext used for ZmqProxy, RPCContext used in 
 ZmqReactor, and ZmqListener.
  There is also zmq.Context which is zmq API entity. We need to 
 consider a possibility to unify their usage over inheritance (maybe 
 stick to RPCContext)
  or to hide them as internal entities in their modules (see 
 refactoring #6)
 
 
 3. Topic related code everywhere. We have no topic entity. It is all 
 string operations.
  We need some topics management entity and topic itself as an entity 
 (not a string).
  It causes issues like [4], [5]. (I'm already working on it).
  There was a spec related [7].
 
 
 4. Manual implementation of messaging patterns.
 Now we can observe poor usage of zmq features in the zmq driver. Almost 
 everything is implemented over PUSH/PULL.
 
  4.1 Manual polling - use zmq.Poller (listening and replying for 
 multiple sockets)
  4.2 Manual request/reply implementation for call [1].
  Using REQ/REP (ROUTER/DEALER) sockets solves many issues. A 
 lot of code may be reduced.
  4.3 Timeouts waiting
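
 For reference, a minimal pyzmq sketch of what #4.1 and #4.3 point at
 (illustrative only, not driver code):

     import zmq

     context = zmq.Context.instance()
     receiver = context.socket(zmq.PULL)
     receiver.bind('tcp://127.0.0.1:5557')

     poller = zmq.Poller()
     poller.register(receiver, zmq.POLLIN)

     # one poller multiplexes many sockets; the millisecond timeout
     # replaces hand-rolled timeout bookkeeping
     events = dict(poller.poll(timeout=1000))
     if events.get(receiver) == zmq.POLLIN:
         print(receiver.recv())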
 
 
 5. Add the possibility to work without eventlet [2]. #4.1 is also related 
 here; we can reuse many of the implemented solutions
 like zmq.Poller over asynchronous sockets in one separate thread 
 (instead of spawning on each new socket).
 I will update the spec [2] on that.
 
 
 6. Put all zmq driver related stuff (matchmakers, most classes from 
 zmq_impl) into a separate package.
 Don't keep all classes (ZmqClient, ZmqProxy, Topics management, 
 ZmqListener, ZmqSocket, ZmqReactor)
 in one impl_zmq.py module.

The AMQP 1.0 driver work did something similar under a protocols
directory. It would be nice to be consistent with the existing work on
organizing driver-related files.

 
  _drivers (package)
  +-- impl_rabbit.py
  +-- impl_zmq.py - leave only ZmqDriver class here
  +-- zmq_driver (package)
  |+--- matchmaker.py
  |+--- matchmaker_ring.py
  |+--- matchmaker_redis.py
  |+--- matchmaker_.py
  ...
  |+--- client.py
  |+--- reactor.py
  |+--- proxy.py
  |+--- topic.py
  ...
 
 7. Need more technical documentation on the driver like [6].
 I'm willing to prepare a current driver architecture overview with 
 some UML charts, and to continue discussing the driver
 architecture.
 
 Please feel free to add or to argue about any issue, I'd like to have 
 your feedback on these issues.

This looks like a good list, and I'm encouraged to see activity around
the ZMQ driver. I would like to see more participation in reviews for
the ZMQ-related specs before the summit, so we can use our time together
in person to resolve remaining issues rather than starting from scratch.

Doug

 Thanks.
 
 Oleksii Zamiatin
 
 
 References:
 
 [1] https://review.openstack.org/#/c/154094/
 [2] https://review.openstack.org/#/c/151185/
 [3] https://review.openstack.org/#/c/150735/
 [4] https://bugs.launchpad.net/oslo.messaging/+bug/1282297
 [5] https://bugs.launchpad.net/oslo.messaging/+bug/1381972
 [6] https://review.openstack.org/#/c/130943/8
 [7] https://review.openstack.org/#/c/144149/1
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][all] CI Check Queue stuck

2015-03-04 Thread Andreas Jaeger
On 03/04/2015 09:27 AM, Andreas Jaeger wrote:
 There has been a maintenance window in one of our providers' clouds, and it
 seems that some of the OpenStack CI infrastructure is not working as it
 should. You will notice that Jenkins is not commenting on any changes
 submitted to review.openstack.org; the check queue is currently stuck.
 
 Please have patience until the maintenance window is over and the infra
 team has taken care of our CI infrastructure so that it processes all
 events again - this should happen during the US morning.
 
 There's no need to recheck any changes that you have submitted; they
 will not move forward.

The issue has been resolved; the gate is slowly working through all
changes now,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming features (was: [nova] blueprint about multiple workers)

2015-03-04 Thread Mike Bayer


Mike Bayer mba...@redhat.com wrote:

 
 
 Attila Fazekas afaze...@redhat.com wrote:
 
 Hi,
 
 I wonder what is the planned future of the scheduling.
 
 The scheduler does a lot of high-field-count queries,
 which are CPU expensive when you are using sqlalchemy-orm.
 Has anyone tried to switch those operations to sqlalchemy-core ?
 
 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work done

Just to keep the OpenStack community informed of what's upcoming, here's some
more detail on some of the new SQLAlchemy performance features, which are based
on the goals I first set up last summer at 
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy.

As 1.0 features a lot of new styles of doing things that are primarily in
the name of performance, in order to help categorize and document these
techniques, 1.0 includes a performance suite in examples/ which features a
comprehensive collection of common database idioms run under timing and
function-count profiling. These idioms are broken into major categories like
“short selects”, “large resultsets”, “bulk inserts”, and serve not only as a
way to compare the relative performance of different techniques, but also as
a way to provide example code categorized into use cases that illustrate the
variety of ways to achieve that case, including the tradeoffs for each,
across Core and ORM. So in this case, we can see what the “baked” query
looks like in the “short_selects” suite, which times how long it takes to
perform 10000 queries, each of which returns one object or row:

https://bitbucket.org/zzzeek/sqlalchemy/src/cc58a605d6cded0594f7db1caa840b3c00b78e5a/examples/performance/short_selects.py?at=ticket_3054#cl-73

The results of this suite look like the following:

test_orm_query : test a straight ORM query of the full entity. (10000 
iterations); total time 7.363434 sec
test_orm_query_cols_only : test an ORM query of only the entity columns. (10000 
iterations); total time 6.509266 sec
test_baked_query : test a baked query of the full entity. (10000 iterations); 
total time 1.999689 sec
test_baked_query_cols_only : test a baked query of only the entity columns. 
(10000 iterations); total time 1.990916 sec
test_core_new_stmt_each_time : test core, creating a new statement each time. 
(10000 iterations); total time 3.842871 sec
test_core_reuse_stmt : test core, reusing the same statement (but recompiling 
each time). (10000 iterations); total time 2.806590 sec
test_core_reuse_stmt_compiled_cache : test core, reusing the same statement + 
compiled cache. (10000 iterations); total time 0.659902 sec

Where above, “test_orm” and “test_baked” are both using the ORM API
exclusively. We can see that the “baked” approach, returning column tuples,
is almost twice as fast as a naive Core approach, that is, one which
constructs select() objects each time and does not attempt to use any
compilation caching.

For the use case of fetching large numbers of rows, we can look at the
large_resultsets suite
(https://bitbucket.org/zzzeek/sqlalchemy/src/cc58a605d6cded0594f7db1caa840b3c00b78e5a/examples/performance/large_resultsets.py?at=ticket_3054).
This suite illustrates a single query which fetches 500K rows. The “Baked”
approach isn’t relevant here as we are only emitting a query once, however
the approach we use to fetch rows is significant. Here we can see that
ORM-based “tuple” approaches are very close in speed to the fetching of rows
using Core directly. We also have a comparison of Core against raw DBAPI
access, where we see very little speed improvement; an example where we
create a very simple object for each DBAPI row fetched is also present to
illustrate how quickly even the most minimal Python function overhead adds
up when we do something 500K times.

test_orm_full_objects_list : Load fully tracked ORM objects into one big 
list(). (500000 iterations); total time 11.055097 sec
test_orm_full_objects_chunks : Load fully tracked ORM objects a chunk at a time 
using yield_per(). (500000 iterations); total time 7.323350 sec
test_orm_bundles : Load lightweight bundle objects using the ORM. (500000 
iterations); total time 2.128237 sec
test_orm_columns : Load individual columns into named tuples using the ORM. 
(500000 iterations); total time 1.585236 sec
test_core_fetchall : Load Core result rows using fetchall. (500000 iterations); 
total time 1.187013 sec
test_core_fetchmany_w_streaming : Load Core result rows using 
fetchmany/streaming. (500000 iterations); total time 0.945906 sec
test_core_fetchmany : Load Core result rows using Core / fetchmany. (500000 
iterations); total time 0.959626 sec
test_dbapi_fetchall_plus_append_objects : Load rows using DBAPI fetchall(), 
generate an object for each row. (500000 iterations); total time 1.168365 sec
test_dbapi_fetchall_no_object : Load rows using DBAPI fetchall(), don't make 
any objects. (500000 iterations); total time 0.835586 sec
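
For concreteness, a hedged sketch of the two ORM fetch styles compared above
(full entities in chunks via yield_per() versus named-tuple column loads),
using a toy mapped class:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class Customer(Base):
        __tablename__ = 'customer'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = Session(engine)
    session.bulk_insert_mappings(
        Customer, [{'name': 'c%d' % i} for i in range(1000)])
    session.commit()

    # full ORM objects, fetched in chunks to bound memory
    for customer in session.query(Customer).yield_per(100):
        pass

    # named tuples of columns: close to Core-level fetch speed
    for id_, name in session.query(Customer.id, Customer.name):
        pass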

An ongoing 

[openstack-dev] [openstack-operators] [nova] Nova options as instance metadata

2015-03-04 Thread Belmiro Moreira
Hi,
in nova there are several options that can be defined in the flavor (extra
specs)
and/or as image properties.
This is great; however, to deploy some of these options we will need to offer
the same image with different properties or let the users upload the same image
with
the right properties.

It will be interesting to have some of these options available as instance
metadata.
In this case the user would be able to specify them when creating the
instance (ex: --meta hw_watchdog_action=pause), avoiding the need to upload
a different image or, in other cases, to request a new flavor.
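
For context, the two existing knobs look roughly like this (standard
nova/glance CLI; the watchdog key is a real flavor/image pair, other
options vary):

    # flavor extra spec: admin-defined, applies to every user of the flavor
    nova flavor-key m1.small set hw:watchdog_action=pause

    # image property: forces duplicating the image per property set
    glance image-update --property hw_watchdog_action=pause <IMAGE>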

Is this option being considered?


Belmiro
---
CERN
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Rui Chen
Looks like it's a complicated problem, and nova-scheduler can't scale-out
horizontally in active/active mode.

Maybe we should illustrate the problem in the HA docs.

http://docs.openstack.org/high-availability-guide/content/_schedulers.html

Thanks for everybody's attention.
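
For reference, the change under discussion essentially mirrors the way
nova-conductor already forks workers via the common service launcher; a
hedged sketch of what nova/cmd/scheduler.py would look like (the
'scheduler_workers' option name is hypothetical, taken from the blueprint's
intent rather than the actual patch):

    import sys

    from oslo_config import cfg

    from nova import config
    from nova import service

    CONF = cfg.CONF

    def main():
        config.parse_args(sys.argv)
        server = service.Service.create(binary='nova-scheduler',
                                        topic=CONF.scheduler_topic)
        # workers > 1 forks the scheduler the same way nova-conductor does
        service.serve(server, workers=getattr(CONF, 'scheduler_workers', 1))
        service.wait()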

2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
  The scheduler does a lot of high-field-count queries,
  which are CPU expensive when you are using sqlalchemy-orm.
  Has anyone tried to switch those operations to sqlalchemy-core ?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from the
 Query, working out a large part of the object fetch strategies, and finally
 the string compilation of the select() into a string as well as organizing
 the typing information for result columns. With a query that is constructed
 using the “Baked” feature, all of these steps are cached in memory and held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.
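As a concrete illustration of that lambda-based construction, a rough sketch
of using the baked extension (the Customer model and the session are assumed,
and details may differ from the final 1.0 API):

    from sqlalchemy import bindparam
    from sqlalchemy.ext import baked

    bakery = baked.bakery()

    def lookup(session, name):
        # The construction steps inside the lambdas run once, then are cached.
        bq = bakery(lambda s: s.query(Customer))
        bq += lambda q: q.filter(Customer.name == bindparam('name'))
        # Later calls skip straight to the cached, pre-compiled statement.
        return bq(session).params(name=name).all()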

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed as
 a
 Core fetch of rows. So using ORM with minimal changes to existing ORM code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive with
 a straight Core insert.
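Again a rough sketch of the dictionaries-based bulk form described here,
assuming a Customer mapped class and an open session:

    session.bulk_insert_mappings(
        Customer,
        [{'name': 'customer %d' % i} for i in range(10000)],
    )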

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have routes
 to operations that perform just as fast as that of Core without a rewrite
 of
 code.

  The scheduler does a lot of things in the application, like filtering,
  that could be done on the DB level more efficiently. Why is it not done
  on the DB side?
 
  There are use cases where the scheduler would need to know even more data.
  Is there a plan for keeping `everything` in all scheduler processes'
 memory up-to-date?
  (Maybe zookeeper)
 
  The opposite way would be to move most operations into the DB side,
  since the DB already knows everything.
  (stored procedures?)
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers
 supported   in nova-scheduler
 
  Hi all,
 
  I want to make it easy to launch a bunch of scheduler processes on a
 host;
  multiple scheduler workers will make use of the host's multiple processors
 and
  enhance the performance of nova-scheduler.
 
  I had registered a blueprint and commit a patch to implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
  This patch has been applied in our performance environment and passed some
  test cases, like concurrently booting multiple instances; so far we didn't
  find any inconsistency issues.
 
  IMO, nova-scheduler should be easy to scale horizontally; multiple
  workers should be supported as an out-of-the-box feature.
 
  Please feel free to discuss this feature, thanks.
 
  Best Regards
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  

Re: [openstack-dev] [oslo.policy] graduation status

2015-03-04 Thread Osanai, Hisashi

Doug,

Thank you for the response, and sorry for responding to you late.
Recently I could not receive e-mails from this list, and your e-mail was one of 
them.
I don't know the reason, but I found your response in the archive.

On Mon, 02 Mar 2015 12:28:06 -0800, Doug Hellmann wrote:
 We're making good progress and expect to have a public release with a
 stable API fairly soon.

Good information! I'm looking forward to using it.

Thanks again!
Hisashi Osanai

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-04 Thread Christopher Yeoh
On Wed, Mar 4, 2015 at 9:51 PM, Alexandre Levine alev...@cloudscaling.com
wrote:

 Christopher,

 Does this

 So the plan for assignment of microversion api numbers is the same as
 what we currently do for db migration changes - take the next one
 knowing that you may need to rebase if someone else merges before you

 mean that I should put 2.2 in my review for now instead of 2.4?


I don't think that'd be worth it, because in this case, as it's the first
microversion, we've already decided to merge
https://review.openstack.org/140313
first and you'll need to rebase to at least 2.3. I think the reason I
recommended 2.4 to you was that https://review.openstack.org/128940 was the
other patch identified as
a possible good candidate for being the first microversion, but in the end it
wasn't selected, so it ended up with 2.3. From a quick look at 128940 I
think it might have spec approval
issues though, so if your patch is ready when 140313 merges you might end up
with 2.3.


  Another suggestion would be to pack several simultaneously incoming changes
 into one microversion. Maybe spawn them once a week, or once a couple of
 weeks, or even with longer period, depending on the amount of incoming
 changes. For example, wouldn't it be convenient for clients to acquire all
 of the accumulated changes in one microversion (2.2 as I understand)
 without needing to understand which one came with what number? To clarify,
 I'm suggesting to pass reviews for all of the hanging API changes against
 2.2 version.



So I think it's probably better to keep the number of changes small per
microversion. Though I have also suggested in the past that very minor
changes, such as formatting etc., be fixed if we're making api
changes in the same area anyway and clients will be forced to modify what
they do regardless. However, bundling a lot of api changes in one
microversion is a problem for CD: we'd need them to be enabled all at once,
meaning we'd either need to merge them into one patch or have an additional
patch which just enables, say, 2.2 once all the dependent patches are merged.
This increases the probability that one delayed patch will delay a bunch of
others, whereas now whoever is ready first will merge first.

If someone has experience from db migrations that they think would work
well, please let us know!

Regards,

Chris

Best regards,
   Alex Levine


 On 3/4/15 11:44 AM, Christopher Yeoh wrote:

 On Tue, 03 Mar 2015 10:28:34 -0500
 Sean Dague s...@dague.net wrote:

  On 03/03/2015 10:24 AM, Claudiu Belu wrote:

 Hello.

 I've talked with Christopher Yeoh yesterday and I've asked him
 about the microversions and when will they be able to merge. He
 said that for now, this commit had to get in before any other
 microversions: https://review.openstack.org/#/c/159767/

 He also said that he'll double check everything, and if everything
 is fine, the first microversions should be getting in soon after.

 Best regards,

 Claudiu Belu

 I just merged that one this morning, so hopefully we can dislodge.


 So just before we merged the keypairs microversion change someone
 enabled response schema tests which showed some further problems. They
 have now all been fixed except https://review.openstack.org/#/c/161112/
 which needs just one more +2. After that the first api change which uses
 microversions https://review.openstack.org/#/c/140313/ can merge (it has
 3 +2's, it just needs the v2.1 fix first).

  
 From: Alexandre Levine [alev...@cloudscaling.com]
 Sent: Tuesday, March 03, 2015 4:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] what's the merge plan for
 current proposed microversions?

 Bump.

 I'd really appreciate some answers to the question Sean asked. I
 still have the 2.4 in my review (the very one Sean mentioned) but
 it seems that it might not be the case.

 Best regards,
 Alex Levine

 On 3/2/15 2:30 PM, Sean Dague wrote:

 This change for the additional attributes for ec2 looks like it's
 basically ready to go, except it has the wrong microversion on it
 (as they anticipated the other changes landing ahead of them) -
 https://review.openstack.org/#/c/155853

 What's the plan for merging the outstanding microversions? I
 believe we're all conceptually approved on all them, and it's an
 important part of actually moving forward on the new API. It seems
 like we're in a bit of a holding pattern on all of them right now,
 and I'd like to make sure we start merging them this week so that
 they have breathing space before the freeze.

  So the plan for assignment of microversion api numbers is the same as
 what we currently do for db migration changes - take the next one
 knowing that you may need to rebase if someone else merges before you.
 Other suggestions welcome but will have to follow the requirement that
 they always merge in version order.



 -Sean


 

Re: [openstack-dev] [API] #openstack-api created

2015-03-04 Thread michael mccune

huzzah!

thanks Ian =)

mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-04 Thread Alexandre Levine

Christopher,

If I'm not mistaken about what you mean, I understand the point about the 
small changes, but what's wrong with adding already approved, tested and 
ready changes as one version? Otherwise you risk getting 
rather quickly to high version numbers like 2.25, 2.37... It'll lead to 
a zoo of folders in jsons and docs and lots of functions and test 
classes with version-specific names and decorators in the code, all of 
them with ever-increasing numbers. I mean, it is a trade-off - quantity 
of versions against sizes of changes. So maybe the middle path is good 
enough? Say, all trailing changes existing at the moment to be 
packed into one microversion? I understand there are 3 of them and all 
of them are ready and waiting, aren't they?



Best regards,
   Alex Levine

On 3/5/15 2:53 AM, Christopher Yeoh wrote:



On Wed, Mar 4, 2015 at 9:51 PM, Alexandre Levine 
alev...@cloudscaling.com mailto:alev...@cloudscaling.com wrote:


Christopher,

Does this

So the plan for assignment of microversion api numbers is the same as
what we currently do for db migration changes - take the next one
knowing that you may need to rebase if someone else merges before you

mean that I should put 2.2 in my review for now instead of 2.4?


I don't think that'd be worth it, because in this case, as it's the first 
microversion, we've already decided to merge 
https://review.openstack.org/140313
first and you'll need to rebase to at least 2.3. I think the reason I 
recommended 2.4 to you was that https://review.openstack.org/128940 
was the other patch identified as
a possible good candidate for being the first microversion, but in the 
end it wasn't selected, so it ended up with 2.3. From a quick look at 
128940 I think it might have spec approval
issues though, so if your patch is ready when 140313 merges you might 
end up with 2.3.


Another suggestion would be to pack several simultaneously incoming
changes into one microversion. Maybe spawn them once a week, or
once a couple of weeks, or even with longer period, depending on
the amount of incoming changes. For example, wouldn't it be
convenient for clients to acquire all of the accumulated changes
in one microversion (2.2 as I understand) without needing to
understand which one came with what number? To clarify, I'm
suggesting to pass reviews for all of the hanging API changes
against 2.2 version.



So I think it's probably better to keep the number of changes small per 
microversion. Though I have also suggested in the past that very minor 
changes, such as formatting etc., be fixed if we're making 
api changes in the same area anyway and clients will be forced to 
modify what they do regardless. However, bundling a lot of api changes 
in one microversion is a problem for CD: we'd need them to be enabled all 
at once, meaning we'd either need to merge them into one patch or have an 
additional patch which just enables, say, 2.2 once all the dependent 
patches are merged. This increases the probability that one delayed 
patch will delay a bunch of others, whereas now whoever is ready 
first will merge first.


If someone has experience from db migrations that they think would 
work well, please let us know!


Regards,

Chris

Best regards,
  Alex Levine


On 3/4/15 11:44 AM, Christopher Yeoh wrote:

On Tue, 03 Mar 2015 10:28:34 -0500
Sean Dague s...@dague.net mailto:s...@dague.net wrote:

On 03/03/2015 10:24 AM, Claudiu Belu wrote:

Hello.

I've talked with Christopher Yeoh yesterday and I've
asked him
about the microversions and when will they be able to
merge. He
said that for now, this commit had to get in before
any other
microversions: https://review.openstack.org/#/c/159767/

He also said that he'll double check everything, and
if everything
is fine, the first microversions should be getting in
soon after.

Best regards,

Claudiu Belu

I just merged that one this morning, so hopefully we can
dislodge.


So just before we merged the keypairs microversion change
someone
enabled response schema tests which showed some further
problems. They
have now all been fixed except
https://review.openstack.org/#/c/161112/
which needs just one more +2. After that the first api change
which uses
microversions https://review.openstack.org/#/c/140313/ can
merge (it has
3 +2's, it just needs the v2.1 fix first).


From: Alexandre Levine [alev...@cloudscaling.com
mailto:alev...@cloudscaling.com]
Sent: Tuesday, March 03, 2015 4:22 PM
To: 

Re: [openstack-dev] removal of v3 in tree testing

2015-03-04 Thread GHANSHYAM MANN
Hi Sean,

Yes, having V3 directory/file names is very confusing now.

But the current v3 sample test cases test v2.1 plugins. As the /v3 URL is
redirected to the v2.1 plugins, the v3 sample tests make calls through the
/v3 URL and test the v2.1 plugins.

I think we can start cleaning up the *v3* from everywhere and change it to
v2.1 or a more appropriate name.

To clean up the same from the sample files, I was planning to rearrange the
sample file structure. Please check whether that direction looks good (I
still need to push the patch for the directory restructure):

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:sample_files_structure,n,z



On Wed, Mar 4, 2015 at 11:06 PM, Sean Dague s...@dague.net wrote:

 I have proposed the following patch series to remove the API v3 direct

testing in tree -

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:rm_v3_paste,n,z


 This also removes the /v3 entry from the paste.ini that we ship. I think

it's important that we remove that before kilo releases as I believe it

continues to confuse people.


 This also drops the functional test runs by about 50% in time.


 I couldn't think of a reason why we still needed this, but if someone

does, please speak up.


 -Sean


 --

Sean Dague

http://dague.net


 __

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks & Regards
Ghanshyam Mann
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-04 Thread Steve Gordon
- Original Message -
 From: Thierry Carrez thie...@openstack.org
 To: James Bottomley james.bottom...@hansenpartnership.com
 
  It's certainly a lot less than you, but we have the entire system
  call
  man pages.  It's an official project of the kernel:
  
  https://www.kernel.org/doc/man-pages/
  
  And we maintain translations for it
  
  https://www.kernel.org/doc/man-pages/translations.html
 
 By translations I meant strings in the software itself, not doc
 translations. We don't translate docs upstream either :) I guess we
 could drop those (and/or downstream them in a way) if that was the
 last
 thing holding up adding more agility.

There is actually a group of contributors working on translation of 
documentation, and translations in various stages of completeness are available 
at docs.openstack.org (hit the more releases and languages drop-down to find 
them). The challenge for the translators, of course, is that they are trying to 
hit a target that is moving not just until the release but often well beyond it, 
as the documentation team themselves try to catch up.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Behavior of default security group

2015-03-04 Thread Hirofumi Ichihara
Thank you for your response.

 That's a fair point. But I think it's because you're not expected to
 run as admin, and having a way to drop the group as admin can be of
 value for e.g. debugging or cleaning up after some bugs [1].
You’re right. 
The regeneration logic seems strange to me, but I’m not sure it must be fixed.

 This is because original neutron/nova authors thought that following
 the AWS way [2] is essential for project success.
 
 Since [3], neutron allows default group to be renamed. Though nova
 still assumes 'default' is the only way the group can be named [4].
I got it. It may be worth fixing.

Thanks,
Hirofumi

2015/02/24 2:00、Ihar Hrachyshka ihrac...@redhat.com :

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 02/20/2015 11:45 AM, Hirofumi Ichihara wrote:
 Neutron experts,
 
 I caught a bug report[1].
 
  Currently, Neutron enables the admin to delete the default security group.
  But Neutron doesn't allow the default security group to stay deleted.
  Neutron regenerates the default security group the next time the security
  group API is called.
 
 I actually believe the design is unfortunate, and instead of this,
 keystone would better notify services about new tenant, and services
 would create resources like default security groups for them. AFAIK
 keystone does not notify at the moment, so we had few options.
 Speaking of current design, ...
 
 I have two questions about the behavior.
 
 1. Why does Neutron regenerate default security group? If default 
 security group is essential, we shouldn’t enable admin to delete
 it.
 
 That's a fair point. But I think it's because you're not expected to
 run as admin, and having a way to drop the group as admin can be of
 value for e.g. debugging or cleaning up after some bugs [1].
 
  2. Why is the security group named “default” essential? Users may want
  to change its name.
 
 
 This is because original neutron/nova authors thought that following
 the AWS way [2] is essential for project success.
 
 Since [3], neutron allows default group to be renamed. Though nova
 still assumes 'default' is the only way the group can be named [4].
 
 [1]: https://bugs.launchpad.net/neutron/+bug/1194579
 [2]:
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#default-security-group
 [3]:
 http://git.openstack.org/cgit/openstack/neutron/commit/?id=79c97120de9cff4d0992b5d41ff4bbf05e890f89
 [4]:
 https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n1074
 
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 
 iQEcBAEBAgAGBQJU61zHAAoJEC5aWaUY1u57UE4H/30jKnhrQthzuw0xuKJ3VDu7
 Fi+eqbhis7/ntGSQLlDFEPzsHjCxjkwXVN7kdPPaftp6RsnpwJNko+Zbvv2gWEMj
 qS3dxsCYiQVAjmbDIXrlz1K/za+QYJL3FvD9hP/ixA90ZeL0l6VFs2KwKAr35AEP
 EmkBK237tlHBJfqVh9H81cMn36iPKMd/g+4cAuysxajEFiWSqBBegngGpCiUJ6Vm
 51AeOBR4bwR585XvIRyDQIfQD/rLSYHzTZSn+ChLy6It14x7WHs/xgTn5V3EqNKB
 VIHhiU6j2QuW07wDa1/HEGaTao8Np1OcL7IuEdDb6ioCZRMaC3cpuTOE3OoVeW4=
 =8BCo
 -END PGP SIGNATURE-
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-04 Thread James Bottomley
On Wed, 2015-03-04 at 11:19 +0100, Thierry Carrez wrote:
 James Bottomley wrote:
  On Tue, 2015-03-03 at 11:59 +0100, Thierry Carrez wrote:
  Second it's at a very different evolution/maturity point (20 years old
  vs. 0-4 years old for OpenStack projects).
  
  Yes, but I thought I covered this in the email: you can see that at the
  4 year point in its lifecycle, the kernel was behaving very differently
  (and in fact more similar to OpenStack).  The question I thought was
  still valid is whether anything was learnable from the way the kernel
  evolved later.  I think the key issue, which you seem to have in
  OpenStack is that the separate develop/stabilise phases caused
  frustration to build up in our system which (nine years later) led the
  kernel to adopt the main branch stabilisation with overlapping subsystem
  development cycle.
 
 I agree with you: the evolution the kernel went through is almost a
 natural law, and I know we won't stay in the current model forever. I'm
 just not sure we have reached the level of general stability that makes
 it possible to change *just now*. I welcome brainstorming and discussion
 on future evolutions, though, and intend to lead a cross-project session
 discussion on that in Vancouver.

OK, I'll be in Vancouver, so happy to come and give input from
participating in the kernel process (for a bit longer than I care to
admit to ...).

One interesting thing might be to try and work out where roughly
OpenStack is on the project trajectory.  It's progressing much more
rapidly than the kernel (by 4 years in, the kernel didn't even have
source control), so the release crisis it took the kernel 12 years to
reach might be a bit closer than people imagine.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-04 Thread Sridhar Ramaswamy
Hi Paul.

I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630 UTC (or
later).

The meetings so far were indeed quite useful. I guess the current busy Kilo
cycle is also contributing to the low turnout. As we pick things up going
forward, this forum will be quite useful for discussing edge-vpn and, perhaps,
other vpn variants.

- Sridhar

On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali p...@michali.net wrote:

 Hi all! The email that I sent on 2/24 didn't make it to the mailing list
 (no wonder I didn't get responses!). I think I had an issue with the email
 address I used - sorry for the confusion!

 So, I'll hold the meeting today (1500 UTC meeting-4, if it is still
 available), and we can discuss this...


 We've been having very low turnout for meetings for the past several
 weeks, so I'd like to ask those in the community interested in VPNaaS what
 the preference would be regarding meetings...

 A) hold at the same day/time, but only on-demand.
 B) hold at a different day/time.
 C) hold at a different day/time, but only on-demand.
 D) hold as a on-demand topic in main Neutron meeting.

 Please vote your interest and provide a desired day/time if you pick B or
 C. The fallback will be (D) if there's not much interest anymore in
 meeting, or we can't seem to come to a consensus (or super-majority :)

 Regards,

 PCM

 Twitter: @pmichali
 TEXT: 6032894458
 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] [all] python-glanceclient release 0.16.1

2015-03-04 Thread Nikhil Komawar
The python-glanceclient release management team is pleased to announce that
python-glanceclient version 0.16.1 was released on Thursday, Mar 5th 
around 04:56 UTC.

For more information, please find the details at:


https://launchpad.net/python-glanceclient/+milestone/v0.16.1

Please report the issues through launchpad:

https://bugs.launchpad.net/python-glanceclient

Thanks,
-Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about boot-from-volume instance and flavor

2015-03-04 Thread Alex Xu
2015-03-04 4:45 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 03/03/2015 01:10 AM, Rui Chen wrote:

 Hi all,

  When we boot an instance from volume, we find some ambiguous description
  about the flavor root_gb in the operations guide,
 http://docs.openstack.org/openstack-ops/content/flavors.html

 /Virtual root disk size in gigabytes. This is an ephemeral disk the base
 image is copied into. You don't use it when you boot from a persistent
 volume. /
 /The 0 size is a special case that uses the native base image size as
 the size of the ephemeral root volume./
 /
 /
  'You don't use it (root_gb) when you boot from a persistent volume.'
  Does this mean that we need to set the root_gb to 0 or not? I don't know.


 Hi Rui, I agree the documentation -- and frankly, the code in Nova -- is
 confusing around this area.

  But I found out that the root_gb will be added into local_gb_used of the
  compute_node so that it will impact the next scheduling. Think about a
 use case, the local_gb of compute_node is 10, boot instance from volume
 with the root_gb=5 flavor, in this case, I can only boot 2
 boot-from-volume instances on the compute_nodes, although these
 instances don't use the local disk of compute_nodes.

  I found a patch that tries to fix this issue,
 https://review.openstack.org/#/c/136284/

  I want to know which solution is better for you?

 Solution #1: boot instance from volume with the root_gb=0 flavor.
 Solution #2: add some special logic in order to correct the disk usage,
 like patch #136284


 Solution #2 is a better idea, IMO. There should not be any magic setting
 for root_gb that needs to be interpreted both by the user and the Nova code
 base.

 The issue with the 136284 patch is that it is trying to address the
 problem in the wrong place, IMHO.


Emm, I'm thinking of one case. There are two flavors, one with root_gb=0 and
another one with root_gb=10, and users choosing different flavors will pay
different amounts of money.
If a user chooses the flavor with root_gb=10 and then boots from volume, the
user still needs to pay the extra money. That's quite strange. Either we should
let the billing system distinguish whether the instance is booted from volume
or not, or we just tell the user they made the wrong choice... For this case,
should we choose
solution #1?
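For clarity, solution #1 just means pairing boot-from-volume with a disk-less
flavor. A hedged python-novaclient sketch (assuming an authenticated `nova`
client; the flavor name, volume ID and exact block-device-mapping string are
made-up placeholders):

    # A flavor that reserves no local root disk on the compute node.
    flavor = nova.flavors.create(name='m1.bfv', ram=2048, vcpus=1, disk=0)

    # Boot from an existing volume instead of an image; the legacy BDM
    # string is '<volume-id>:<type>:<size>:<delete-on-terminate>'.
    server = nova.servers.create(
        name='bfv-instance', image=None, flavor=flavor,
        block_device_mapping={'vda': '%s:::0' % VOLUME_ID})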



 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Rui Chen
We will face the same issue in the multiple nova-scheduler process case, like
Sylvain says, right?

Two processes/workers can actually consume two distinct resources on the
same HostState.
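A toy illustration of that race (not nova code): each worker checks the same
HostState snapshot, so both can pass the check before either claim lands:

    import threading

    class HostState(object):
        free_ram_mb = 2048

    host = HostState()

    def schedule(request_mb):
        if host.free_ram_mb >= request_mb:   # both workers may pass this...
            host.free_ram_mb -= request_mb   # ...and both consume the RAM

    # Two schedulers each placing a 2048 MB instance on the same host:
    workers = [threading.Thread(target=schedule, args=(2048,))
               for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(host.free_ram_mb)  # can be -2048: the host is oversubscribed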




2015-03-05 13:26 GMT+08:00 Alex Xu sou...@gmail.com:

 Rui, you can still run multiple nova-scheduler processes now.


 2015-03-05 10:55 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Looks like it's a complicated problem, and nova-scheduler can't scale-out
 horizontally in active/active mode.

 Maybe we should illustrate the problem in the HA docs.

 http://docs.openstack.org/high-availability-guide/content/_schedulers.html

 Thanks for everybody's attention.

 2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
  The scheduler does a lot of queries with a high number of fields,
  which is CPU expensive when you are using sqlalchemy-orm.
  Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of
 CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work
 done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from the
 Query, working out a large part of the object fetch strategies, and
 finally
 the string compilation of the select() into a string as well as
 organizing
 the typing information for result columns. With a query that is
 constructed
 using the “Baked” feature, all of these steps are cached in memory and
 held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the
 in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed
 as a
 Core fetch of rows. So using ORM with minimal changes to existing ORM
 code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a
 bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive
 with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as
 well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM
 to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have
 routes
 to operations that perform just as fast as that of Core without a
 rewrite of
 code.

  The scheduler does a lot of things in the application, like filtering,
  that could be done on the DB level more efficiently. Why is it not done
  on the DB side?
 
  There are use cases where the scheduler would need to know even more
 data.
  Is there a plan for keeping `everything` in all scheduler processes'
 memory up-to-date?
  (Maybe zookeeper)
 
  The opposite way would be to move most operations into the DB side,
  since the DB already knows everything.
  (stored procedures?)
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers
 supported   in nova-scheduler
 
  Hi all,
 
  I want to make it easy to launch a bunch of scheduler processes on a
  host;
  multiple scheduler workers will make use of the host's multiple
  processors and
  enhance the performance of nova-scheduler.
 
  I had registered a blueprint and commit a patch to implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
  This patch has been applied in our performance environment and passed
  some test cases, like concurrently booting multiple instances; so far
  we didn't find any inconsistency issues.
 
  IMO, nova-scheduler should be easy to scale horizontally; the
  multiple 

Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Alex Xu
Rui, you can still run multiple nova-scheduler processes now.

2015-03-05 10:55 GMT+08:00 Rui Chen chenrui.m...@gmail.com:

 Looks like it's a complicated problem, and nova-scheduler can't scale-out
 horizontally in active/active mode.

 Maybe we should illustrate the problem in the HA docs.

 http://docs.openstack.org/high-availability-guide/content/_schedulers.html

 Thanks for everybody's attention.

 2015-03-05 5:38 GMT+08:00 Mike Bayer mba...@redhat.com:



 Attila Fazekas afaze...@redhat.com wrote:

  Hi,
 
  I wonder what is the planned future of the scheduling.
 
  The scheduler does a lot of queries with a high number of fields,
  which is CPU expensive when you are using sqlalchemy-orm.
  Has anyone tried to switch those operations to sqlalchemy-core?

 An upcoming feature in SQLAlchemy 1.0 will remove the vast majority of CPU
 overhead from the query side of SQLAlchemy ORM by caching all the work
 done
 up until the SQL is emitted, including all the function overhead of
 building
 up the Query object, producing a core select() object internally from the
 Query, working out a large part of the object fetch strategies, and
 finally
 the string compilation of the select() into a string as well as organizing
 the typing information for result columns. With a query that is
 constructed
 using the “Baked” feature, all of these steps are cached in memory and
 held
 persistently; the same query can then be re-used at which point all of
 these
 steps are skipped. The system produces the cache key based on the in-place
 construction of the Query using lambdas so no major changes to code
 structure are needed; just the way the Query modifications are performed
 needs to be preceded with “lambda q:”, essentially.

 With this approach, the traditional session.query(Model) approach can go
 from start to SQL being emitted with an order of magnitude less function
 calls. On the fetch side, fetching individual columns instead of full
 entities has always been an option with ORM and is about the same speed
 as a
 Core fetch of rows. So using ORM with minimal changes to existing ORM code
 you can get performance even better than you’d get using Core directly,
 since caching of the string compilation is also added.

 On the persist side, the new bulk insert / update features provide a
 bridge
 from ORM-mapped objects to bulk inserts/updates without any unit of work
 sorting going on. ORM mapped objects are still more expensive to use in
 that
 instantiation and state change is still more expensive, but bulk
 insert/update accepts dictionaries as well, which again is competitive
 with
 a straight Core insert.

 Both of these features are completed in the master branch, the “baked
 query”
 feature just needs documentation, and I’m basically two or three tickets
 away from beta releases of 1.0. The “Baked” feature itself lives as an
 extension and if we really wanted, I could backport it into oslo.db as
 well
 so that it works against 0.9.

 So I’d ask that folks please hold off on any kind of migration from ORM to
 Core for performance reasons. I’ve spent the past several months adding
 features directly to SQLAlchemy that allow an ORM-based app to have routes
 to operations that perform just as fast as that of Core without a rewrite
 of
 code.

  The scheduler does a lot of things in the application, like filtering,
  that could be done on the DB level more efficiently. Why is it not done
  on the DB side?
 
  There are use cases where the scheduler would need to know even more
 data.
  Is there a plan for keeping `everything` in all scheduler processes'
 memory up-to-date?
  (Maybe zookeeper)
 
  The opposite way would be to move most operations into the DB side,
  since the DB already knows everything.
  (stored procedures?)
 
  Best Regards,
  Attila
 
 
  - Original Message -
  From: Rui Chen chenrui.m...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 4, 2015 4:51:07 AM
  Subject: [openstack-dev] [nova] blueprint about multiple workers
 supported   in nova-scheduler
 
  Hi all,
 
  I want to make it easy to launch a bunch of scheduler processes on a
  host;
  multiple scheduler workers will make use of the host's multiple
  processors and
  enhance the performance of nova-scheduler.
 
  I had registered a blueprint and commit a patch to implement it.
 
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
  This patch has been applied in our performance environment and passed
  some test cases, like concurrently booting multiple instances; so far
  we didn't find any inconsistency issues.
 
  IMO, nova-scheduler should be easy to scale horizontally; multiple
  workers should be supported as an out-of-the-box feature.
 
  Please feel free to discuss this feature, thanks.
 
  Best Regards
 
 
 
 __
  OpenStack Development Mailing List (not 

Re: [openstack-dev] [mistral] Break_on in Retry policy

2015-03-04 Thread Nikolay Makhotkin
Ok, we will proceed with error-on and success-on.

Thanks for the reply!

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Break_on in Retry policy

2015-03-04 Thread Thomas Hsiao
Hi,

I like the idea of explicit checks by success-on and error-on:
success-on to break retry and get SUCCESS state.
error-on to break retry but get ERROR state.

A single break-on seems confusing to me too.
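To make the two proposed semantics concrete, an illustrative sketch (not
Mistral's actual implementation; the names are made up):

    def next_state(task_result, success_on=None, error_on=None):
        # Decide what a retry policy does after one iteration.
        if success_on is not None and success_on(task_result):
            return 'SUCCESS'  # stop retrying; treat the task as successful
        if error_on is not None and error_on(task_result):
            return 'ERROR'    # stop retrying; keep the error state
        return 'RETRY'        # no condition matched: run another iteration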

Regards,

Thomas Hsiao
HP Cloud





Nikolay, thanks for sharing this…

I think that we really have a semantic ambiguity here. If we leave
just “break-on”, then both types of behavior have a right to exist.

I’m personally for defining that more explicitly with two separate
instructions “success-on” and “error-on”. A use case for “success-on”
may be, for example, checking the completion of another parallel task:
if it’s completed, then we can treat our task as successful, meaning
that we already got what’s required. I know it sounds a little bit
generic but it’s still meaningful to me.

Renat Akhmerov
@ Mirantis Inc.



 On 03 Mar 2015, at 19:47, Nikolay Makhotkin nmakhotkin at mirantis.com wrote:
 
 Hello,
 
 Recently we've found that the break_on property of RetryPolicy is
 not working now.
 
 I tried to solve this problem but faced another problem: how is
 'break_on' supposed to work?
 
 Will 'break_on' change the task state to ERROR or SUCCESS?
 If ERROR, it means 'we interrupt all retry iterations and keep the
 state which was before'.
 But if SUCCESS, it means 'we interrupt all retry iterations and
 assume SUCCESS state for the task because we caught this condition'.
 
 This is ambiguous.
 
 There is a suggestion to use not just 'break_on' but, say,
 other, more explicit properties which will remove this ambiguity.
 For example, 'success_on' and 'error_on'.
 
 Thoughts?
 
 -
 Best Regards,
 Nikolay
 @Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about boot-from-volume instance and flavor

2015-03-04 Thread Lingxian Kong
Option #1 is a bad idea, IMO. Think about how many extra_specs there are in
flavors today; you really don't want to create another dozen flavors
with root_gb=0 just for booting from volume.

For the billing concern, it really should be taken into account, but
it's not the business of OpenStack, IMHO.

I agree with Jay, option 2 will be good.

2015-03-05 13:58 GMT+08:00 Alex Xu sou...@gmail.com:


 2015-03-04 4:45 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 03/03/2015 01:10 AM, Rui Chen wrote:

 Hi all,

  When we boot an instance from volume, we find some ambiguous description
  about the flavor root_gb in the operations guide,
 http://docs.openstack.org/openstack-ops/content/flavors.html

 /Virtual root disk size in gigabytes. This is an ephemeral disk the base
 image is copied into. You don't use it when you boot from a persistent
 volume. /
 /The 0 size is a special case that uses the native base image size as
 the size of the ephemeral root volume./
 /
 /
  'You don't use it (root_gb) when you boot from a persistent volume.'
  Does this mean that we need to set the root_gb to 0 or not? I don't know.


 Hi Rui, I agree the documentation -- and frankly, the code in Nova -- is
 confusing around this area.

  But I found out that the root_gb will be added into local_gb_used of the
  compute_node so that it will impact the next scheduling. Think about a
 use case, the local_gb of compute_node is 10, boot instance from volume
 with the root_gb=5 flavor, in this case, I can only boot 2
 boot-from-volume instances on the compute_nodes, although these
 instances don't use the local disk of compute_nodes.

  I found a patch that tries to fix this issue,
 https://review.openstack.org/#/c/136284/

  I want to know which solution is better for you?

 Solution #1: boot instance from volume with the root_gb=0 flavor.
 Solution #2: add some special logic in order to correct the disk usage,
 like patch #136284


 Solution #2 is a better idea, IMO. There should not be any magic setting
 for root_gb that needs to be interpreted both by the user and the Nova code
 base.

 The issue with the 136284 patch is that it is trying to address the
 problem in the wrong place, IMHO.


 Emm, I'm thinking of one case. There are two flavors, one with root_gb=0 and
 another one with root_gb=10, and users choosing different flavors will pay
 different amounts of money.
 If a user chooses the flavor with root_gb=10 and then boots from volume, the
 user still needs to pay the extra money. That's quite strange. Either we
 should let the billing system distinguish whether the instance is booted
 from volume or not, or we just tell the user they made the wrong choice...
 For this case, should we choose
 solution #1?



 Best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Gary Kotton
+100...000

Very long overdue!

On 3/4/15, 10:23 PM, Maru Newby ma...@redhat.com wrote:

+1 from me, Ihar has been doing great work and it will be great to have
him finally able to merge!

 On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:
 
 I'd like to propose that we add Ihar Hrachyshka to the Neutron core
reviewer team. Ihar has been doing a great job reviewing in Neutron as
evidence by his stats [1]. Ihar is the Oslo liaison for Neutron, he's
been doing a great job keeping Neutron current there. He's already a
critical reviewer for all the Neutron repositories. In addition, he's a
stable maintainer. Ihar makes himself available in IRC, and has done a
great job working with the entire Neutron team. His reviews are
thoughtful and he really takes time to work with code submitters to
ensure his feedback is addressed.
 
 I'd also like to again remind everyone that reviewing code is a
responsibility, in Neutron the same as other projects. And core
reviewers are especially beholden to this responsibility. I'd also like
to point out and reinforce that +1/-1 reviews are super useful, and I
encourage everyone to continue reviewing code across Neutron as well as
the other OpenStack projects, regardless of your status as a core
reviewer on these projects.
 
 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar
to the core reviewer team.
 
 Thanks!
 Kyle
 
 [1] http://stackalytics.com/report/contribution/neutron-group/90
 
__________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a Neutron Core Reviewer

2015-03-04 Thread Vikram Choudhary
+1

From: Armando M. [mailto:arma...@gmail.com]
Sent: 05 March 2015 03:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Proposal to add Ihar Hrachyshka as a 
Neutron Core Reviewer

+1!

On 4 March 2015 at 22:29, Kevin Benton blak...@gmail.com wrote:

+1
On Mar 4, 2015 12:25 PM, Maru Newby ma...@redhat.com wrote:
+1 from me, Ihar has been doing great work and it will be great to have him 
finally able to merge!

 On Mar 4, 2015, at 11:42 AM, Kyle Mestery mest...@mestery.com wrote:

 I'd like to propose that we add Ihar Hrachyshka to the Neutron core reviewer 
 team. Ihar has been doing a great job reviewing in Neutron as evidence by his 
 stats [1]. Ihar is the Oslo liaison for Neutron, he's been doing a great job 
 keeping Neutron current there. He's already a critical reviewer for all the 
 Neutron repositories. In addition, he's a stable maintainer. Ihar makes 
 himself available in IRC, and has done a great job working with the entire 
 Neutron team. His reviews are thoughtful and he really takes time to work 
 with code submitters to ensure his feedback is addressed.

 I'd also like to again remind everyone that reviewing code is a 
 responsibility, in Neutron the same as other projects. And core reviewers are 
 especially beholden to this responsibility. I'd also like to point out and 
 reinforce that +1/-1 reviews are super useful, and I encourage everyone to 
 continue reviewing code across Neutron as well as the other OpenStack 
 projects, regardless of your status as a core reviewer on these projects.

 Existing Neutron cores, please vote +1/-1 on this proposal to add Ihar to the 
 core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/90
 __
 OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help in configuring keystone

2015-03-04 Thread Steve Martinelli
What do the keystone logs indicate?

Steve

Akshik DBK aks...@outlook.com wrote on 03/04/2015 02:18:47 AM:

 From: Akshik DBK aks...@outlook.com
 To: OpenStack Development Mailing List not for usage questions 
 openstack-dev@lists.openstack.org
 Date: 03/04/2015 02:25 AM
 Subject: Re: [openstack-dev] Need help in configuring keystone
 
 Hi Marek,
 
 I tried with the auto-generated shibboleth2.xml, just added the 
 application override attribute; now I'm stuck with a looping issue.
 
 When I access v3/OS-FEDERATION/identity_providers/idp_2/protocols/
 saml2/auth for the first time it prompts for username and 
 password; once provided, it goes into a loop.
 
 I could see the session generated at https://115.112.68.53:5000/
 Shibboleth.sso/Session:
 Miscellaneous
 Client Address: 121.243.33.212
 Identity Provider: https://idp.testshib.org/idp/shibboleth
 SSO Protocol: urn:oasis:names:tc:SAML:2.0:protocol
 Authentication Time: 2015-03-04T06:44:41.625Z
 Authentication Context Class: urn:oasis:names:tc:SAML:2.
 0:ac:classes:PasswordProtectedTransport
 Authentication Context Decl: (none)
 Session Expiration (barring inactivity): 479 minute(s)
 
 Attributes
 affiliation: mem...@testshib.org;st...@testshib.org
 entitlement: urn:mace:dir:entitlement:common-lib-terms
 eppn: mys...@testshib.org
 persistent-id: https://idp.testshib.org/idp/shibboleth!https://115.
 112.68.53/shibboleth!4Q6X4dS2MRhgTZOPTuL9ubMAcIM=
 unscoped-affiliation: Member;Staff
 Here are my config files:
 <SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
     xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" clockSkew="1800">
   <ApplicationDefaults entityID="https://115.112.68.53/shibboleth"
       REMOTE_USER="eppn">
     <Sessions lifetime="28800" timeout="3600"
         checkAddress="false" relayState="ss:mem"
         handlerSSL="true" cookieProps="; path=/; secure">
 
       <SSO entityID="https://idp.testshib.org/idp/shibboleth">
         SAML2 SAML1
       </SSO>
 
       <Logout>SAML2 Local</Logout>
 
       <Handler type="MetadataGenerator" Location="/Metadata"
           signing="false"/>
       <Handler type="Status" Location="/Status"/>
       <Handler type="Session" Location="/Session"
           showAttributeValues="true"/>
       <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
     </Sessions>
 
     <Errors supportContact="root@localhost"
         logoLocation="/shibboleth-sp/logo.jpg"
         styleSheet="/shibboleth-sp/main.css"/>
     <MetadataProvider type="XML"
         uri="https://www.testshib.org/metadata/testshib-providers.xml"
         backingFilePath="/tmp/testshib-two-idp-metadata.xml"
         reloadInterval="18" />
     <AttributeExtractor type="XML" validate="true"
         path="attribute-map.xml"/>
     <AttributeResolver type="Query" subjectMatch="true"/>
     <AttributeFilter type="XML" validate="true"
         path="attribute-policy.xml"/>
     <CredentialResolver type="File" key="sp-key.pem"
         certificate="sp-cert.pem"/>
     <ApplicationOverride id="idp_2"
         entityID="https://115.112.68.53/shibboleth">
       <!-- Sessions lifetime="28800" timeout="3600" checkAddress="false"
            relayState="ss:mem" handlerSSL="false" -->
       <Sessions lifetime="28800" timeout="3600" checkAddress="false"
           relayState="ss:mem" handlerSSL="true"
           cookieProps="; path=/; secure">
 
         <!-- Triggers a login request directly to the TestShib IdP. -->
         <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
           SAML2 SAML1
         </SSO>
         <Logout>SAML2 Local</Logout>
       </Sessions>
       <MetadataProvider type="XML"
           uri="https://www.testshib.org/metadata/testshib-providers.xml"
           backingFilePath="/tmp/testshib-two-idp-metadata.xml"
           reloadInterval="18" />
     </ApplicationOverride>
   </ApplicationDefaults>
   <SecurityPolicyProvider type="XML" validate="true"
       path="security-policy.xml"/>
   <ProtocolProvider type="XML" validate="true"
       reloadChanges="false" path="protocols.xml"/>
 </SPConfig>
 
 keystone-httpd:
 
 WSGIDaemonProcess keystone user=keystone group=nogroup processes=3 threads=10
 #WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1
 WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/cgi-bin/keystone/main/$1
 
 <VirtualHost *:5000>
     LogLevel  info
     ErrorLog  /var/log/keystone/keystone-apache-error.log
     CustomLog /var/log/keystone/ssl_access.log combined
     Options +FollowSymLinks
 
     SSLEngine on
     #SSLCertificateFile /etc/ssl/certs/mycert.pem
     #SSLCertificateKeyFile /etc/ssl/private/mycert.key
     SSLCertificateFile    /etc/apache2/ssl/server.crt
     SSLCertificateKeyFile /etc/apache2/ssl/server.key
     SSLVerifyClient optional
     SSLVerifyDepth 10
     SSLProtocol all -SSLv2
     SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
     SSLOptions +StdEnvVars +ExportCertData
 
     WSGIScriptAlias /  /var/www/cgi-bin/keystone/main

Re: [openstack-dev] [cinder] Decision related to cinder metadata

2015-03-04 Thread Duncan Thomas
So, one problem here is that 'metadata' in cinder has (at least) 3
meanings. Here is an attempt to clarify (which should be on some wiki page
somewhere, I'll add it once people tell me the meaning is clear).

Volume metadata:
 - This is set via the --metadata flag at create time, and there are
interfaces to add/edit/delete it for an existing volume
 - Arbitrary key - value pairs
 - Belonging to, and controlled by, tenants
 - This should *not* be given semantic meaning by drivers, it is there for
tenants to organise and label their volumes
- A few drivers (e.g. Solidfire, but there are others [1]) can read this for
semantic content. This behaviour exists for purely historical / legacy
reasons, and needs to be enabled in the config. This behaviour may be
removed in future. No new drivers should do this.
- Adding some sort of search / index of this is probably reasonable. The
glance team were talking about some sort of global metadata index, and this
would fit nicely there.

Boot metadata:
 - Called 'glance image metadata' in most of the code currently
 - Passed to nova during boot, serves the same purpose as glance metadata
in controlling the boot
- H/W parameters
- License keys
- etc
 - CRUD interface proposed (Dave Chen), reviews pending
- Needs to tie in with glance protected properties to avoid licence
systems being broken

Admin metadata:
 - Key/value pairs
 - Attached to each volume
 - Contains some driver specific stuff
 - Hidden from the tenant (might be exposed to admin)
 - No CRUD REST API currently (nor, I believe, has a good use case been put
forward for one)
 - There to avoid growing the volume table indefinitely
 - Semantic meaning, possibly just for a subset of drivers
 - Exact fields in regular flux
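
To make the distinctions concrete, a rough CLI sketch (flags from memory;
the boot-metadata command is the pending proposal mentioned above, so treat
it as hypothetical):

    # Volume metadata: tenant-owned labels, no semantic meaning for drivers
    cinder create --metadata tier=gold backup=daily 10
    cinder metadata <volume-id> set owner=web-team

    # Boot ("glance image") metadata: consumed by nova at boot time;
    # this CRUD interface is the proposed one, not yet merged
    cinder image-metadata <volume-id> set hw_disk_bus=scsi

    # Admin metadata: no REST CRUD at all -- touched only internally by
    # the volume manager and a subset of drivers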




Anybody got any questions, clarification, corrections or rants?




[1] I'm only picking on Solidfire because I know John has fairly thick skin
and so I'm unlikely to get a bunch of complaints from him for it, at least
not until I've got a beer in my hand. I hope.




On 3 March 2015 at 20:25, Sasikanth Eda sasikanth@in.ibm.com wrote:

 Hi Stackers,

 I am referring to one of the action item related to metadata discussed
 here;


 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html

 Can someone help with the final takeaway? (Sorry, I could not find any thread
 related to its decision after this meeting.)

 From the conversation I got the feeling that supporting operations via metadata
 needs to be avoided / corrected, and the operations / variations need to be
 provided via volume_types.

 Is this understanding correct ?

 Regards,
 Sasi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]Request to to revisit this patch

2015-03-04 Thread Zhangni
Hi Mike, Jay, and Walter,

Please revisit this patch https://review.openstack.org/#/c/151959/ and don't 
revert this, thank you very much!

I think it's appropriate to merge the SDSHypervisor driver in cinder first, and 
then to request nova to add a new libvirt volume driver.

Meanwhile, the nova side always asks whether the driver is merged into Cinder; 
please see my comments in nova spec https://review.openstack.org/#/c/130919/, thank 
you very much!



Best regards

ZhangNi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] CI Check Queue stuck

2015-03-04 Thread Andreas Jaeger
There has been a maintenance window in one of our providers' clouds and it
seems that some of the OpenStack CI infrastructure is not working as it should.
You will notice that Jenkins is not commenting on any changes submitted
to review.openstack.org; the check queue is currently stuck.

Please have patience until the maintenance window is over and the infra
team has taken care of our CI infrastructure so that it processes all
events again - this should happen during the US morning.

There's no need to recheck any changes that you have submitted, they
will not move forward.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Break_on in Retry policy

2015-03-04 Thread Renat Akhmerov
Nikolay, thanks for sharing this…

I think that we really have a semantic ambiguity here. If we keep just 
“break-on”, then both types of behavior have a right to exist.

I’m personally for defining that more explicitly with two separate instructions 
“success-on” and “error-on”. A use case for “success-on” may be, for example, 
checking the completion of another parallel task: if it’s completed, then we can 
treat our task as successful, meaning that we already got what’s required. I 
know it sounds a little bit generic, but it still seems meaningful to me.
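
For illustration, a hypothetical v2 DSL snippet of how the split could look
(the attribute names and expression syntax here are illustrative, not an
agreed design):

    task1:
      action: std.http url="http://example.com/jobs/42"
      retry:
        count: 10
        delay: 5
        # today: a single knob with an ambiguous final state
        break-on: <% $.status = 'CANCELLED' %>
        # proposed: two explicit knobs
        # success-on: <% $.status = 'DONE' %>      # stop retrying, task -> SUCCESS
        # error-on: <% $.status = 'CANCELLED' %>   # stop retrying, task -> ERROR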

Renat Akhmerov
@ Mirantis Inc.



 On 03 Mar 2015, at 19:47, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Hello,
 
 Recently we've found that the break_on property of RetryPolicy is not working.
 
 I tried to solve this problem but faced another problem: how is 
 'break_on' supposed to work?
 
 Will 'break_on' change the task state to ERROR or SUCCESS?
 If ERROR, it means 'we interrupt all retry iterations and keep the state 
 which was before'. 
 But if SUCCESS, it means 'we interrupt all retry iterations and assume the 
 SUCCESS state for the task because we caught this condition'.
 
 This is ambiguous.
 
 There is a suggestion to use not just 'break_on' but, say, other, more 
 explicit properties which will remove this ambiguity. For example, 
 'success_on' and 'error_on'.
 
 Thoughts?
 
 -
 Best Regards,
 Nikolay
 @Mirantis Inc.
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-04 Thread Kamil Sambor
Hi all,

IMHO deprecation warnings should be added only to commands that we have recently
changed (because users can switch to the new interface when they see the
deprecation warning); alternatively, solution #2 sounds OK but is not ideal
because people can forget about a warning that they saw in a previous release.
We also discussed a 4th solution: simply inform users about the
deprecation of the client and encourage them to use the fuel_v2 client with its
new commands and parameters.
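
To make option #2 concrete, a minimal sketch (hypothetical path and message,
not actual fuelclient code): print the accumulated notes once, then remember
that via a marker file and stay quiet afterwards.

    import os

    MARKER = os.path.expanduser('~/.config/fuel/deprecation-notes-shown')

    def maybe_show_deprecation_notes():
        # Only the very first invocation prints the notes.
        if os.path.exists(MARKER):
            return
        print('Note: the fuel CLI will change incompatibly in 7.0; '
              'see the changelog for the list of affected commands.')
        dirname = os.path.dirname(MARKER)
        if not os.path.isdir(dirname):
            os.makedirs(dirname)
        open(MARKER, 'w').close()  # mute all subsequent runs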

Best regards,
Kamil Sambor

On Wed, Mar 4, 2015 at 9:28 AM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:

 Maybe add a Changelog in the repo and maintain it?

 http://keepachangelog.com/

 Option #2 is OK but it can cause pain when testing -- upon each fresh
 installation from ISO we would get that message and it might break some
 tests, though that is fixable. Option #3 is OK too. #1 is the worst and I
 wouldn't do it.

 Or maybe display that info when showing all the commands (typing 'fuel'
 or 'fuel -h')? We already have a deprecation warning there concerning
 client/config.yaml, it is not very disturbing and shouldn't break any
 currently used automation scripts.

 P.


 On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
  Hi folks!
 
 
  According to the refactoring plan [1] we are going to release the 6.1
 version of python-fuelclient which is going to contain recent changes but
 will keep backwards compatibility with what was before. However, the next
 major release will bring users the fresh CLI that won’t be compatible with
 the old one and the new, actually usable IRL API library that also will be
 different.
 
  The issue this message is about is the fact that there is a strong need
 to let both CLI and API users know about those changes. At the moment I can see
 3 ways of resolving it:
 
  1. Show deprecation warning for commands and parameters which are going
 to be different. Log deprecation warnings for deprecated library methods.
  The problem with this approach is that the structure of both CLI and the
 library will be changed, so deprecation warning will be raised for mostly
 every command for the whole release cycle. That does not look very user
 friendly, because users will have to run all commands with --quiet for the
 whole release cycle to mute deprecation warnings.
 
  2. Show the list of the deprecated stuff and planned changes on the first
 run. Then mute it.
  The disadvantage of this approach is that there is a need to store the
 info about the first run in a file. However, it may be cleaned after the
 upgrade.
 
  3. The same as #2 but publish the warning online.
 
  I personally prefer #2, but I’d like to get more opinions on this topic.
 
 
  References:
 
  1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client
 
 
  - romcheg
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help in configuring keystone

2015-03-04 Thread Akshik DBK
Hi Steve,
here are the log details

== /var/log/shibboleth/shibd.log ==
2015-03-04 14:36:05 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:0.9.2342.19200300.100.1.1
2015-03-04 14:36:05 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.4
2015-03-04 14:36:05 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.3
2015-03-04 14:36:05 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.20
2015-03-04 14:36:05 INFO Shibboleth.SessionCache [2]: new session created: ID (_ee18a916d4e7e7adbc34f55c010695a4) IdP (https://idp.testshib.org/idp/shibboleth) Protocol(urn:oasis:names:tc:SAML:2.0:protocol) Address (121.243.33.212)

== /var/log/keystone/keystone-apache-error.log ==
[Wed Mar 04 14:36:05 2015] [info] Subsequent (No.8) HTTPS request received for child 7 (server 10.1.193.250:5000)
[Wed Mar 04 14:36:09 2015] [info] Subsequent (No.9) HTTPS request received for child 7 (server 10.1.193.250:5000)

== /var/log/shibboleth/shibd.log ==
2015-03-04 14:36:09 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:0.9.2342.19200300.100.1.1
2015-03-04 14:36:09 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.4
2015-03-04 14:36:09 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.3
2015-03-04 14:36:09 INFO Shibboleth.AttributeExtractor.XML [2]: skipping unmapped SAML 2.0 Attribute with Name: urn:oid:2.5.4.20
2015-03-04 14:36:09 INFO Shibboleth.SessionCache [2]: new session created: ID (_10d6c414a9f198b6601b5d4f36a9057a) IdP (https://idp.testshib.org/idp/shibboleth) Protocol(urn:oasis:names:tc:SAML:2.0:protocol) Address (121.243.33.212)

== /var/log/keystone/keystone-apache-error.log ==
[Wed Mar 04 14:36:09 2015] [info] Subsequent (No.10) HTTPS request received for child 7 (server 10.1.193.250:5000)
[Wed Mar 04 14:36:14 2015] [info] [client 121.243.33.212] (70007)The timeout specified has expired: SSL input filter read failed.
[Wed Mar 04 14:36:14 2015] [info] [client 121.243.33.212] Connection closed to child 7 with standard shutdown (server 10.1.193.250:5000)

To: openstack-dev@lists.openstack.org
From: steve...@ca.ibm.com
Date: Wed, 4 Mar 2015 03:04:52 -0500
Subject: Re: [openstack-dev] Need help in configuring keystone

What do the keystone logs indicate?



Steve



Akshik DBK aks...@outlook.com wrote on 03/04/2015 02:18:47 AM:

 From: Akshik DBK aks...@outlook.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 03/04/2015 02:25 AM
 Subject: Re: [openstack-dev] Need help in configuring keystone

 Hi Marek,

 I tried with the auto-generated shibboleth2.xml, just added the
 application override attribute; now I'm stuck with a looping issue.

 When I access v3/OS-FEDERATION/identity_providers/idp_2/protocols/
 saml2/auth for the first time it prompts for username and
 password; once provided, it goes into a loop.

 I can see a session generated at
 https://115.112.68.53:5000/Shibboleth.sso/Session

 Miscellaneous
 Client Address: 121.243.33.212
 Identity Provider: https://idp.testshib.org/idp/shibboleth
 SSO Protocol: urn:oasis:names:tc:SAML:2.0:protocol
 Authentication Time: 2015-03-04T06:44:41.625Z
 Authentication Context Class: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
 Authentication Context Decl: (none)
 Session Expiration (barring inactivity): 479 minute(s)

 Attributes
 affiliation: mem...@testshib.org;st...@testshib.org
 entitlement: urn:mace:dir:entitlement:common-lib-terms
 eppn: mys...@testshib.org
 persistent-id: https://idp.testshib.org/idp/shibboleth!https://115.112.68.53/shibboleth!4Q6X4dS2MRhgTZOPTuL9ubMAcIM=
 unscoped-affiliation: Member;Staff

 Here are my config files:

 <SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
           xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" clockSkew="1800">
     <ApplicationDefaults entityID="https://115.112.68.53/shibboleth"
                          REMOTE_USER="eppn">
         <Sessions lifetime="28800" timeout="3600"
                   checkAddress="false" relayState="ss:mem"
                   handlerSSL="true" cookieProps="; path=/; secure">

             <SSO entityID="https://idp.testshib.org/idp/shibboleth">
                 SAML2 SAML1
             </SSO>

             <Logout>SAML2 Local</Logout>

             <Handler type="MetadataGenerator" Location="/Metadata"
                      signing="false"/>
             <Handler type="Status" Location="/Status"/>
             <Handler type="Session" Location="/Session"
                      showAttributeValues="true"/>
             <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
         </Sessions>

         <Errors supportContact="root@localhost"
                 logoLocation="/shibboleth-sp/logo.jpg"
                 styleSheet="/shibboleth-sp/main.css"/>

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-04 Thread Thierry Carrez
James Bottomley wrote:
 On Tue, 2015-03-03 at 11:59 +0100, Thierry Carrez wrote:
 James Bottomley wrote:
 Actually, this is possible: look at Linux, it freezes for 10 weeks of a
 12 month release cycle (or 6 weeks of an 8 week one).  More on this
 below.

 I'd be careful with comparisons with the Linux kernel. First it's a
 single bit of software, not a collection of interconnected projects.
 
 Well, we do have interconnection: the kernel on its own doesn't do
 anything without a userspace.  The theory was that we didn't have to be
 like BSD (coupled user space and kernel) and we could rely on others
 (principally the GNU project in the early days) to provide the userspace
 and that we could decouple kernel development from the userspace
 releases.  Threading models were, I think, the biggest challenges to
 this assumption, but we survived.

Right. My point there is that you only release one thing. We release a
lot more pieces. There is (was?) downstream value in coordinating those
releases, which is a factor in our ability to do it more often than
twice a year.

 Second it's at a very different evolution/maturity point (20 years old
 vs. 0-4 years old for OpenStack projects).
 
 Yes, but I thought I covered this in the email: you can see that at the
 4 year point in its lifecycle, the kernel was behaving very differently
 (and in fact more similar to OpenStack).  The question I thought was
 still valid is whether anything was learnable from the way the kernel
 evolved later.  I think the key issue, which you seem to have in
 OpenStack is that the separate develop/stabilise phases caused
 frustration to build up in our system which (nine years later) led the
 kernel to adopt the main branch stabilisation with overlapping subsystem
 development cycle.

I agree with you: the evolution the kernel went through is almost a
natural law, and I know we won't stay in the current model forever. I'm
just not sure we have reached the level of general stability that makes
it possible to change *just now*. I welcome brainstorming and discussion
on future evolutions, though, and intend to lead a cross-project session
discussion on that in Vancouver.

  Finally it sits at a
 different layer, so there is less need for documentation/translations to
 be shipped with the software release.
 
 It's certainly a lot less than you, but we have the entire system call
 man pages.  It's an official project of the kernel:
 
 https://www.kernel.org/doc/man-pages/
 
 And we maintain translations for it
 
 https://www.kernel.org/doc/man-pages/translations.html

By translations I meant strings in the software itself, not doc
translations. We don't translate docs upstream either :) I guess we
could drop those (and/or downstream them in a way) if that was the last
thing holding up adding more agility.

So in summary, yes we can (and do) learn from kernel history, but those
projects are sufficiently different that the precise timeframes and
numbers can't really be compared. Apples and oranges are both fruits
which mature (and rot if left unchecked), but they evolve at different
speeds :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-04 Thread Christopher Yeoh
On Tue, 03 Mar 2015 10:28:34 -0500
Sean Dague s...@dague.net wrote:

 On 03/03/2015 10:24 AM, Claudiu Belu wrote:
  Hello.
  
  I've talked with Christopher Yeoh yesterday and I've asked him
  about the microversions and when will they be able to merge. He
  said that for now, this commit had to get in before any other
  microversions: https://review.openstack.org/#/c/159767/
  
  He also said that he'll double check everything, and if everything
  is fine, the first microversions should be getting in soon after.
  

  Best regards,
  
  Claudiu Belu
 
 I just merged that one this morning, so hopefully we can dislodge.


So just before we merged the keypairs microversion change, someone
enabled response schema tests which showed some further problems. They
have now all been fixed, but https://review.openstack.org/#/c/161112/
needs just one more +2. After that, the first api change which uses
microversions, https://review.openstack.org/#/c/140313/, can merge (it has
3 +2's, it just needs the v2.1 fix first).

 
  
  
  From: Alexandre Levine [alev...@cloudscaling.com]
  Sent: Tuesday, March 03, 2015 4:22 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] what's the merge plan for
  current proposed microversions?
  
  Bump.
  
  I'd really appreciate some answers to the question Sean asked. I
  still have the 2.4 in my review (the very one Sean mentioned) but
  it seems that it might not be the case.
  
  Best regards,
 Alex Levine
  
  On 3/2/15 2:30 PM, Sean Dague wrote:
  This change for the additional attributes for ec2 looks like it's
  basically ready to go, except it has the wrong microversion on it
  (as they anticipated the other changes landing ahead of them) -
  https://review.openstack.org/#/c/155853
 
  What's the plan for merging the outstanding microversions? I
  believe we're all conceptually approved on all them, and it's an
  important part of actually moving forward on the new API. It seems
  like we're in a bit of a holding pattern on all of them right now,
  and I'd like to make sure we start merging them this week so that
  they have breathing space before the freeze.
 

So the plan for assignment of microversion api numbers is the same as
what we currently do for db migration changes - take the next one
knowing that you may need to rebase if someone else merges before you.
Other suggestions welcome but will have to follow the requirement that
they always merge in version order.



-Sean
 
  
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-04 Thread Przemyslaw Kaminski
Maybe add a Changelog in the repo and maintain it?

http://keepachangelog.com/

Option #2 is OK but it can cause pain when testing -- upon each fresh
installation from ISO we would get that message and it might break some
tests, though that is fixable. Option #3 is OK too. #1 is the worst and I
wouldn't do it.

Or maybe display that info when showing all the commands (typing 'fuel'
or 'fuel -h')? We already have a deprecation warning there concerning
client/config.yaml, it is not very disturbing and shouldn't break any
currently used automation scripts.

P.


On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
 Hi folks!
 
 
 According to the refactoring plan [1] we are going to release the 6.1 version 
 of python-fuelclient which is going to contain recent changes but will keep 
 backwards compatibility with what was before. However, the next major release 
 will bring users the fresh CLI that won’t be compatible with the old one and 
 the new, actually usable IRL API library that also will be different.
 
 The issue this message is about is the fact that there is a strong need to 
 let both CLI and API users know about those changes. At the moment I can see 3 
 ways of resolving it:
 
 1. Show deprecation warning for commands and parameters which are going to be 
 different. Log deprecation warnings for deprecated library methods.
 The problem with this approach is that the structure of both CLI and the 
 library will be changed, so deprecation warning will be raised for mostly 
 every command for the whole release cycle. That does not look very user 
 friendly, because users will have to run all commands with --quiet for the 
 whole release cycle to mute deprecation warnings.
 
 2. Show the list of the deprecated stuff and planned changes on the first run. 
 Then mute it.
 The disadvantage of this approach is that there is a need to store the info 
 about the first run in a file. However, it may be cleaned after the upgrade.
 
 3. The same as #2 but publish the warning online.
 
 I personally prefer #2, but I’d like to get more opinions on this topic.
 
 
 References:
 
 1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client
 
 
 - romcheg
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy] Bug squashing day

2015-03-04 Thread Sumit Naiksatam
Thanks to the entire team for participating today; we made very good
progress with knocking off a number of long-standing bugs. We will
also be cutting a new stable/juno release towards the end of this week
since we ended up backporting quite a few fixes.

On Sat, Feb 28, 2015 at 10:56 AM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:
 Hi, Per our discussion in the last weekly IRC meeting, we will have
 the bug squashing day for GBP on Tuesday, March 3rd. We will
 coordinate activities over the #openstack-gbp channel. Please join and
 participate.

 Thanks,
 ~Sumit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [Policy][Group-based-policy] SFC USe Case

2015-03-04 Thread Venkatrangan G - ERS, HCL Tech
Hi,
 I am trying out the procedure listed on this page: 
https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallODLIntegrationDevstack. 
I was looking for a use case with Service Function Chaining, i.e. using the 
redirect action with the gbp command.
Can you please point me to a link that can help me try the same with 
GBP?

Regards,
Venkat G.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Attila Fazekas
Hi,

I wonder what the planned future of scheduling is.

The scheduler issues a lot of queries that touch a high number of fields,
which is CPU-expensive when you are using the SQLAlchemy ORM.
Has anyone tried to switch those operations to SQLAlchemy Core?

The scheduler also does a lot of things in the application, like filtering, 
that could be done more efficiently at the DB level. Why is that not done
on the DB side? 

There are use cases where the scheduler would need to know even more data.
Is there a plan for keeping `everything` in every scheduler process's memory 
up-to-date?
(Maybe ZooKeeper.)

The opposite way would be to move most operations to the DB side,
since the DB already knows everything. 
(Stored procedures?)
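
To illustrate the ORM-vs-Core point, a sketch using modern SQLAlchemy and a
hypothetical cut of the compute_nodes table (not the actual nova code):

    from sqlalchemy import create_engine, MetaData, Table, select

    engine = create_engine('mysql+pymysql://nova:nova@localhost/nova')
    metadata = MetaData()
    # Reflect the existing table instead of going through ORM models
    compute_nodes = Table('compute_nodes', metadata, autoload_with=engine)

    # Core returns plain row tuples: no identity map, no object hydration,
    # so fetching many wide rows is far cheaper than via the ORM.
    stmt = (select(compute_nodes.c.hypervisor_hostname,
                   compute_nodes.c.vcpus,
                   compute_nodes.c.memory_mb,
                   compute_nodes.c.free_ram_mb)
            .where(compute_nodes.c.free_ram_mb > 2048))  # filter in the DB

    with engine.connect() as conn:
        for row in conn.execute(stmt):
            pass  # feed rows straight into the filtering logic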

Best Regards,
Attila


- Original Message -
 From: Rui Chen chenrui.m...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, March 4, 2015 4:51:07 AM
 Subject: [openstack-dev] [nova] blueprint about multiple workers supported
 in nova-scheduler
 
 Hi all,
 
 I want to make it easy to launch a bunch of scheduler processes on a host;
 multiple scheduler workers will make use of the host's multiple processors and
 enhance the performance of nova-scheduler.
 
 I had registered a blueprint and commit a patch to implement it.
 https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support
 
 This patch has been applied in our performance environment and passes some test
 cases, like concurrent booting of multiple instances; so far we haven't found
 any inconsistency issues.
 
 IMO, nova-scheduler should be scalable horizontally in an easy way; the
 multiple workers should be supported as an out-of-the-box feature.
 
 Please feel free to discuss this feature, thanks.
 
 Best Regards
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] blueprint about multiple workers supported in nova-scheduler

2015-03-04 Thread Sylvain Bauza


Le 04/03/2015 04:51, Rui Chen a écrit :

Hi all,

I want to make it easy to launch a bunch of scheduler processes on a 
host; multiple scheduler workers will make use of the host's multiple 
processors and enhance the performance of nova-scheduler.


I had registered a blueprint and commit a patch to implement it.
https://blueprints.launchpad.net/nova/+spec/scheduler-multiple-workers-support

This patch has been applied in our performance environment and passes some 
test cases, like concurrent booting of multiple instances; so far we 
haven't found any inconsistency issues.


IMO, nova-scheduler should be scalable horizontally in an easy way; the 
multiple workers should be supported as an out-of-the-box feature.


Please feel free to discuss this feature, thanks.



As I said when reviewing your patch, I think the problem is not just 
making sure that the scheduler is thread-safe, it's more about how the 
Scheduler accounts for resources and provides a retry if the consumed 
resources are higher than what's available.


Here, the main problem is that two workers can actually consume two 
distinct resources on the same HostState object. In that case, the 
HostState object is decremented by the number of taken resources for both 
(modulo what a resource which is not an integer means...), but nowhere 
in that section does it check whether it oversubscribes the resource usage. 
As I said, it's not just about decorating with a semaphore, it's more about 
rethinking how the Scheduler is managing its resources.
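
A toy illustration of the race (hypothetical and heavily simplified -- the
real HostState tracks many resource types):

    class HostState(object):
        def __init__(self, free_ram_mb):
            self.free_ram_mb = free_ram_mb

        def consume_from_instance(self, ram_mb):
            # Decrements unconditionally: nothing verifies that
            # free_ram_mb stays >= 0, which is the oversubscription hole.
            self.free_ram_mb -= ram_mb

    host = HostState(free_ram_mb=2048)
    # Two workers each picked this host for a 2048 MB instance:
    host.consume_from_instance(2048)  # worker A: free_ram_mb -> 0
    host.consume_from_instance(2048)  # worker B: free_ram_mb -> -2048, never rejected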



That's why I'm -1 on your patch until [1] gets merged. Once this BP is 
implemented, we will have a set of classes for managing heterogeneous 
types of resources and consuming them, so it would be quite easy to provide 
a check against them in the consume_from_instance() method.


-Sylvain

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/resource-objects.html


Best Regards



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-04 Thread Roman Prykhodchenko
I’d like to resolve some questions:

@Przemysław:
 - We can avoid that message by supplying --quiet.
 - The Changelog is currently managed automatically by PBR, so as soon as there is a 
release there will be a change log.
 - I think #2 can be done along with #3

@Kamil:
 - The issue is that it’s not possible to release the new commands in this release 
because it would immediately make the CLI incompatible. For 7.0 there is a plan 
to get rid of the old CLI completely and replace it with a Cliff-based one. I 
agree that people may forget the deprecation warning before the 7.1 ISO is 
available, but that is partially solvable by a changelog. Besides, 
python-fuelclient-7.0 will be available on PyPI much earlier than the 7.0 ISO is 
released.
 - ^ is basically the reason why we cannot use #4, because there will be 
nothing new to use, at least in the 6.1 ISO. Keeping both CLIs in the source 
tree will create more mess and will be terribly hard to test.


- romcheg

 On 4 Mar 2015, at 10:11, Kamil Sambor ksam...@mirantis.com wrote:
 
 Hi all,
 
  IMHO deprecation warnings should be added only to commands that we have recently 
  changed (because users can switch to the new interface when they see the 
  deprecation warning); alternatively, solution #2 sounds OK but is not ideal because 
  people can forget about a warning that they saw in a previous release. We also 
  discussed a 4th solution: simply inform users about the deprecation of the client 
  and encourage them to use the fuel_v2 client with its new commands and parameters.
 
 Best regards,
 Kamil Sambor
 
 On Wed, Mar 4, 2015 at 9:28 AM, Przemyslaw Kaminski pkamin...@mirantis.com 
 wrote:
 Maybe add a Changelog in the repo and maintain it?
 
 http://keepachangelog.com/
 
  Option #2 is OK but it can cause pain when testing -- upon each fresh
  installation from ISO we would get that message and it might break some
  tests, though that is fixable. Option #3 is OK too. #1 is the worst and I
  wouldn't do it.
 
 Or maybe display that info when showing all the commands (typing 'fuel'
 or 'fuel -h')? We already have a deprecation warning there concerning
 client/config.yaml, it is not very disturbing and shouldn't break any
 currently used automation scripts.
 
 P.
 
 
 On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
  Hi folks!
 
 
  According to the refactoring plan [1] we are going to release the 6.1 
  version of python-fuelclient which is going to contain recent changes but 
  will keep backwards compatibility with what was before. However, the next 
  major release will bring users the fresh CLI that won’t be compatible with 
  the old one and the new, actually usable IRL API library that also will be 
  different.
 
   The issue this message is about is the fact that there is a strong need to 
   let both CLI and API users know about those changes. At the moment I can see 3 
   ways of resolving it:
 
  1. Show deprecation warning for commands and parameters which are going to 
  be different. Log deprecation warnings for deprecated library methods.
  The problem with this approach is that the structure of both CLI and the 
  library will be changed, so deprecation warning will be raised for mostly 
  every command for the whole release cycle. That does not look very user 
  friendly, because users will have to run all commands with --quiet for the 
  whole release cycle to mute deprecation warnings.
 
   2. Show the list of the deprecated stuff and planned changes on the first 
   run. Then mute it.
   The disadvantage of this approach is that there is a need to store the 
   info about the first run in a file. However, it may be cleaned after the 
   upgrade.
 
  3. The same as #2 but publish the warning online.
 
  I personally prefer #2, but I’d like to get more opinions on this topic.
 
 
  References:
 
  1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client
 
 
  - romcheg
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-04 Thread Kamil Sambor
@romcheg
- the idea is to switch partially to the new client, keeping one package with
two entry points: fuel and fuel_v2. It will be convenient for us to add new
commands to fuel_v2 only and slowly switch old commands to the new version,
adding warnings to the old client's commands. It will give users time to switch
to the new client and it will be easy for us to migrate only the old commands.
Now when we add a new command we add it to the old client and then in the future
still need to migrate it. So keeping two entry points for fuel-client IMHO will
be convenient for developers and for users.
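
A sketch of what the dual entry points could look like in python-fuelclient's
setup.cfg -- the module paths are illustrative; fuel stays the frozen legacy
CLI, fuel_v2 is the new Cliff-based one:

    [entry_points]
    console_scripts =
        fuel = fuelclient.cli.parser:main
        fuel_v2 = fuelclient.main:main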

Best regards,
Kamil Sambor

On Wed, Mar 4, 2015 at 10:54 AM, Roman Prykhodchenko m...@romcheg.me wrote:

 I’d like to resolve some questions:

 @Przemysław:
  - We can avoid that message by supplying --quiet.
  - Changelog is currently managed automatically by PBR so as soon as there
 is a release there will be a change log
  - I think #2 can be done along with #3

 @Kamil:
  - The issue is that it’s not possible to release the new commands in this release
 because it would immediately make the CLI incompatible. For 7.0 there is a
 plan to get rid of the old CLI completely and replace it with a Cliff-based
 one. I agree that people may forget the deprecation warning before the 7.1 ISO
 is available, but that is partially solvable by a changelog. Besides,
 python-fuelclient-7.0 will be available on PyPI much earlier than the 7.0 ISO
 is released.
  - ^ is basically the reason why we cannot use #4, because there will be
 nothing new to use, at least in the 6.1 ISO. Keeping both CLIs in the
 source tree will create more mess and will be terribly hard to test.


 - romcheg

  On 4 Mar 2015, at 10:11, Kamil Sambor ksam...@mirantis.com wrote:
 
  Hi all,
 
  IMHO deprecation warnings should be added only to commands that we have
 recently changed (because users can switch to the new interface when they see
 the deprecation warning); alternatively, solution #2 sounds OK but is not ideal
 because people can forget about a warning that they saw in a previous release.
 We also discussed a 4th solution: simply inform users about the
 deprecation of the client and encourage them to use the fuel_v2 client with its
 new commands and parameters.
 
  Best regards,
  Kamil Sambor
 
  On Wed, Mar 4, 2015 at 9:28 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
  Maybe add a Changelog in the repo and maintain it?
 
  http://keepachangelog.com/
 
  Option #2 is OK but it can cause pain when testing -- upon each fresh
  installation from ISO we would get that message and it might break some
  tests, though that is fixable. Option #3 is OK too. #1 is the worst and I
  wouldn't do it.
 
  Or maybe display that info when showing all the commands (typing 'fuel'
  or 'fuel -h')? We already have a deprecation warning there concerning
  client/config.yaml, it is not very disturbing and shouldn't break any
  currently used automation scripts.
 
  P.
 
 
  On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
   Hi folks!
  
  
   According to the refactoring plan [1] we are going to release the 6.1
 version of python-fuelclient which is going to contain recent changes but
 will keep backwards compatibility with what was before. However, the next
 major release will bring users the fresh CLI that won’t be compatible with
 the old one and the new, actually usable IRL API library that also will be
 different.
  
   The issue this message is about is the fact that there is a strong
 need to let both CLI and API users know about those changes. At the moment I can
 see 3 ways of resolving it:
  
   1. Show deprecation warning for commands and parameters which are
 going to be different. Log deprecation warnings for deprecated library
 methods.
   The problem with this approach is that the structure of both CLI and
 the library will be changed, so deprecation warning will be raised for
 mostly every command for the whole release cycle. That does not look very
 user friendly, because users will have to run all commands with --quiet for
 the whole release cycle to mute deprecation warnings.
  
   2. Show the list of the deprecated stuff and planned changes on the
 first run. Then mute it.
   The disadvantage of this approach is that there is a need to store
 the info about the first run in a file. However, it may be cleaned after
 the upgrade.
  
   3. The same as #2 but publish the warning online.
  
   I personally prefer #2, but I’d like to get more opinions on this
 topic.
  
  
   References:
  
   1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client
  
  
   - romcheg
  
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  

Re: [openstack-dev] [devstack]Specific Juno version

2015-03-04 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/03/2015 04:42 PM, Eduard Matei wrote:
 Hi,
 
 Is there a way to specify the Juno version to be installed using
 devstack? For now we can only specify stable/juno (*git clone -b
 stable/juno https://github.com/openstack-dev/devstack.git 
 /opt/stack/devstack*) but this installs 2014.2.3, which appears to be
 still under development (it gives some errors).

Can you elaborate which errors you get?
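
For what it's worth, individual components can usually be pinned via the
*_BRANCH variables in devstack's local.conf/localrc, which accept any git
ref, including release tags -- a sketch, untested:

    [[local|localrc]]
    CINDER_BRANCH=2014.2.2
    NOVA_BRANCH=2014.2.2
    KEYSTONE_BRANCH=2014.2.2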

(Also, please don't send disclaimers like at the bottom of your email
to public lists.)

 How can we specify 2014.2.2 for components (e.g. cinder)?
 
 Thanks, Eduard
 
 --
 
 *Eduard Biceri Matei, Senior Software Developer* 
 www.cloudfounders.com http://www.cloudfounders.com/ |
 eduard.ma...@cloudfounders.com
 mailto:eduard.ma...@cloudfounders.com __ __
 
 __ __ *CloudFounders, The Private Cloud Software Company* __
 __ Disclaimer: This email and any files transmitted with it are
 confidential and intended solely for the use of the individual or
 entity to whom they are addressed. If you are not the named
 addressee or an employee or agent responsible for delivering this
 message to the named addressee, you are hereby notified that you
 are not authorized to read, print, retain, copy or disseminate this
 message or any part of it. If you have received this email in error
 we request you to notify us by reply e-mail and to delete all
 electronic files of the message. If you are not the intended 
 recipient you are notified that disclosing, copying, distributing
 or taking any action in reliance on the contents of this
 information is strictly prohibited. E-mail transmission cannot be
 guaranteed to be secure or error free as information could be
 intercepted, corrupted, lost, destroyed, arrive late or incomplete,
 or contain viruses. The sender therefore does not accept liability
 for any errors or omissions in the content of this message, and
 shall have no liability for any loss or damage suffered by the
 user, which arise as a result of e-mail transmission.
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU9vaCAAoJEC5aWaUY1u57qg8IALtTLMTSm/JxqQkK+xYSnCRo
b17tGv3ECsV3zdTVsrL+GCglMWhRYzVtryaRFogTqm11Z1tylmZmNq+Xq70d/KTC
duwMHwQMP4ySlaOmzdt8RB6hwwTEo+47T5Gsftaed2H+QVcSI6Dtb5VjlWTf0gTC
i8b8DpKIQh91qEOf7fterGfRE5S0lnP4RuFA6ihcB58zQrw5BFNj5Bg/PUpVOjeP
AYEC0wmdhaIdYQjE703Nq1ialBl1qWW6HEPQMgfBV8dQfsBJ/uVP1dP+m/AJuCna
TfzJzWne0oUpUhOn2N1+jjjogdgXvXDDHq7bwSb6jsDWEjKzZaUEs6SSV5bg8AM=
=yn6M
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-04 Thread Alexis Lee
John Griffith john.griffi...@gmail.com writes:
 ​Should we just rename this thread to Sensitivity training for
 contributors?

Culture plays a role in shaping one's expectations here. I was lucky
enough to grow up in open source culture, so I can identify an automated
response easily and I don't take it too seriously. Not everyone has the
same culture, and although I agree we need to confront these gaps when
they impede us, it's more constructive to reach out and bridge the gap
than to blame the outsider.


James E. Blair said on Tue, Mar 03, 2015 at 07:49:03AM -0800:
 If I had to deprioritize something I was working on and it was
 auto-abandoned, I would not find out.

You should receive messages from Gerrit about this. If you've made the
choice to disable or delete those messages, you should expect not to be
notified. The review dropping out of the active review queue in your
personal dashboard is a problem though - an email can be forgotten.

For what little it's worth, I think having a community-wide definition
of inactive and flagging that in the system makes sense. This helps us
maintain a clear and consistent policy in our tools. However, I've come
around to see that abandon semantics don't fit that flag. We need to
narrow in on what inactive really means and what effects the flag should
have. We may need two flags, one for 'needs submitter attention' and one
for 'needs review attention'.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-04 Thread Flavio Percoco

On 03/03/15 16:10 +, Nikhil Komawar wrote:

If it was not clear in my previous message, I would like to again emphasize
that I truly appreciate the vigor and intent behind Flavio's proposal. We need
to be proactive and keep making the community better in such regards.


However, at the same time we need to act fairly, with patience and have a
friendly strategy for doing the same (thus maintaining a good balance in our
progress). I should probably respond to another thread on ML mentioning my
opinion that the community's success depends on trust and empathy and
everyone's intent as well as effort in maintaining these principles. Without
them, it will not take very long to make the situation chaotic.


I'm sorry but no. I don't think there's anything that requires more
patience than 2 (or even more) cycles without providing reviews or
any other kind of active contribution.

I personally don't think adding new cores without cleaning up that
list is something healthy for our community, which is what we're
trying to improve here. Therefore I'm still -2-W on adding new folks
without removing non-active core members.


The questions I posed are still unanswered:

There are a few members who have been relatively inactive this cycle in terms
of reviews and have been missed in Flavio's list (that list is not
comprehensive). On what basis have some of them been left out, and if we do
not have a strong reason, are we being fair? Again, I would like to emphasize
that cleaning the list in such proportions at this point in time does NOT
look like an OK strategy to me.


The list contains the names of people that have not provided *any* kind
of review in the last 2 cycles. If there are folks in that list that
you think shouldn't be there, please bring them up now. If there are
folks you think *should* be in that list, please bring them up now.

There's nothing impolite in what's being discussed here. The proposal
is based on the fact that these folks seem to be focused on different
things now, and that's perfectly fine.

As I mentioned in my first email, we're not questioning their
knowledge but their focus and they are more than welcome to join
again.

I do not think *counting* the stats of everyone makes sense here,
we're not competing on who reviews more patches. That's nonsense.
We're just trying to keep the list of folks that will have the power
to approve patches short.


To answer your concerns: (Why was this not proposed earlier in the cycle?)


[snip] ?


The essence of the matter is:

We need to change the dynamics slowly and with patience for maintaining a good
balance.


As I mentioned above, I don't think we're being impatient. As a matter
of fact, some of these folks haven't been around in *years* so, pardon
my stubbornness, but I believe we have been way too patient and I'd have
loved for these folks to step down themselves.

I infinitely thank these folks for their past work and efforts (and hopefully
future work too) but I think it's time for us to have a clearer view
of who's working on the project.

As a last note, it's really important to have the list of members
updated; some folks rely on it to know who the contacts are for some
projects.

Flavio


Best,
-Nikhil
━━━
From: Kuvaja, Erno kuv...@hp.com
Sent: Tuesday, March 3, 2015 9:48 AM
To: OpenStack Development Mailing List (not for usage questions); Daniel P.
Berrange
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.


Nikhil,



If I recall correctly this matter was discussed last time at the start of the
L-cycle, and at that time we agreed to see if there was a change of pattern
later in the cycle. There has not been one, and I do not see a reason to postpone
this again just for the courtesy of it, in the hope that some of our older cores
happen to make a review or two.



I think Flavio’s proposal combined with the new members would be the right way
to reinforce the momentum we’ve gained in Glance over the past few months. I think
it’s also the right message to send out to the new cores (including you and
myself ;) ): activity is the key to maintaining such status.



-  Erno



From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 03 March 2015 04:47
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.



Hi all,



After having thoroughly thought about the proposed rotation and evaluating the
pros and cons of the same at this point of time, I would like to make an
alternate proposal.



New Proposal:

1. We should go ahead with adding more core members now.
2. Come up with a plan and give additional notice for the rotation. Get it
   implemented one month into Liberty.

Reasoning:



Traditionally, the Glance program did not implement rotation. This was probably
with good reason, as the program was small and the developers were
working closely 

Re: [openstack-dev] [keystone] Output on stderr

2015-03-04 Thread David Stanek
On Wed, Mar 4, 2015 at 6:50 AM, Abhishek Talwar/HYD/TCS 
abhishek.tal...@tcs.com wrote:

 While working on a bug for keystoneclient I have replaced sys.exit with
 return. However, the code reviewers want that the output should be on
 stderr(as sys.exit does). So how can we get the output on stderr.


The print function allows you to specify a file:

 from __future__ import print_function
 import sys
 print('something to', file=sys.stderr)

The __future__ import is needed for Python 2.6/2.7 because print was
changed to a function in Python 3.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-04 Thread Alexandre Levine

Christopher,

Does this

So the plan for assignment of microversion api numbers is the same as
what we currently do for db migration changes - take the next one
knowing that you may need to rebase if someone else merges before you

mean that I should put 2.2 in my review for now instead of 2.4?

Another suggestion would be to pack several simultaneously incoming 
changes into one microversion. Maybe release them once a week, or once every 
couple of weeks, or even at a longer interval, depending on the amount of 
incoming changes. For example, wouldn't it be convenient for clients to 
acquire all of the accumulated changes in one microversion (2.2 as I 
understand it) without needing to understand which one came with what 
number? To clarify, I'm suggesting to pass reviews for all of the 
pending API changes against the 2.2 version.


Best regards,
  Alex Levine

On 3/4/15 11:44 AM, Christopher Yeoh wrote:

On Tue, 03 Mar 2015 10:28:34 -0500
Sean Dague s...@dague.net wrote:


On 03/03/2015 10:24 AM, Claudiu Belu wrote:

Hello.

I've talked with Christopher Yeoh yesterday and I've asked him
about the microversions and when will they be able to merge. He
said that for now, this commit had to get in before any other
microversions: https://review.openstack.org/#/c/159767/

He also said that he'll double check everything, and if everything
is fine, the first microversions should be getting in soon after.

Best regards,

Claudiu Belu

I just merged that one this morning, so hopefully we can dislodge.


So just before we merged the keypairs microversion change, someone
enabled response schema tests which showed some further problems. They
have now all been fixed, but https://review.openstack.org/#/c/161112/
needs just one more +2. After that, the first api change which uses
microversions, https://review.openstack.org/#/c/140313/, can merge (it has
3 +2's, it just needs the v2.1 fix first).



From: Alexandre Levine [alev...@cloudscaling.com]
Sent: Tuesday, March 03, 2015 4:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] what's the merge plan for
current proposed microversions?

Bump.

I'd really appreciate some answers to the question Sean asked. I
still have the 2.4 in my review (the very one Sean mentioned) but
it seems that it might not be the case.

Best regards,
Alex Levine

On 3/2/15 2:30 PM, Sean Dague wrote:

This change for the additional attributes for ec2 looks like it's
basically ready to go, except it has the wrong microversion on it
(as they anticipated the other changes landing ahead of them) -
https://review.openstack.org/#/c/155853

What's the plan for merging the outstanding microversions? I
believe we're all conceptually approved on all them, and it's an
important part of actually moving forward on the new API. It seems
like we're in a bit of a holding pattern on all of them right now,
and I'd like to make sure we start merging them this week so that
they have breathing space before the freeze.


So the plan for assignment of microversion api numbers is the same as
what we currently do for db migration changes - take the next one
knowing that you may need to rebase if someone else merges before you.
Other suggestions welcome but will have to follow the requirement that
they always merge in version order.




   -Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Output on stderr

2015-03-04 Thread Abhishek Talwar/HYD/TCS
Hi All,

While working on a bug for keystoneclient I have replaced sys.exit with return. 
However, the code reviewers want the output to be on stderr (as 
sys.exit does). So how can we get the output on stderr?

def do_user_password_update(kc, args):
    """Update user password."""
    user = utils.find_resource(kc.users, args.user)
    new_passwd = args.passwd or utils.prompt_for_password()
    if not new_passwd:
        msg = _("\nPlease specify password using the --pass option "
                "or using the prompt")
        print(msg)  # I have included print and return here instead of sys.exit
        return
    kc.users.update_password(user, new_passwd)
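
For reference, a sketch of the stderr variant the reviewers seem to be asking
for, reusing the print-function approach suggested elsewhere in this thread
(not the merged patch):

    from __future__ import print_function  # print() as a function on 2.6/2.7

    import sys

    def do_user_password_update(kc, args):
        """Update user password."""
        user = utils.find_resource(kc.users, args.user)
        new_passwd = args.passwd or utils.prompt_for_password()
        if not new_passwd:
            msg = _("\nPlease specify password using the --pass option "
                    "or using the prompt")
            print(msg, file=sys.stderr)  # on stderr, like sys.exit(msg) was
            return
        kc.users.update_password(user, new_passwd)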


-- 
Thanks and Regards
Abhishek Talwar
Employee ID : 770072
Assistant System Engineer
Tata Consultancy Services,Gurgaon
India
Contact Details : +918377882003


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Danny Al-Gaaf
Am 04.03.2015 um 15:18 schrieb Csaba Henk:
 Hi Danny,
 
 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de
 To: Deepak Shetty dpkshe...@gmail.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 ceph-de...@vger.kernel.org
 Sent: Wednesday, March 4, 2015 3:05:46 PM
 Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

 ...
 Another level of indirection. I really like the approach of filesystem
 passthrough ... the only critical question is whether virtfs/9p is still
 supported in some way (and if not: why not?).
 
 That one seems to be a biggie, doesn't it?

Yes, it is.

 We -- Red Hat -- considered a similar, virtfs based driver for GlusterFS
 but we dropped that plan exactly for virtfs being abandonware.
 
 As far as I know it was meant to be a research project, and it was
 concluded after producing a fairly well-working POC -- but Deepak knows
 more of the story.

I would like to understand why it was abandoned. I see the need for
filesystem passthrough in the area of virtualization. Is there another
solution available?

Danny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-04 Thread Danny Al-Gaaf
Am 04.03.2015 um 15:12 schrieb Csaba Henk:
 Hi Danny,
 
 - Original Message -
 From: Danny Al-Gaaf danny.al-g...@bisect.de To: OpenStack
 Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org 
 Sent: Sunday, March 1, 2015 3:07:36 PM Subject: Re:
 [openstack-dev] [Manila] Ceph native driver for manila
 ...
 For us security is critical, and so is performance. The
 first solution, via Ganesha, is not what we prefer (using CephFS
 via 9p and NFS would not perform that well, I guess). The second
 solution, to use
 
 Can you please explain that why does the Ganesha based stack
 involve 9p? (Maybe I miss something basic, but I don't know.)

Sorry, it seems that I mixed it up with the 9p case. But performance
may still be an issue if you use NFS on top of CephFS (incl. all the
VM layers involved in this setup).

For me the question with all these NFS setups is: why should I use NFS
on top of CephFS? What justifies CephFS in this case? I would like to
use CephFS directly or via filesystem passthrough.

Danny


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] The None and 'none' for CONF.libvirt.cpu_mode

2015-03-04 Thread Jiang, Yunhong
Hi, Daniel
I'm a bit confused by the None/'none' values for CONF.libvirt.cpu_mode. Per my
understanding, None means no configuration was provided and the libvirt driver
will select a default value based on the virt_type, while 'none' means no
cpu_mode information should be provided for the guest. Am I right?

	In _get_guest_cpu_model_config() in virt/libvirt/driver.py, if
mode is 'none', the kvm/qemu virt_type returns a
vconfig.LibvirtConfigGuestCPU() while other virt types return None. What is
the significance of this difference in return values?
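
A self-contained sketch of that branching, paraphrased from the description
above (this is not the actual nova source; LibvirtConfigGuestCPU here is a
stand-in for the vconfig class):

    class LibvirtConfigGuestCPU(object):
        """Stand-in for vconfig.LibvirtConfigGuestCPU (an empty <cpu> element)."""

    def cpu_config_for_mode_none(virt_type):
        # With cpu_mode == 'none', kvm/qemu get an empty guest CPU config,
        # while other virt types get None (no <cpu> element at all).
        if virt_type in ('kvm', 'qemu'):
            return LibvirtConfigGuestCPU()
        return None

    print(cpu_config_for_mode_none('kvm'))   # empty guest CPU config object
    print(cpu_config_for_mode_none('xen'))   # None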

Thanks
--jyh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack]Specific Juno version

2015-03-04 Thread Eduard Matei
Hi,
Indeed CINDER_BRANCH=2014.2.2 did the trick.
Will properly report the issues once we start testing on 2014.2.3.
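
For anyone else hitting this: the variable goes into devstack's
localrc / local.conf, e.g. (illustrative placement, adjust to your setup):

    CINDER_BRANCH=2014.2.2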

Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit tooling improvements

2015-03-04 Thread Alexis Lee
Salvatore Orlando said on Tue, Mar 03, 2015 at 08:21:08PM +0100:
 Is common sense an option here?

Common sense is never an option :)

Mainly because it's situational and hence arises out of shared culture
and expectations, so those not indoctrinated into the group yet get
scolded for lacking common sense, being rude/pushy or giving up too
easily.

Clear, objective, documented criteria are the best way to set policy.
You can get by on common sense until the friction with outsiders gets
too high.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack]Specific Juno version

2015-03-04 Thread Eduard Matei
Hi Ihar,
This is the error I see in the c-vol screen:
2015-03-03 18:39:34.060 DEBUG taskflow.engines.action_engine.runner
[req-87db2771-5c25-4c9b-a3bc-f7a504468315 4af0a30fe2c14deb9f9cf341a97daa9f
1fbab1e235ff414abf249c741fb3c6c9] Exiting old state 'WAITING' in response
to event 'analyze' from (pid=29710) on_exit
/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/runner.py:156
2015-03-03 18:39:34.060 DEBUG taskflow.engines.action_engine.runner
[req-87db2771-5c25-4c9b-a3bc-f7a504468315 4af0a30fe2c14deb9f9cf341a97daa9f
1fbab1e235ff414abf249c741fb3c6c9] Entering new state 'ANALYZING' in
response to event 'analyze' from (pid=29710) on_enter
/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/runner.py:160
2015-03-03 18:39:34.061 CRITICAL cinder [-] TypeError: an integer is
required

2015-03-03 18:39:34.061 TRACE cinder Traceback (most recent call last):
2015-03-03 18:39:34.061 TRACE cinder   File
/opt/stack/devstack/cinder/bin/cinder-volume, line 79, in module
2015-03-03 18:39:34.061 TRACE cinder launcher.wait()
2015-03-03 18:39:34.061 TRACE cinder   File
/opt/stack/devstack/cinder/cinder/openstack/common/service.py, line 393,
in wait
2015-03-03 18:39:34.061 TRACE cinder self._respawn_children()
2015-03-03 18:39:34.061 TRACE cinder   File
/opt/stack/devstack/cinder/cinder/openstack/common/service.py, line 381,
in _respawn_children
2015-03-03 18:39:34.061 TRACE cinder self._start_child(wrap)
2015-03-03 18:39:34.061 TRACE cinder   File
/opt/stack/devstack/cinder/cinder/openstack/common/service.py, line 327,
in _start_child
2015-03-03 18:39:34.061 TRACE cinder os._exit(status)
2015-03-03 18:39:34.061 TRACE cinder TypeError: an integer is required
2015-03-03 18:39:34.061 TRACE cinder
2015-03-03 18:39:34.375 INFO cinder.openstack.common.service [-] Child
29710 exited with status 1
2015-03-03 18:39:34.378 INFO cinder.openstack.common.service [-] Started
child 29948
2015-03-03 18:39:34.380 INFO cinder.service [-] Starting cinder-volume node
(version 2014.2.3)
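
For what it's worth, the TypeError itself is easy to reproduce when a
non-integer status reaches os._exit() -- an illustrative guess at the
failure mode, not a confirmed root cause:

    import os

    # On Python 2, os._exit() accepts only an integer exit status; a None
    # (or any other non-int) raises the "an integer is required" TypeError
    # seen in the traceback above, before the process can exit.
    try:
        os._exit(None)
    except TypeError as e:
        print(e)  # an integer is required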

Regarding the disclaimer, that's the standard signature we have to use in
the company.

Thanks,
Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ec2-api] Functional tests for EC2-API in nova

2015-03-04 Thread M Ranga Swami Reddy
Agree with dims.

On Mon, Mar 2, 2015 at 6:14 PM, Davanum Srinivas dava...@gmail.com wrote:
 Alex,

 It's better to do an experimental one first before trying a non-voting one:
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n1509

 -- dims

 On Mon, Mar 2, 2015 at 7:24 AM, Alexandre Levine
 alev...@cloudscaling.com wrote:
 All,

 We've finished setting up the functional testing for the stackforge EC2 API
 project. The new test suite consists of 96 API and scenario tests covering
 almost all of the functionality initially covered by the original EC2 API
 tests in Tempest (tags functionality and client tokens are absent), and much
 more. This test suite is also run periodically against AWS to ensure
 compatibility with Amazon. It now works as a gate for this standalone
 EC2 API project; however, the question is:

 Does it make sense and do we want to somehow employ this test suite against
 nova's API (without VPC-related tests, which leaves 54 tests altogether)?

 Internally we did this, and it seems that nova's EC2 API is sound enough (it
 still does what it did, say, a year ago); however, it still falls short on
 some features and compatibility. Our test runs against it produce the
 following results:

 With nova-network:
 http://logs.openstack.org/02/160302/1/check/check-functional-nova-network-dsvm-ec2api/ab1af0d/console.html

 With neutron:
 http://logs.openstack.org/02/160302/1/check/check-functional-neutron-dsvm-ec2api/f478a19/console.html

 And the review which we used to run the tests:
 https://review.openstack.org/#/c/160302/

 So if we do want to set this up against nova's EC2 API, I'm not sure how to
 do it most effectively. A non-voting job in Nova fetching the tests from
 stackforge/ec2-api and running them as we did in the review above?

 Best regards,
   Alex Levine



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: https://twitter.com/dims

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

