Re: [openstack-dev] [Mistral] Roll back capabilities execution data context variables in YAML

2015-02-09 Thread Renat Akhmerov
Hi

 1) Rollback support: I came across a few blueprints that talk about 
 rollback support, but I'm not sure if they are delivered yet; I wanted to check 
 if there is something out of the box that I should be aware of. Of course, 
 rollback being very specific, I'd expect a user-defined action to be called 
 for rolling back; I'm just wondering if there is any construct after (on-error: 
 ) to be used in YAML

Yes, it’s really specific. We ourselves discussed several options for what 
facilities we could provide regarding rollbacks. None of the BPs that you’ve 
probably seen is implemented yet, though, and more importantly no decisions 
have been made on that. I think we should start a wider discussion on it.

As for the idea itself, the whole point is that it may not be required to 
implement anything specific in addition to the existing functionality, because 
from what we have learned we can conclude that most often a rollback is a different 
workflow. Meaning that rolling back all the tasks in the same but reverse order 
(assuming they have a rollback action) doesn’t make much sense. It’s 100% 
true if workflow tasks change some state in a non-revertible manner, for 
example. If so, then Mistral already has the ‘on-error’ clause, which could be used 
to jump to a task associated with calling a workflow (a rollback workflow).

I really wonder what your input on this would be?

 2) Execution context predefined ids - Here is what I found that talks about 
 task and execution ids that I can access (execution context 
 https://wiki.openstack.org/wiki/Mistral/DSLv2#Predefinted_Values_in_execution_data_context),
  it would be great if you could point me to the list of all possible 
 variables like $.execution_id that are accessible inside the YAML.

Not too many things at this point:
* $.__execution consisting of fields ‘id’, ‘wf_spec’ (workflow specification as 
a dictionary), ‘input’ and ‘start_params’ (which can contain some additional 
start parameters like ‘task_name’ for reverse workflows and ‘env’ containing 
the environment variables or a name of the previously saved environment, but 
environments are not officially announced yet).
* $.openstack which contains whatever is listed in the doc (project_id etc.)

This part is changing slightly now and there still isn’t a final doc on it. 
One will be created by the end of kilo-3.

Thanks

Renat Akhmerov
@ Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-09 Thread yuntong

Hi all,
In the nova API, an api.fault notification is sent out when there is an error:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n119
but I couldn't find where these notifications are processed in ceilometer.
An error notification is very desirable to collect; do we have a 
plan to add this, and do I need a blueprint to do that?


-Yuntong


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE request for passing capabilities in the flavor to ironic

2015-02-09 Thread Lucas Alvares Gomes
+1 for FFE. This should be very low risk to Nova since it affects only
the Ironic driver.

On Sun, Feb 8, 2015 at 11:38 AM, Nisha Agarwal
agarwalnisha1...@gmail.com wrote:
 Hi wanyen and Nova team,

 I support wanyen for this FFE. The code changes are very small and harmless
 to Nova. They just provide a way for Ironic to consume inputs given in the
 flavor.

 Regards
 Nisha

 On Sat, Feb 7, 2015 at 12:33 AM, Hsu, Wan-Yen wan-yen@hp.com wrote:

 Hi,



I would like to ask for a feature freeze exception for passing
 capabilities in the flavor to Ironic:




 https://blueprints.launchpad.net/nova/+spec/pass-flavor-capabilities-to-ironic-virt-driver

 Addressed by: https://review.openstack.org/136104
 Pass on the capabilities in the flavor to the ironic

Addressed by: https://review.openstack.org/141012
Pass on the capabilities to instance_info

 Several Ironic Kilo features, including secure boot, trusted boot, and
 local boot support with partition images, depend on this feature.  It
 also has an impact on Ironic vendor drivers’ hardware property introspection
 feature.



  The code changes to support this spec in the Nova ironic virt driver are very
 small -

 only 31 lines of code (including comments) in
 nova/virt/ironic/patcher.py, and 22 lines of code in test_patcher.py.



Please consider approving this FFE.  Thanks!



 Regards,

 wanyen








 --
 The Secret Of Success is learning how to use pain and pleasure, instead
 of having pain and pleasure use you. If You do that you are in control
 of your life. If you don't life controls you.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-09 Thread Julien Danjou
On Mon, Feb 09 2015, yuntong wrote:

 In the nova API, an api.fault notification is sent out when there is an error.
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n119
 but I couldn't find where these are processed in ceilometer.
 An error notification is very desirable to collect; do we have a plan to
 add this, and do I need a blueprint to do that?

Are you talking about:
https://bugs.launchpad.net/ceilometer/+bug/1364708
?

Cheers,
-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] JavaScript docs?

2015-02-09 Thread Radomir Dopieralski
On 02/05/2015 07:26 PM, Michael Krotscheck wrote:
 On Thu Feb 05 2015 at 12:07:01 AM Radomir Dopieralski
 openst...@sheep.art.pl wrote:
 
 
 Plus, the documentation generator that we are using already, Sphinx,
 supports JavaScript perfectly fine, so I see no reason to add
 another tool.
 
 
 Try to empathize with us a little here. What you're asking is equivalent
 to saying OpenStack should use JavaDoc for all its documentation
 because it supports python. For all the reasons that you would mistrust
 JavaDoc, I mistrust Sphinx when it comes to parsing javascript.
 
 With that in mind, how about we run a side-by-side comparison of Sphinx
 vs. NGDoc? Without actual comparable output, this discussion isn't much
 more than warring opinions.

I'm not mistrusting JavaDoc or NGDoc or whatever new documentation
system you are proposing. I merely think that, while JavaScript
programmers are special snowflakes indeed, they are not special enough to
warrant introducing and maintaining a whole separate documentation
system, especially since we are already using a documentation system
that is well used and maintained by the whole of OpenStack, not just the
Python programmers in Horizon. And since you will have to learn to use
Sphinx sooner or later anyway, because basically everything in
OpenStack is documented using it, I see no reason why we should expend
additional energy on implementing, deploying and maintaining a new set
of tools, just because you don't like the current one.

If it was JavaDoc instead of Sphinx being used by the whole of
OpenStack, I would advocate its use the same way as I advocate Sphinx now.

It seems that the whole docs format discussion is just a way of putting
off having to actually write the docs.

-- 
Radomir Dopieralski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Translation usage suggestion

2015-02-09 Thread Serg Melikyan
I think all proposed actions make sense, and I strongly agree that we need
to start working on them and finish them in Kilo cycle scope.

On Thu, Feb 5, 2015 at 3:49 PM, Ekaterina Chernova efedor...@mirantis.com
wrote:

 Hi all!

 Recently we have discussed log and exception translations and have not
 come to a decision.

 I've done some research and found some useful documents:


- https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation
- https://wiki.openstack.org/wiki/Translations


 Here are two main points that I can highlight:

 * Exception text should *not* be marked for translation,
 because if an exception occurs there is no guarantee that the
 translation machinery will be functional.

 * Debug-level log messages are not translated.

 Some projects do not follow these rules, but I suggest taking them into
 consideration and performing the following actions:

    - First of all, we should remove gettext utils usage and replace it
    with oslo.i18n (see the sketch below);
    - Remove exception message translation;
    - Add log translation for the info, warn and error log levels (and
    log.exception, which creates a log.error message).
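
For illustration, here is a minimal sketch of the oslo.i18n setup these points imply; the module path and translation domain are hypothetical, not Murano's actual code:

```python
# Hypothetical murano/common/i18n.py -- a sketch of the usual oslo.i18n
# pattern, not Murano's actual module.
import oslo_i18n

_translators = oslo_i18n.TranslatorFactory(domain='murano')

# Marker for user-facing strings (not for exception messages, per the rules above).
_ = _translators.primary

# Markers for translatable log messages at info/warning/error levels;
# debug messages are logged without any marker and stay untranslated.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
```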

 Note that different log levels can be sent to different files. Having
 different log files for different languages is also supported.

 What do you think?





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone] SSO

2015-02-09 Thread Anton Zemlyanov
SSO (Single sign-on) is great. There are some problems, though:

1) If the cloud is private, without Internet access, then a private SSO
service should be up and running.
2) There is no such thing as an OpenStack ID. Should we use Launchpad?
Facebook login? Twitter?
3) There are technical difficulties in embedding SSO into a website.

As the SSO service is often located on a different domain, the browser opens
a new window, opens the SSO page there, and then redirects to our page,
which uses some cross-domain messaging like postMessage to inform the main page
of the login results. The main page then closes the popup. This is rather
complex; Facebook and other social networks have SDKs that do all of this.

It's certainly not an easy task to build an SSO service, implement an SSO SDK
and embed it into Horizon.

Anton

On Fri, Feb 6, 2015 at 11:03 PM, Tim Bell tim.b...@cern.ch wrote:



 From the sound of things, we’re not actually talking about SSO. If we were, we
 would not be talking about the design of a login screen.



 An SSO application such as Horizon would not have a login page. If the
 user was already logged in through a corporate/organisation SSO page, nothing
 would appear before the standard Horizon page.



 We strongly advise our user community that any web page asking for your
 credentials which is not the CERN standard SSO page is not authorised. Our
 SSO also supports Google/Twitter/Eduroam etc. logins. Some of these will be
 refused for OpenStack login so that having a Twitter account alone does not
 get you access to CERN’s cloud resources (but this is an authorisation
 rather than an authentication problem).



 Is there really a use case for a site where there is SSO from a corporate
 perspective but no federated login SSO capability? I don’t have a
 fundamental problem with the approach, but we should position it with
 respect to the use case, which is that I log in in the morning and all
 applications I use (cloud and all) are able to recognise that.



 Tim





 *From:* Adam Young [mailto:ayo...@redhat.com]
 *Sent:* 06 February 2015 19:48
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [horizon][keystone]



 On 02/04/2015 03:54 PM, Thai Q Tran wrote:

 Hi all,

 I have been helping with the websso effort and wanted to get some feedback.
 Basically, users are presented with a login screen where they can select:
 credentials, default protocol, or discovery service.
 If user selects credentials, it works exactly the same way it works today.
 If user selects default protocol or discovery service, they can choose to
 be redirected to those pages.

 Keep in mind that this is a prototype, early feedback will be good.
 Here are the relevant patches:
 https://review.openstack.org/#/c/136177/
 https://review.openstack.org/#/c/136178/
 https://review.openstack.org/#/c/151842/

 I have attached the files and present them below:




 Replace the dropdown with a specific link for each protocol type:

 SAML and OpenID  are the only real contenders at the moment, but we will
 not likely have so many that it will clutter up the page.

 Thanks for doing this.













__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Sylvain, we have not worked out the API requirements yet, but I
think they should be similar to nova's: we need
select_destination to select the best target host based on filters and
weights.

There are also some discussions here
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza sba...@redhat.com:

  Hi Magnum team,


 On 07/02/2015 19:24, Steven Dake (stdake) wrote:



   From: Eric Windisch e...@windisch.us
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Saturday, February 7, 2015 at 10:09 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


 1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.


 The Gantt team explored that option during the Icehouse cycle and it failed
 with a lot of problems. I won't list all of them, but I'll just explain
 that we discovered how tightly coupled the Scheduler and the Nova compute
 manager were, which meant that a repository fork was really
 difficult to do without first reducing the tech debt.

 That said, our concerns were far different from the Magnum team's: it was
 about having feature parity and replacing the current Nova scheduler, while
 your team is just saying that it wants to have something for containers.


 2) Integrate swarmd to leverage its scheduler[2].


  I see #2 not as an alternative but possibly as an addition. Swarm uses the
 Docker API, although they're only about 75% compatible at the moment.
 Ideally, the Docker backend would work with both single docker hosts and
 clusters of Docker machines powered by Swarm. It would be nice, however, if
 scheduler hints could be passed from Magnum to Swarm.

  Regards,
 Eric Windisch


  Adrian  Eric,

  I would prefer to keep things simple and just integrate directly with
 swarm and leave out any cherry-picking from Nova. It would be better to
 integrate scheduling hints into Swarm, but I’m sure the swarm upstream is
 busy with requests and this may be difficult to achieve.


 I don't want to give my opinion about which option you should take as I
 don't really know your needs. If I understand correctly, this is about
 having a scheduler providing affinity rules for containers. Do you have a
 document explaining which interfaces you're looking for, what kind of APIs
 you want, or what's missing from the current Nova scheduler?

 MHO is that the technology shouldn't drive your decision : whatever the
 backend is (swarmd or an inherited nova scheduler), your interfaces should
 be the same.

 -Sylvain


   Regards
 -steve










-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Mid-Cycle Meetup Planning

2015-02-09 Thread Thierry Carrez
Adrian Otto wrote:
 Team,
 
 Our dates have been set as 2015-03-02 and 2015-03-03.
 
 Wiki (With location, map, calendar links, agenda planning link, and links to 
 tickets):
 https://wiki.openstack.org/wiki/Magnum/Midcycle

You can also add a line to the reference list at:
https://wiki.openstack.org/wiki/Sprints

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2015-02-09 Thread Matt Riedemann



On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

On Fri, 26 Sep 2014 11:25:49 +0400
Oleg Bondarev obonda...@mirantis.com wrote:


On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:


   I think the expectation is that if a user is already interacting
with Neutron to create ports then they should do the security group
assignment in Neutron as well.



Agreed. However, what do you think a user expects when he/she boots a
VM (whether providing a port_id or just a net_id)
and specifies security_groups? I think the expectation should be that the
instance will become a member of the specified groups.
Ignoring the security_groups parameter when a port is provided (as happens
now) seems completely unfair to me.


One option would be to return a 400 if both a port id and security_groups
are supplied.

Chris




Coming back to this, we now have a change from Oleg [1] after an initial 
attempt that was reverted because it would break server creates if you 
specified a port (because the original change would blow up when the 
compute API added the 'default' security group to the request).


The new change doesn't add the 'default' security group to the request, 
so if you specify both a security group and a port on the request, you'll now 
get a 400 error response.
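
To make the behaviour change concrete, here is a hedged python-novaclient sketch of the kind of boot request affected; the client setup and IDs are placeholders, and this is not a test from the patch itself:

```python
# Sketch only: illustrates the request shape, not code from [1].
from novaclient import client as nova_client

nova = nova_client.Client('2', 'user', 'password', 'project',
                          'http://keystone:5000/v2.0')

# A pre-created Neutron port plus an explicit security group on the same
# boot request: previously the security group was silently ignored; with
# the new change the request gets a 400 error response.
server = nova.servers.create(
    name='vm-with-port',
    image='IMAGE_UUID',                 # placeholder
    flavor='FLAVOR_ID',                 # placeholder
    nics=[{'port-id': 'PORT_UUID'}],    # placeholder
    security_groups=['web'],
)
```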


Does this break API compatibility?  It seems this falls under the first 
bullet here [2]: "A change such that a request which was successful 
before now results in an error response (unless the success reported 
previously was hiding an existing error condition)."  Does that caveat 
in parentheses make this OK?


It seems like we've had a lot of talk about warts in the compute v2 API 
for cases where an operation is successful but didn't yield the expected 
result, but we can't change them because of API backwards compatibility 
concerns so I'm hesitant on this.


We also definitely need a Tempest test here, which I'm looking into.  I 
think I can work this into the test_network_basic_ops scenario test.


[1] https://review.openstack.org/#/c/154068/
[2] 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 02/09/2015

2015-02-09 Thread Renat Akhmerov
Thanks for joining us at our weekly meeting,

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-09-16.00.html
Meeting full log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-02-09-16.00.html

The next one will be on Feb 16.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2015-02-09 Thread Matt Riedemann



On 2/9/2015 9:36 AM, Matt Riedemann wrote:



On 8/11/2014 10:18 AM, Swartzlander, Ben wrote:

I just saw the agenda for tomorrow’s TC meeting and we’re on it. I plan
to be there.

https://wiki.openstack.org/wiki/Meetings#Technical_Committee_meeting

-Ben

*From:*Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com]
*Sent:* Monday, July 28, 2014 9:53 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [Manila] Incubation request

Manila has come a long way since we proposed it for incubation last
autumn. Below are the formal requests.

https://wiki.openstack.org/wiki/Manila/Incubation_Application

https://wiki.openstack.org/wiki/Manila/Program_Application

Anyone have anything to add before I forward these to the TC?

-Ben Swartzlander






Looks like Manila was accepted for incubation [1] but I don't see
anything in the wiki [2] about Kilo.  What's the latest status on this?

[1] https://review.openstack.org/#/c/113583/
[2] https://wiki.openstack.org/wiki/Manila/Incubation_Application



Oh right, incubation is not a thing anymore [1] due to big tent model 
etc etc.


[1] https://review.openstack.org/#/c/145740/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2015-02-09 Thread Matt Riedemann



On 8/11/2014 10:18 AM, Swartzlander, Ben wrote:

I just saw the agenda for tomorrow’s TC meeting and we’re on it. I plan
to be there.

https://wiki.openstack.org/wiki/Meetings#Technical_Committee_meeting

-Ben

*From:*Swartzlander, Ben [mailto:ben.swartzlan...@netapp.com]
*Sent:* Monday, July 28, 2014 9:53 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [Manila] Incubation request

Manila has come a long way since we proposed it for incubation last
autumn. Below are the formal requests.

https://wiki.openstack.org/wiki/Manila/Incubation_Application

https://wiki.openstack.org/wiki/Manila/Program_Application

Anyone have anything to add before I forward these to the TC?

-Ben Swartzlander






Looks like Manila was accepted for incubation [1] but I don't see 
anything in the wiki [2] about Kilo.  What's the latest status on this?


[1] https://review.openstack.org/#/c/113583/
[2] https://wiki.openstack.org/wiki/Manila/Incubation_Application

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] mistral actions plugin architecture

2015-02-09 Thread Filip Blaha

Hi all,

According to [1] there should be some plugin mechanism for custom 
actions in Mistral. I went through the code and found an introspection 
mechanism [2] that generates mistral actions from the methods of the client 
classes of the OpenStack core projects. E.g. it takes the nova client class 
(python-novaclient), introspects its methods and their parameters, 
and creates corresponding actions with corresponding parameters. The 
same goes for other core projects like neutron, cinder, etc. However, the list 
of these client classes seems to be hardcoded [3], so I am not sure 
whether this mechanism can be used for other projects like the murano client 
to create murano-related actions in Mistral. Or is there any other 
pluggable mechanism to get murano actions into Mistral without 
hardcoding them in the mistral project?


[1] 
https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign#Plugin_Architecture 

[2] 
https://github.com/stackforge/mistral/blob/master/mistral/actions/openstack/action_generator/base.py#L91 

[3] 
https://github.com/stackforge/mistral/blob/master/mistral/actions/generator_factory.py 
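
For readers unfamiliar with the mechanism in [2], here is a hedged, self-contained sketch of the general introspection approach; it is generic Python for illustration only and is not Mistral's actual generator code:

```python
# Generic sketch of introspection-based action generation; NOT Mistral's
# actual code, just an illustration of the approach described in [2].
import inspect


class ClientAction(object):
    """Base for generated actions: calls one client method with kwargs."""

    client_factory = None   # callable returning a configured client
    method_name = None

    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def run(self):
        client = self.client_factory()
        return getattr(client, self.method_name)(**self.kwargs)


def generate_actions(client_cls, client_factory, prefix):
    """Create one action class per public method of client_cls."""
    actions = {}
    for name, _method in inspect.getmembers(client_cls, inspect.isroutine):
        if name.startswith('_'):
            continue
        action_name = '%s.%s' % (prefix, name)
        actions[action_name] = type(
            name.capitalize() + 'Action',
            (ClientAction,),
            {'client_factory': staticmethod(client_factory),
             'method_name': name},
        )
    return actions
```

A registry built this way only knows about the client classes it is given, which is why the hardcoded list in [3] matters for adding something like the murano client.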




Regards
Filip

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-09 Thread Alexandre Levine
Hey M Ranga Swami Reddy (sorry, I'm not sure how to address you shorter 
:) ),


After a conversation in this mailing list with Michael Still I understood 
that I'll do the sub-group and meetings stuff, since I lead the ec2-api 
in stackforge anyway. Of course I'm not that familiar with these 
processes in nova yet, so if you're sure that you want to take the lead 
for nova's part of EC2, I won't object much. Please let me know 
what you think.


Best regards,
  Alex Levine

On 2/9/15 4:41 PM, M Ranga Swami Reddy wrote:

Hi All,
I will be creating a sub-group in Nova for the EC2 APIs and starting the
weekly meetings, reviews, code cleanup, and other tasks.
I will update the wiki page soon as well.

Thanks
Swami

On Fri, Feb 6, 2015 at 9:27 PM, David Kranz dkr...@redhat.com wrote:

On 02/06/2015 07:49 AM, Sean Dague wrote:

On 02/06/2015 07:39 AM, Alexandre Levine wrote:

Rushi,

We're adding new tempest tests into our stackforge-api/ec2-api. The
review will appear in a couple of days. These tests will be good for
running against both nova/ec2-api and stackforge/ec2-api. As soon as
they are there, you'll be more than welcome to add even more.

Best regards,
Alex Levine


Honestly, I'm much more pro having the ec2 tests in a tree that isn't
Tempest. Most Tempest reviewers aren't familiar with the ec2 API; their
focus has been OpenStack APIs.

Having a place where there is a review team that is dedicated only to
the EC2 API seems much better.

 -Sean


+1

  And once similar coverage to the current tempest ec2 tests is achieved,
either by copying from tempest or creating anew, we should remove the ec2
tests from tempest.

  -David






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature Freeze Exception Request - bp/libvirt-kvm-systemz

2015-02-09 Thread Matt Riedemann



On 2/9/2015 10:15 AM, Andreas Maier wrote:


Hello,
I would like to ask for the following feature freeze exceptions in Nova.

The patch sets below are all part of this blueprint:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-kvm-systemz,n,z
and affect only the kvm/libvirt driver of Nova.

The decision for merging these patch sets by exception can be made one by
one; they are independent of each other.

1. https://review.openstack.org/149242 - FCP support

Title: libvirt: Adjust Nova to support FCP on System z systems

What it does: This patch set enables FCP support for KVM on System z.

Impact if we don't get this: FCP attached storage does not work for KVM
on System z.

Why we need it: We really depend on this particular patch set, because
FCP is our most important storage attachment.

Additional notes: The code in the libvirt driver that is updated by this
patch set is consistent with corresponding code in the Cinder driver,
and it has seen review by the Cinder team.

2. https://review.openstack.org/150505 - Console support

Title: libvirt: Enable serial_console feature for system z

What it does: This patch set enables the backing support in Nova for the
interactive console in Horizon.

Impact if we don't get this: Console in Horizon does not work. The
mitigation for a user would be to use the Log in Horizon (i.e. with
serial_console disabled), or the virsh console command in an ssh
session to the host Linux.

Why we need it: We'd like to have console support. Also, because the
Nova support for the Log in Horizon has been merged in an earlier patch
set as part of this blueprint, this remaining patch set makes the
console/log support consistent for KVM on System z Linux.

3. https://review.openstack.org/150497 - ISO/CDROM support

Title: libvirt: Set SCSI as the default cdrom bus on System z

What it does: This patch set enables that cdrom drives can be attached
to an instance on KVM on System z. This is needed for example for
cloud-init config files, but also for simply attaching ISO images to
instances. The technical reason for this change is that the IDE
attachment is not available on System z, and we need SCSI (just like
Power Linux).

Impact if we don't get this:
   - Cloud-init config files cannot be on a cdrom drive. A mitigation
  for a user would be to have such config files on a cloud-init
  server.
   - ISO images cannot be attached to instances. There is no mitigation.

Why we need it: We would like to avoid having to restrict cloud-init
configuration to just using cloud-init servers. We would like to be able
to support ISO images.

Additional notes: This patch is a one line change (it simply extends
what is already done in a platform specific case for the Power platform,
to be also used for System z).

Andy

Andreas Maier
IBM Senior Technical Staff Member, Systems Management Architecture & Design
IBM Research & Development Laboratory Boeblingen, Germany
mai...@de.ibm.com, +49-7031-16-3654

IBM Deutschland Research & Development GmbH
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294





FWIW, I'll sponsor these changes.  I'm already +2 on one and have 
reviewed the other two which are very close to a +2 from me, just need a 
little more work (but not drastic changes).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-09 Thread M Ranga Swami Reddy
Hi Alex Levine (you can address me as 'Swami'),
Thank you. I have been working on the EC2 APIs for quite some time. We
will work closely together on this project on reviews, code cleanup,
bug fixing and other critical items. Currently I am looking for a
sub-team meeting slot. Once I get the meeting slot I will update the wiki
with the meeting details along with the first meeting agenda. Please feel
free to add more to the meeting agenda.

Thanks
Swami

On Mon, Feb 9, 2015 at 9:50 PM, Alexandre Levine
alev...@cloudscaling.com wrote:
 Hey M Ranga Swami Reddy (sorry, I'm not sure how to address you shorter :)
 ),

 After conversation in this mailing list with Michael Still I understood that
 I'll do the sub group and meetings stuff, since I lead the ec2-api in
 stackforge anyways. Of course I'm not that familiar with these processes in
 nova yet, so if you're sure that you want to take the lead for nova's part
 of EC2, I won't be objecting much. Please let me know what you think.

 Best regards,
   Alex Levine


 On 2/9/15 4:41 PM, M Ranga Swami Reddy wrote:

 Hi All,
 I will be creating the a sub group in Nova for EC2 APIs and start the
 weekly meetings, reviews, code cleanup, etc tasks.
 Will update the same on wiki page also soon..

 Thanks
 Swami

 On Fri, Feb 6, 2015 at 9:27 PM, David Kranz dkr...@redhat.com wrote:

 On 02/06/2015 07:49 AM, Sean Dague wrote:

 On 02/06/2015 07:39 AM, Alexandre Levine wrote:

 Rushi,

 We're adding new tempest tests into our stackforge-api/ec2-api. The
 review will appear in a couple of days. These tests will be good for
 running against both nova/ec2-api and stackforge/ec2-api. As soon as
 they are there, you'll be more than welcome to add even more.

 Best regards,
 Alex Levine

 Honestly, I'm more more pro having the ec2 tests in a tree that isn't
 Tempest. Most Tempest reviewers aren't familiar with the ec2 API, their
 focus has been OpenStack APIs.

 Having a place where there is a review team that is dedicated only to
 the EC2 API seems much better.

  -Sean

 +1

   And once similar coverage to the current tempest ec2 tests is achieved,
 either by copying from tempest or creating anew, we should remove the ec2
 tests from tempest.

   -David





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Feature Freeze Exception Request - bp/linux-systemz

2015-02-09 Thread Jay S. Bryant

Mike,

An FFE for this has been submitted to Nova and is being sponsored by Matt 
Riedemann: [1]


Assuming that goes through soon, can we please re-address this?

Thanks!
Jay

[1] 
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg45430.html



On 02/09/2015 12:23 PM, Mike Perez wrote:

On 17:31 Mon 09 Feb , Andreas Maier wrote:

Hello,
I would like to ask for the following feature freeze exception in Cinder.

Cinder is not in a feature freeze at the moment [1].


Additional notes: The code in Nova patch set
https://review.openstack.org/149256 is consistent with this patch set,
but a decision to include them in kilo can be made independently for
each of the two patch sets: The Nova patch set enables FCP storage for a
compute node with KVM on System z, while the Cinder patch set enables
Cinder services to run on System z Linux.

If it's not landing in Nova for Kilo [2], I'd rather not target it for Kilo in
Cinder. We're already really packed for K-3, and it's not useful without the
Nova piece. Please resubmit for L-1 in Cinder.

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055719.html
[2] - https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Create VM using port-create vs nova boot only?

2015-02-09 Thread Wanjing Xu
There seem to be two ways to create a VM via the CLI:

1) Use a neutron command to create a port first and then use a nova command to 
attach the VM to that port (neutron port-create .. followed by nova boot --nic 
port-id=).
2) Just use the nova command and a port will implicitly be created for 
you (nova boot --nic net-id=net-uuid).

My question is: is #2 sufficient to cover all the scenarios? In other 
words, if we are not allowed to use #1 (and can only use #2 to create a VM), would we 
miss anything?

Regards!
Wanjing Xu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-02-09 Thread Morgan Fainberg
On February 9, 2015 at 1:25:58 PM, Jay Pipes (jaypi...@gmail.com) wrote:
On 01/20/2015 10:54 AM, Brian Rosmaita wrote: 
 From: Kevin L. Mitchell [kevin.mitch...@rackspace.com] 
 Sent: Monday, January 19, 2015 4:54 PM 
 
 When we look at consistency, we look at everything else in OpenStack. 
 From the standpoint of the nova API (with which I am the most familiar), 
 I am not aware of any property that is ever omitted from any payload 
 without versioning coming in to the picture, even if its value is null. 
 Thus, I would argue that we should encourage the first situation, where 
 all properties are included, even if their value is null. 
 
 That is not the case for the Images API v2: 
 
 An image is always guaranteed to have the following attributes: id, 
 status, visibility, protected, tags, created_at, file and self. The other 
 attributes defined in the image schema below are guaranteed to 
 be defined, but is only returned with an image entity if they have 
 been explicitly set. [1] 

This was a mistake, IMHO. Having entirely extensible schemas means that 
there is little guaranteed consistency across implementations of the API. 

This is the same reason that I think API extensions are an abomination. 


This right here! +1. +more than +1 if I get more votes on this.

Best, 
-jay 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [devstack] configuring https for glance client

2015-02-09 Thread Matt Riedemann



On 2/9/2015 5:40 PM, Andrew Lazarev wrote:

Hi Nova experts,

Some time ago I figured out that devstack fails to stack with the
USE_SSL=True option because it doesn't configure nova to work with
secured glance [1]. Support for secured glance was added to nova in the Juno
cycle [2], but it looks strange to me.

The glance client takes its settings from the '[ssl]' section. The same section is
used to set up nova's server SSL settings. Other clients have separate
sections in the config file (and are switching to session use now), e.g.
related code for cinder - [3].

I've created a quick fix for devstack - [4], but it would be nice to
shed some light on nova's plans around the glance config before merging a
workaround into devstack.

So, the questions are:
1. Is it normal that the glance client reads from the '[ssl]' config section?
2. Is there a plan to move the glance client to session use and move the
corresponding config section to '[glance]'?
3. Are there any plans to run CI for the USE_SSL=True use case?

[1] - https://bugs.launchpad.net/devstack/+bug/1405484
[2] - https://review.openstack.org/#/c/72974
[3] -
https://github.com/openstack/nova/blob/2015.1.0b2/nova/volume/cinder.py#L73
[4] - https://review.openstack.org/#/c/153737

Thanks,
Andrew.
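
For illustration, here is a hedged oslo.config sketch of the two patterns being contrasted; the option names are illustrative, not nova's exact ones:

```python
# Sketch with oslo.config; option names are illustrative only.
from oslo_config import cfg

CONF = cfg.CONF

# Pattern 1: a shared [ssl] group, also used for nova's own server SSL,
# which is where the glance client currently reads its settings from.
ssl_opts = [
    cfg.StrOpt('ca_file', help='CA certificate file for verifying servers'),
]
CONF.register_opts(ssl_opts, group='ssl')

# Pattern 2: a client-specific group, like [cinder] in nova/volume/cinder.py [3];
# a hypothetical [glance] group would keep glance client settings separate.
glance_opts = [
    cfg.StrOpt('cafile', help='CA certificate file for the glance client'),
    cfg.BoolOpt('insecure', default=False,
                help='Allow unverified SSL when talking to glance'),
]
CONF.register_opts(glance_opts, group='glance')
```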





This came up in another -dev thread at one point, which prompted a series 
from Matthew Gilliard [1] to use [ssl] globally or project-specific 
options, since both glance and keystone are currently getting their ssl 
options from the global [ssl] group in nova.


I've been a bad citizen and haven't gotten back to the series review yet.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:ssl-config-options,n,z


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [KEYSTONE] debugging keystone code

2015-02-09 Thread Steve Martinelli
If you are running keystone under apache, and just want to see
what's going on: rpdb - https://pypi.python.org/pypi/rpdb/

Insert `import rpdb; rpdb.set_trace()` into your code,
and in another prompt, type in `nc 127.0.0.1 `
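
A minimal sketch of the rpdb approach described above (the exact listening port depends on how rpdb is configured, so it is left as a placeholder):

```python
# Drop these two lines into the keystone code path you want to inspect.
# Under apache/mod_wsgi there is no usable stdin/stdout for plain pdb,
# so rpdb opens the debugger on a local TCP port instead.
import rpdb
rpdb.set_trace()

# Then, from another shell on the same host, attach with netcat:
#   nc 127.0.0.1 <port>
```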

If you are attempting to debug tests, use the debug environment:
`tox -e debug test_case_name`

Steve

Abhishek Talwar/HYD/TCS abhishek.tal...@tcs.com wrote on 02/10/2015 
12:45:53 AM:

 From: Abhishek Talwar/HYD/TCS abhishek.tal...@tcs.com
 To: openstack-dev@lists.openstack.org
 Date: 02/10/2015 12:50 AM
 Subject: [openstack-dev]  [KEYSTONE] debugging keystone code
 
 Hi All,
 
 I am working on a bug in keystone (#1392035) and while debugging the 
 code I am having a problem. I have inserted pdb on both the client side 
 and the server side. While it allows me to debug the code on the client
 side, on the server side it gives me a BdbQuit error. 
 
 So how can we debug the code on the keystone server side? Kindly help 
 me with this.
 
 
 -- 
 Thanks and Regards
 Abhishek Talwar
 Employee ID : 770072
 Assistant System Engineer
 Tata Consultancy Services,Gurgaon
 India
 Contact Details : +918377882003
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Steven Dake (stdake)


From: Jay Lau jay.lau@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 9, 2015 at 11:31 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

So you mean we should focus on the docker and k8s schedulers? I was a bit confused: 
why do we need to care about k8s? The k8s cluster is created by heat, and once it 
is created, k8s has its own scheduler for creating pods/services/rcs.

So it seems we only need to care about a scheduler for the native docker and ironic bays, 
comments?

Ya scheduler only matters for native docker.  Ironic bay can be k8s or 
docker+swarm or something similar.

But yup, I understand your point.


Thanks!

2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:


 From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



 On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com wrote:


On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

Adrian Otto wrote:
 [...]
 We have multiple options for solving this challenge. Here are a few:

 1) Cherry pick scheduler code from Nova, which already has a working a
filter scheduler design.
 2) Integrate swarmd to leverage its scheduler[2].
 3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
This is expected to happen about a year from now, possibly sooner.
 4) Write our own filter scheduler, inspired by Nova.

I haven't looked enough into Swarm to answer that question myself, but
how much would #2 tie Magnum to Docker containers ?

There is value for Magnum to support other container engines / formats
(think Rocket/Appc) in the long run, so we should avoid early design
choices that would prevent such support in the future.

Thierry,
Magnum has an object type of a bay which represents the underlying cluster
architecture used.  This could be kubernetes, raw docker, swarmd, or some
future invention.  This way Magnum can grow independently of the
underlying technology and provide a satisfactory user experience dealing
with the chaos that is the container development world :)

While I don't disagree with anything said here, this does sound a lot like 
https://xkcd.com/927/


Andrew had suggested offering a unified standard user experience and API.  I 
think that matches the 927 comic pretty well.  I think we should offer each 
type of system using APIs that are similar in nature but that offer the native 
features of the system.  In other words, we will offer integration across the 
various container landscape with OpenStack.

We should strive to be conservative and pragmatic in our systems support and 
only support container schedulers and container managers that have become 
strongly emergent systems.  At this point that is docker and kubernetes.  Mesos 
might fit that definition as well.  Swarmd and rocket are not yet strongly 
emergent, but they show promise of becoming so.  As a result, they are clearly 
systems we should be thinking about for our roadmap.  All of these systems 
present very similar operational models.

At some point competition will choke off new system design placing an upper 
bound on the amount of systems we have to deal with.

Regards
-steve



We will absolutely support relevant container technology, likely through
new Bay formats (which are really just heat templates).

Regards
-steve


--
Thierry Carrez (ttx)

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [nova] SR-IOV IRC meeting for 2/10

2015-02-09 Thread Robert Li (baoli)
Hi,

I won’t be able to make it for tomorrow’s meeting. But you guys are welcome to 
have the meeting without me.

—Robert
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-02-09 Thread Joe Gordon
On Mon, Feb 9, 2015 at 1:22 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 01/20/2015 10:54 AM, Brian Rosmaita wrote:

 From: Kevin L. Mitchell [kevin.mitch...@rackspace.com]
 Sent: Monday, January 19, 2015 4:54 PM

  When we look at consistency, we look at everything else in OpenStack.
  From the standpoint of the nova API (with which I am the most familiar),
 I am not aware of any property that is ever omitted from any payload
 without versioning coming in to the picture, even if its value is null.
 Thus, I would argue that we should encourage the first situation, where
 all properties are included, even if their value is null.


 That is not the case for the Images API v2:

 An image is always guaranteed to have the following attributes: id,
 status, visibility, protected, tags, created_at, file and self. The other
 attributes defined in the image schema below are guaranteed to
 be defined, but is only returned with an image entity if they have
 been explicitly set. [1]


 This was a mistake, IMHO. Having entirely extensible schemas means that
 there is little guaranteed consistency across implementations of the API.


+1. Subtle, hard-to-discover differences between clouds are a pain for
interchangeability.



 This is the same reason that I think API extensions are an abomination.

 Best,
 -jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-09 Thread melanie witt
On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 We haven't done a release of python-novaclient in awhile (2.20.0 was released 
 on 2014-9-20 before the Juno release).
 
 It looks like there are some important feature adds and bug fixes on master 
 so we should do a release, specifically to pick up the change for keystone v3 
 support [1].
 
 So can this be done now or should this wait until closer to the Kilo release 
 (library releases are cheap so I don't see why we'd wait).

Thanks for bringing this up -- there are indeed a lot of important features and 
fixes on master.

I agree we should do a release as soon as possible, and I don't think there's 
any reason to wait until closer to Kilo.

melanie (melwitt)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-09 Thread yuntong


On 2015-02-10 05:12, gordon chung wrote:

 In the nova API, an api.fault notification is sent out when there is an error.
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n119
 but I couldn't find where these are processed in ceilometer.
 An error notification is very desirable to collect; do we have a plan to
 add this, and do I need a blueprint to do that?
there's a patch for review to store error info: 
https://review.openstack.org/#/c/153362/


cheers,
/gord/

Yep, that's what I'm looking for, thanks.
Another notification from nova that is missed in ceilometer is the info from the 
nova api:

https://github.com/openstack/nova/blob/master/nova/notifications.py#L64
This notify_decorator decorates every nova/ec2 REST API and sends out 
a notification for each API action:

https://github.com/openstack/nova/blob/master/nova/utils.py#L526
It sends out notifications of the form '%s.%s.%s' % (module, key, 
method),

and there is no notification plugin in ceilometer to deal with them.
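
For illustration, a self-contained sketch (not nova's actual notify_decorator) of the pattern being described, showing how an event type of the form module.key.method is composed for every decorated API call:

```python
# Self-contained sketch of a notify-style decorator; NOT nova's actual code,
# just an illustration of how the '%s.%s.%s' event type is put together.
import functools
import logging

LOG = logging.getLogger(__name__)


def notify_decorator(name, fn):
    """Wrap an API callable and emit one notification per call."""

    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        result = fn(*args, **kwargs)
        event_type = '%s.%s.%s' % (fn.__module__, name, fn.__name__)
        # A real implementation would hand this to a notifier (and a
        # ceilometer plugin would have to subscribe to it); here we just log.
        LOG.info('notification: %s', event_type)
        return result

    return wrapped
```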
Let me know if I should file a bug for this.
Thanks,

-yuntong


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/3

2015-02-09 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-09 Thread Michael Still
The previous policy is that we do a release when requested or when a
critical bug fix merges. I don't see any critical fixes awaiting
release, but I am not opposed to a release.

The reason I didn't do this yesterday is that Joe wanted some time to
pin the stable requirements, which I believe he is still working on.
Let's give him some time unless this is urgent.

Michael

On Tue, Feb 10, 2015 at 2:45 PM, melanie witt melwi...@gmail.com wrote:
 On Feb 6, 2015, at 8:17, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 We haven't done a release of python-novaclient in awhile (2.20.0 was 
 released on 2014-9-20 before the Juno release).

 It looks like there are some important feature adds and bug fixes on master 
 so we should do a release, specifically to pick up the change for keystone 
 v3 support [1].

 So can this be done now or should this wait until closer to the Kilo release 
 (library releases are cheap so I don't see why we'd wait).

 Thanks for bringing this up -- there are indeed a lot of important features 
 and fixes on master.

 I agree we should do a release as soon as possible, and I don't think there's 
 any reason to wait until closer to Kilo.

 melanie (melwitt)









-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Create VM using port-create vs nova boot only?

2015-02-09 Thread Feodor Tersin
Hi

When you create a port separately, you can specify additional fixed IPs and
extra DHCP options, but with 'nova boot' you cannot.
Also, if you need an instance with several NICs and you want each NIC to have
its own set of security groups, you should create the ports separately,
because the 'nova boot --security-groups ggg' command sets the specified
security groups on every port that is created during the instance launch.

On Tue, Feb 10, 2015 at 9:21 AM, Wanjing Xu wanjing...@hotmail.com wrote:

 There seemed to be two ways to create a VM via cli:

 1) use neutron command to create a port first and then use nova command to
 attach the vm to that port(neutron port-create.. followed by nova boot
 --nic port-id=)
 2)Just use nova command and a port will implicitly be created for you(nova
 boot --nic net-id=net-uuid).

 My question is : is #2 sufficient enough to cover all the scenarios?  In
 other words, if we are not allowed to use #1(can only use #2 to create vm),
 would we miss anything?

 Regards!
 Wanjing Xu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Steve, I just want to discuss this a bit more. Per Andrew's
comments, we need a generic scheduling interface, but if our focus is
native docker, is this still needed? Thanks!

2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 11:31 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

  So you mean we should focus on docker and k8s scheduler? I was a bit
 confused, why do we need to care k8s? As the k8s cluster was created by
 heat and once the k8s was created, the k8s has its own scheduler for
 creating pods/service/rcs.

  So seems we only need to care scheduler for native docker and ironic bay,
 comments?


  Ya scheduler only matters for native docker.  Ironic bay can be k8s or
 docker+swarm or something similar.

  But yup, I understand your point.


 Thanks!

 2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



  On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying
 cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user experience dealing
 with the chaos that is the container development world :)


  While I don't disagree with anything said here, this does sound a lot
 like https://xkcd.com/927/



 Andrew had suggested offering a unified standard user experience and
 API.  I think that matches the 927 comic pretty well.  I think we should
 offer each type of system using APIs that are similar in nature but that
 offer the native features of the system.  In other words, we will offer
 integration across the various container landscape with OpenStack.

  We should strive to be conservative and pragmatic in our systems
 support and only support container schedulers and container managers that
 have become strongly emergent systems.  At this point that is docker and
 kubernetes.  Mesos might fit that definition as well.  Swarmd and rocket
 are not yet strongly emergent, but they show promise of becoming so.  As a
 result, they are clearly systems we should be thinking about for our
 roadmap.  All of these systems present very similar operational models.

  At some point competition will choke off new system design placing an
 upper bound on the amount of systems we have to deal with.

  Regards
 -steve



 We will absolutely support relevant container technology, likely through
 new Bay formats (which are really just heat templates).

 Regards
 -steve

 
 --
 Thierry Carrez (ttx)
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 

Re: [openstack-dev] [mistral] mistral actions plugin architecture

2015-02-09 Thread Renat Akhmerov
Hi,

It’s pretty simple and described in
http://mistral.readthedocs.org/en/master/developer/writing_a_plugin_action.html.
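In short, a custom action is just a Python class with a run() method,
registered under the 'mistral.actions' entry point namespace. A minimal sketch
(the module, class and entry point names below are made up):

    # my_actions.py
    from mistral.actions import base

    class ConcatAction(base.Action):
        def __init__(self, left, right):
            self.left = left
            self.right = right

        def run(self):
            # Whatever run() returns becomes the action result in the workflow.
            return '%s%s' % (self.left, self.right)

        def test(self):
            # Optional: value returned when the workflow runs in test mode.
            return 'test'

    # Registered in the plugin package's setup.cfg, for example:
    # [entry_points]
    # mistral.actions =
    #     my_company.concat = my_actions:ConcatAction

So a project like murano could ship its actions in its own package and register
them this way, without hardcoding anything in Mistral.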

Renat Akhmerov
@ Mirantis Inc.



 On 09 Feb 2015, at 21:43, Filip Blaha filip.bl...@hp.com wrote:
 
 Hi all,
 
 Regarding [1], there should be some plugin mechanism for custom actions in 
 Mistral. I went through code and I found some introspection mechanism [2] 
 generating mistral actions from methods on client classes for openstack core 
 projects. E.g. it takes nova client class (python-novaclient) and introspects 
 its methods and theirs parameters and creates corresponding actions with 
 corresponding parameters. The same for other core projects like neutron, 
 cinder, ... However the list of  these client classes seems to be hardcoded 
 [3].  So I am not sure whether this mechanism can be used for other projects 
 like murano client to create murano related actions in mistral? Or is there 
 any other pluggable mechanism to get murano actions into mistral without 
 hardcoding in mistral project?
 
 [1] 
 https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign#Plugin_Architecture
  
 [2] 
 https://github.com/stackforge/mistral/blob/master/mistral/actions/openstack/action_generator/base.py#L91
  
 [3] 
 https://github.com/stackforge/mistral/blob/master/mistral/actions/generator_factory.py
  
 
 
 Regards
 Filip
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Feature Freeze Exception Request - bp/linux-systemz

2015-02-09 Thread Andreas Maier

Hello,
I would like to ask for the following feature freeze exception in Cinder.

The patch set below is part of this blueprint:
https://blueprints.launchpad.net/cinder/+spec/linux-systemz

1. https://review.openstack.org/149256 - FCP support for System z

   Title: Adjust Cinder to support FCP on System z systems

   What it does: This patch set enables FCP support when the Cinder
   services run on System z.

   Impact if we don't get this: Cinder services cannot run on Linux for
   System z. A mitigation is to run Cinder services in on x86 Linux (even
   in an OpenStack installation that includes compute nodes with KVM on
   System z).

   Why we need it: We'd like to be able to support OpenStack installations
   where Cinder services run on System z Linux, which is the case for
   example in an all-in-one topology.

   Additional notes: The code in Nova patch set
   https://review.openstack.org/149256 is consistent with this patch set,
   but a decision to include them in kilo can be made independently for
   each of the two patch sets: The Nova patch set enables FCP storage for a
   compute node with KVM on System z, while the Cinder patch set enables
   Cinder services to run on System z Linux.

Andy

Andreas Maier
IBM Senior Technical Staff Member, Systems Management Architecture  Design
IBM Research  Development Laboratory Boeblingen, Germany
mai...@de.ibm.com, +49-7031-16-3654

IBM Deutschland Research  Development GmbH
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Mid-Cycle Meetup Planning

2015-02-09 Thread Adrian Otto
Thierry,

Done! Thanks for the great suggestion.

Cheers,

Adrian

On Feb 9, 2015, at 1:51 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
 Team,
 
 Our dates have been set as 2015-03-02 and 2015-03-03.
 
 Wiki (With location, map, calendar links, agenda planning link, and links to 
 tickets):
 https://wiki.openstack.org/wiki/Magnum/Midcycle
 
 You can also add a line to the reference list at:
 https://wiki.openstack.org/wiki/Sprints
 
 Cheers,
 
 -- 
 Thierry Carrez (ttx)
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception Request - bp/libvirt-kvm-systemz

2015-02-09 Thread Andreas Maier

Hello,
I would like to ask for the following feature freeze exceptions in Nova.

The patch sets below are all part of this blueprint:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-kvm-systemz,n,z
and affect only the kvm/libvirt driver of Nova.

The decision for merging these patch sets by exception can be made one by
one; they are independent of each other.

1. https://review.openstack.org/149242 - FCP support

   Title: libvirt: Adjust Nova to support FCP on System z systems

   What it does: This patch set enables FCP support for KVM on System z.

   Impact if we don't get this: FCP attached storage does not work for KVM
   on System z.

   Why we need it: We really depend on this particular patch set, because
   FCP is our most important storage attachment.

   Additional notes: The code in the libvirt driver that is updated by this
   patch set is consistent with corresponding code in the Cinder driver,
   and it has seen review by the Cinder team.

2. https://review.openstack.org/150505 - Console support

   Title: libvirt: Enable serial_console feature for system z

   What it does: This patch set enables the backing support in Nova for the
   interactive console in Horizon.

   Impact if we don't get this: Console in Horizon does not work. The
   mitigation for a user would be to use the Log in Horizon (i.e. with
   serial_console disabled), or the virsh console command in an ssh
   session to the host Linux.

   Why we need it: We'd like to have console support. Also, because the
   Nova support for the Log in Horizon has been merged in an earlier patch
   set as part of this blueprint, this remaining patch set makes the
   console/log support consistent for KVM on System z Linux.

3. https://review.openstack.org/150497 - ISO/CDROM support

   Title: libvirt: Set SCSI as the default cdrom bus on System z

   What it does: This patch set enables that cdrom drives can be attached
   to an instance on KVM on System z. This is needed for example for
   cloud-init config files, but also for simply attaching ISO images to
   instances. The technical reason for this change is that the IDE
   attachment is not available on System z, and we need SCSI (just like
   Power Linux).

   Impact if we don't get this:
  - Cloud-init config files cannot be on a cdrom drive. A mitigation
 for a user would be to have such config files on a cloud-init
 server.
  - ISO images cannot be attached to instances. There is no mitigation.

   Why we need it: We would like to avoid having to restrict cloud-init
   configuration to just using cloud-init servers. We would like to be able
   to support ISO images.

   Additional notes: This patch is a one line change (it simply extends
   what is already done in a platform specific case for the Power platform,
   to be also used for System z).

Andy

Andreas Maier
IBM Senior Technical Staff Member, Systems Management Architecture  Design
IBM Research  Development Laboratory Boeblingen, Germany
mai...@de.ibm.com, +49-7031-16-3654

IBM Deutschland Research  Development GmbH
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschaeftsfuehrung: Dirk Wittkopp
Sitz der Gesellschaft: Boeblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Theory of Testing Cross Project Spec

2015-02-09 Thread Matthew Booth
On 09/02/15 13:59, Sean Dague wrote:
 Comments are welcomed, please keep typos / grammar as '0' comments to
 separate them from -1 comments. Also, I ask any -1s to be extremely
 clear about the core concern of the -1 so we can figure out how to make
 progress.

Incidentally, I think these are good guidelines for comments on all reviews.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Steven Dake (stdake)


From: Andrew Melton andrew.mel...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, February 9, 2015 at 10:38 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

I think Sylvain is getting at an important point. Magnum is trying to be as 
agnostic as possible when it comes to selecting a backend. Because of that, I 
think the biggest benefit to Magnum would be a generic scheduling interface 
that each pod type would implement. A pod type with a backend providing 
scheduling could implement a thin scheduler that simply translates the generic 
requests into something the backend can understand. And a pod type requiring 
outside scheduling could implement something more heavy.

If we are careful to keep the heavy scheduling generic enough to be shared 
between backends requiring it, we could hopefully swap in an implementation 
using Gantt once that is ready.

Great mid-cycle topic discussion topic.  Can you add it to the planning 
etherpad?

Thanks
-steve

--Andrew


From: Jay Lau [jay.lau@gmail.com]
Sent: Monday, February 09, 2015 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Thanks Sylvain, we have not worked out the API requirements yet, but I think
the requirements should be similar to nova's: we need select_destination to
select the best target host based on filters and weights.

There are also some discussions here 
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza 
sba...@redhat.com:
Hi Magnum team,


Le 07/02/2015 19:24, Steven Dake (stdake) a écrit :


From: Eric Windisch e...@windisch.us
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Saturday, February 7, 2015 at 10:09 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry pick scheduler code from Nova, which already has a working a filter 
scheduler design.

The Gantt team explored that option in the Icehouse cycle and it failed with a
lot of problems. I won't list all of those, but I'll just explain that we
discovered how tightly the Scheduler and the Nova compute manager were coupled,
which meant that a repository fork was really difficult to do without first
reducing the tech debt.

That said, our concerns were far different from the Magnum team : it was about 
having feature parity and replacing the current Nova scheduler, while your team 
is just saying that they want to have something about containers.


2) Integrate swarmd to leverage its scheduler[2].

I see #2 as not an alternative but possibly an also. Swarm uses the Docker 
API, although they're only about 75% compatible at the moment. Ideally, the 
Docker backend would work with both single docker hosts and clusters of Docker 
machines powered by Swarm. It would be nice, however, if scheduler hints could 
be passed from Magnum to Swarm.

Regards,
Eric Windisch

Adrian  Eric,

I would prefer to keep things simple and just integrate directly with swarm and 
leave out any cherry-picking from Nova. It would be better to integrate 
scheduling hints into Swarm, but I’m sure the swarm upstream is busy with 
requests and this may be difficult to achieve.


I don't want to give my opinion about which option you should take as I don't 
really know your needs. If I understand correctly, this is about having a 
scheduler providing affinity rules for containers. Do you have a document 
explaining which interfaces you're looking for, which kind of APIs you're 
wanting or what's missing with the current Nova scheduler ?

MHO is that the technology shouldn't drive your decision : whatever the backend 
is (swarmd or an inherited nova scheduler), your interfaces should be the same.

-Sylvain


Regards
-steve




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Matt Riedemann



On 2/9/2015 12:23 PM, Joe Gordon wrote:


On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:
 
  There are at least two blocking bugs:
 
  1. https://bugs.launchpad.net/grenade/+bug/1419913
 
  Sounds like jogo is working a javelin fix for this. I'm not aware of
a patch to review though.

We need to stop trying to install tempest in the same env as stable/* code.

I should be able to revise/respond to comments shortly.

https://review.openstack.org/#/c/153080/

https://review.openstack.org/#/c/153702/

This is also blocking my effort to pin stable dependencies (Dean's
devstack changes are needed before we can pin stable dependencies as well).

 
  2. https://bugs.launchpad.net/ceilometer/+bug/1419919
 
  I'm not sure yet what's going on with this one.
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Tracking etherpad:

https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Gravel, Julie Chongcharoen
Hello,
I want to use oslo.messaging.RPCClient.call() to invoke a 
method on multiple servers, but not all of them. Can this be done and how? I 
read the code documentation (client.py and target.py). I only saw either the 
call used for one server at a time, or for all of them using the fanout param. 
Neither option is exactly what I want.
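The closest I have come up with is preparing one call per server in the subset,
something like this (a rough sketch; the transport setup, topic and method
names are made up, and newer releases import the library as oslo_messaging):

    from oslo import messaging
    from oslo.config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='my-topic', version='1.0')
    client = messaging.RPCClient(transport, target)

    # Invoke the method on a chosen subset of servers, one call per server.
    results = {}
    for host in ('server-1', 'server-3'):
        cctxt = client.prepare(server=host)
        results[host] = cctxt.call({}, 'do_something', arg='value')

Is there a cleaner way to address a subset of servers directly?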
Any response/explanation would be highly appreciated.

Regards,
Julie Gravel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Sylvain Bauza

Hi Magnum team,


Le 07/02/2015 19:24, Steven Dake (stdake) a écrit :



From: Eric Windisch e...@windisch.us
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

Date: Saturday, February 7, 2015 at 10:09 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry pick scheduler code from Nova, which already has a
working a filter scheduler design.



The Gantt team explored that option in the Icehouse cycle and it failed
with a lot of problems. I won't list all of those, but I'll just explain
that we discovered how tightly the Scheduler and the Nova compute manager
were coupled, which meant that a repository fork was really difficult to do
without first reducing the tech debt.


That said, our concerns were far different from the Magnum team : it was 
about having feature parity and replacing the current Nova scheduler, 
while your team is just saying that they want to have something about 
containers.




2) Integrate swarmd to leverage its scheduler[2].


I see #2 as not an alternative but possibly an also. Swarm uses
the Docker API, although they're only about 75% compatible at the
moment. Ideally, the Docker backend would work with both single
docker hosts and clusters of Docker machines powered by Swarm. It
would be nice, however, if scheduler hints could be passed from
Magnum to Swarm.

Regards,
Eric Windisch


Adrian  Eric,

I would prefer to keep things simple and just integrate directly with 
swarm and leave out any cherry-picking from Nova. It would be better 
to integrate scheduling hints into Swarm, but I’m sure the swarm 
upstream is busy with requests and this may be difficult to achieve.




I don't want to give my opinion about which option you should take as I 
don't really know your needs. If I understand correctly, this is about 
having a scheduler providing affinity rules for containers. Do you have 
a document explaining which interfaces you're looking for, which kind of 
APIs you're wanting or what's missing with the current Nova scheduler ?


MHO is that the technology shouldn't drive your decision : whatever the 
backend is (swarmd or an inherited nova scheduler), your interfaces 
should be the same.


-Sylvain



Regards
-steve



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Adrian Otto
I think it’s fair to assert that our generic scheduling interface should be 
based on Gantt. When that approaches a maturity point where it’s appropriate to 
leverage Gantt for container use cases, we should definitely consider switching 
to that. We should remain engaged in Gantt design decisions along the way to 
provide input.

In the short term we want a solution that works nicely for our Docker handler, 
because that’s an obvious functionality gap. The k8s handler already has a 
scheduler, so it can remain unchanged. Let’s not fall into a trap of 
over-engineering the scheduler, as that can be very tempting but yield limited 
value.

My suggestion is that we focus on the right solution for the Docker backend for 
now, and keep in mind that we want a general purpose scheduler in the future 
that could be adapted to work with a variety of container backends.

I want to recognize that Andrew’s thoughts are well considered to avoid rework 
and remain agnostic about container backends. Further, I think resource 
scheduling is the sort of problem domain that would lend itself well to a 
common solution with numerous use cases. If you look at the various ones that 
exist today, there are lots of similarities. We will find a multitude of 
scheduling algorithms, but probably not uniquely innovative scheduling 
interfaces. The interface to a scheduler will be relatively simple, and we 
could afford to collaborate a bit with the Gantt team to get solid ideas on the 
table for that. Let’s table that pursuit for now, and re-engage at our Midcycle 
meetup to explore that topic further. In the mean time, I’d like us to iterate 
on a suitable point solution for the Docker backend. A final iteration of that 
work may be to yank it completely, and replace it with a common scheduler at a 
later point. I’m willing to accept that tradeoff for a quick delivery of a 
Docker specific scheduler that we can learn from and iterate.

Cheers,

Adrian

On Feb 9, 2015, at 10:57 PM, Jay Lau 
jay.lau@gmail.com wrote:

Thanks Steve, just want to discuss more for this. Then per Andrew's comments, 
we need a generic scheduling interface, but if our focus is native docker, then 
does this still needed? Thanks!

2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) 
std...@cisco.com:


From: Jay Lau jay.lau@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, February 9, 2015 at 11:31 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

So you mean we should focus on docker and k8s scheduler? I was a bit confused, 
why do we need to care k8s? As the k8s cluster was created by heat and once the 
k8s was created, the k8s has its own scheduler for creating pods/service/rcs.

So seems we only need to care scheduler for native docker and ironic bay, 
comments?

Ya scheduler only matters for native docker.  Ironic bay can be k8s or 
docker+swarm or something similar.

But yup, I understand your point.


Thanks!

2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) 
std...@cisco.com:


From: Joe Gordon joe.gord...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, February 9, 2015 at 6:41 PM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) 
std...@cisco.com wrote:


On 2/9/15, 3:02 AM, Thierry Carrez 
thie...@openstack.org wrote:

Adrian Otto wrote:
 [...]
 We have multiple options for solving this challenge. Here are a few:

 1) Cherry pick scheduler code from Nova, which already has a working a
filter scheduler design.
 2) Integrate swarmd to leverage its scheduler[2].
 3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
This is expected to happen about a year from now, possibly sooner.
 4) Write our own filter scheduler, inspired by Nova.

I haven't looked enough into Swarm to answer that question myself, but
how much would #2 tie Magnum to Docker containers ?

There is value for Magnum to support other container engines / formats
(think Rocket/Appc) in the long run, so we should avoid early design
choices that would prevent such support in the future.

Thierry,
Magnum has an object type of a bay which represents the underlying cluster
architecture used.  This could be kubernetes, raw docker, swarmd, or some
future invention.  This way Magnum can grow 

Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2015-02-09 Thread Oleg Bondarev
On Mon, Feb 9, 2015 at 8:50 PM, Feodor Tersin fter...@cloudscaling.com
wrote:

 nova boot ... --nic port-id=xxx --nic net-id=yyy
 this case is valid, right?
 I.e. i want to boot instance with two ports. The first port is specified,
 but the second one is created at network mapping stage.
 If i specify a security group as well, it will be used for the second port
 (if not - default group will):
 nova boot ... --nic port-id=xxx --nic net-id=yyy --security-groups sg-1
 Thus a port and a security group can be specified together.


The question here is: what do you expect for the existing port - its
security groups updated or not?
Will it be ok to silently (or with warning in logs) ignore security groups
for it?
If it's ok then is it ok to do the same for:
nova boot ... --nic port-id=xxx --security-groups sg-1
where the intention is clear enough?



 On Mon, Feb 9, 2015 at 7:14 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
  wrote:



 On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

 On Fri, 26 Sep 2014 11:25:49 +0400
 Oleg Bondarev obonda...@mirantis.com wrote:

  On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:

I think the expectation is that if a user is already interaction
 with Neutron to create ports then they should do the security group
 assignment in Neutron as well.


 Agree. However what do you think a user expects when he/she boots a
 vm (no matter providing port_id or just net_id)
 and specifies security_groups? I think the expectation should be that
 instance will become a member of the specified groups.
 Ignoring security_groups parameter in case port is provided (as it is
 now) seems completely unfair to me.


 One option would be to return a 400 if both port id and security_groups
 is supplied.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Coming back to this, we now have a change from Oleg [1] after an initial
 attempt that was reverted because it would break server creates if you
 specified a port (because the original change would blow up when the
 compute API added the 'default' security group to the request').

 The new change doesn't add the 'default' security group to the request so
 if you specify a security group and port on the request, you'll now get a
 400 error response.

 Does this break API compatibility?  It seems this falls under the first
 bullet here [2], A change such that a request which was successful before
 now results in an error response (unless the success reported previously
 was hiding an existing error condition).  Does that caveat in parenthesis
 make this OK?

 It seems like we've had a lot of talk about warts in the compute v2 API
 for cases where an operation is successful but didn't yield the expected
 result, but we can't change them because of API backwards compatibility
 concerns so I'm hesitant on this.

 We also definitely need a Tempest test here, which I'm looking into.  I
 think I can work this into the test_network_basic_ops scenario test.

 [1] https://review.openstack.org/#/c/154068/
 [2] https://wiki.openstack.org/wiki/APIChangeGuidelines#
 Generally_Not_Acceptable

 --

 Thanks,

 Matt Riedemann


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Thanks Adrian for the clarification, that is clear now.

OK, we can focus on the right solution for the Docker back-end for now.

2015-02-10 15:35 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  I think it’s fair to assert that our generic scheduling interface should
 be based on Gantt. When that approaches a maturity point where it’s
 appropriate to leverage Gantt for container use cases, we should definitely
 consider switching to that. We should remain engaged in Gantt design
 decisions along the way to provide input.

  In the short term we want a solution that works nicely for our Docker
 handler, because that’s an obvious functionality gap. The k8s handler
 already has a scheduler, so it can remain unchanged. Let’s not fall into a
 trap of over-engineering the scheduler, as that can be very tempting but
 yield limited value.

  My suggestion is that we focus on the right solution for the Docker
 backend for now, and keep in mind that we want a general purpose scheduler
 in the future that could be adapted to work with a variety of container
 backends.

  I want to recognize that Andrew’s thoughts are well considered to avoid
 rework and remain agnostic about container backends. Further, I think
 resource scheduling is the sort of problem domain that would lend itself
 well to a common solution with numerous use cases. If you look at the
 various ones that exist today, there are lots of similarities. We will find
 a multitude of scheduling algorithms, but probably not uniquely innovative
 scheduling interfaces. The interface to a scheduler will be relatively
 simple, and we could afford to collaborate a bit with the Gantt team to get
 solid ideas on the table for that. Let’s table that pursuit for now, and
 re-engage at our Midcycle meetup to explore that topic further. In the mean
 time, I’d like us to iterate on a suitable point solution for the Docker
 backend. A final iteration of that work may be to yank it completely, and
 replace it with a common scheduler at a later point. I’m willing to accept
 that tradeoff for a quick delivery of a Docker specific scheduler that we
 can learn from and iterate.

  Cheers,

  Adrian

  On Feb 9, 2015, at 10:57 PM, Jay Lau jay.lau@gmail.com wrote:

  Thanks Steve, just want to discuss more for this. Then per Andrew's
 comments, we need a generic scheduling interface, but if our focus is
 native docker, then does this still needed? Thanks!

 2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 11:31 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

  So you mean we should focus on docker and k8s scheduler? I was a bit
 confused, why do we need to care k8s? As the k8s cluster was created by
 heat and once the k8s was created, the k8s has its own scheduler for
 creating pods/service/rcs.

  So seems we only need to care scheduler for native docker and ironic
 bay, comments?


  Ya scheduler only matters for native docker.  Ironic bay can be k8s or
 docker+swarm or something similar.

  But yup, I understand your point.


 Thanks!

 2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



  On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working
 a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying
 cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or
 some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user 

[openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Matt Riedemann

There are at least two blocking bugs:

1. https://bugs.launchpad.net/grenade/+bug/1419913

Sounds like jogo is working a javelin fix for this. I'm not aware of a 
patch to review though.


2. https://bugs.launchpad.net/ceilometer/+bug/1419919

I'm not sure yet what's going on with this one.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] system information panel, update with heat-engine status

2015-02-09 Thread Manickam, Kanagaraj
Hi,

I am waiting for approval for K-3, as on the heat side this functionality is
already implemented. Could someone please approve
https://blueprints.launchpad.net/horizon/+spec/heat-engine-status-report

Thanks.

Regards
Kanagaraj M

From: Manickam, Kanagaraj
Sent: Thursday, February 05, 2015 11:13 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev][horizon] system information panel, update with 
heat-engine status

Hello Horizon Cores,

In K-2, Heat was enabled with a new REST API to report the running heat-engine
status. This is in line with how nova reports the nova-compute running status.
To surface this feature in horizon under the 'System Information' panel, a new
blueprint has been created at
https://blueprints.launchpad.net/horizon/+spec/heat-engine-status-report

Could one of you kindly approve it and target it for the K release, so that an
admin can view the status of the currently running heat-engine from horizon?

Thanks.

Regards
Kanagaraj M
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-09 Thread Attila Fazekas




- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org, Pavel Kholkin pkhol...@mirantis.com
 Sent: Wednesday, February 4, 2015 8:04:10 PM
 Subject: Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody 
 should know about Galera
 
 On 02/04/2015 12:05 PM, Sahid Orentino Ferdjaoui wrote:
  On Wed, Feb 04, 2015 at 04:30:32PM +, Matthew Booth wrote:
  I've spent a few hours today reading about Galera, a clustering solution
  for MySQL. Galera provides multi-master 'virtually synchronous'
  replication between multiple mysql nodes. i.e. I can create a cluster of
  3 mysql dbs and read and write from any of them with certain consistency
  guarantees.
 
  I am no expert[1], but this is a TL;DR of a couple of things which I
  didn't know, but feel I should have done. The semantics are important to
  application design, which is why we should all be aware of them.
 
 
  * Commit will fail if there is a replication conflict
 
  foo is a table with a single field, which is its primary key.
 
  A: start transaction;
  B: start transaction;
  A: insert into foo values(1);
  B: insert into foo values(1); -- 'regular' DB would block here, and
 report an error on A's commit
  A: commit; -- success
  B: commit; -- KABOOM
 
  Confusingly, Galera will report a 'deadlock' to node B, despite this not
  being a deadlock by any definition I'm familiar with.
 
 It is a failure to certify the writeset, which bubbles up as an InnoDB
 deadlock error. See my article here:
 
 http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/
 
 Which explains this.

I do not see why not to use `FOR UPDATE` even with multi-writer, or
whether the retry/swap way really solves anything here.

Using 'FOR UPDATE' with the 'repeatable read' isolation level still seems
more efficient, and it has several advantages.

* The SELECT with 'FOR UPDATE' will read the committed version, so you do not
  really need to worry about when the transaction actually started. You will
  get fresh data before reaching the actual UPDATE.

* In the article, the example query will not return the new version of the
  data in the same transaction even if you are retrying, so you need to
  restart the transaction anyway.

  When you are using the 'FOR UPDATE' way, if any other transaction
  successfully commits a conflicting row on any other galera writer, your
  pending transaction will be rolled back at your next statement, WITHOUT
  spending any time certifying that transaction. In this respect, checking
  the row count after the update (`compare and swap`) or handling an
  exception does not make any difference.

* Using FOR UPDATE in a galera transaction (multi-writer) is not more evil
  than using UPDATE; a concurrent commit invalidates both of them in the same
  way (DBDeadlock).

* With just a `single writer`, 'FOR UPDATE' does not let other threads do
  useless work while wasting resources.

* The swap way can also be rolled back by galera almost anywhere
  (DBDeadlock). In the end, the swap way looks like it just replaced the
  exception handling with a return code check + manual transaction restart.

Am I missing something?
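For reference, a minimal sketch of the two patterns I am comparing, in
SQLAlchemy-style code (the QuotaUsage model and column names are made up;
assume an open session):

    def consume_cas(session, usage_id, delta):
        # Compare-and-swap: no row lock; detect a lost race via the row count.
        usage = session.query(QuotaUsage).filter_by(id=usage_id).one()
        expected = usage.in_use
        rows = session.query(QuotaUsage).\
            filter_by(id=usage_id, in_use=expected).\
            update({'in_use': expected + delta})
        return rows == 1  # False -> another writer won the race; caller retries

    def consume_for_update(session, usage_id, delta):
        # SELECT ... FOR UPDATE: take the row lock up front instead.
        usage = session.query(QuotaUsage).\
            filter_by(id=usage_id).\
            with_for_update().one()
        usage.in_use += delta

Either way, on Galera a conflicting commit on another writer can still surface
as a DBDeadlock that has to be retried.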

  Yes ! and if I can add more information and I hope I do not make
  mistake I think it's a know issue which comes from MySQL, that is why
  we have a decorator to do a retry and so handle this case here:
 
 
  http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n177
 
 It's not an issue with MySQL. It's an issue with any database code that
 is highly contentious.
 
 Almost all highly distributed or concurrent applications need to handle
 deadlock issues, and the most common way to handle deadlock issues on
 database records is using a retry technique. There's nothing new about
 that with Galera.
 
 The issue with our use of the @_retry_on_deadlock decorator is *not*
 that the retry decorator is not needed, but rather it is used too
 frequently. The compare-and-swap technique I describe in the article
 above dramatically* reduces the number of deadlocks that occur (and need
 to be handled by the @_retry_on_deadlock decorator) and dramatically
 reduces the contention over critical database sections.
 
 Best,
 -jay
 
 * My colleague Pavel Kholkin is putting together the results of a
 benchmark run that compares the compare-and-swap method with the raw
 @_retry_on_deadlock decorator method. Spoiler: the compare-and-swap
 method cuts the runtime of the benchmark by almost *half*.
 
  Essentially, anywhere that a regular DB would block, Galera will not
  block transactions on different nodes. Instead, it will cause one of the
  transactions to fail on commit. This is still ACID, but the semantics
  are quite different.
 
  The impact of this is that code which makes correct use of locking may
  still fail with a 'deadlock'. The solution to this is to either fail the
  

Re: [openstack-dev] [cinder] Feature Freeze Exception Request - bp/linux-systemz

2015-02-09 Thread Mike Perez
On 17:31 Mon 09 Feb , Andreas Maier wrote:
 
 Hello,
 I would like to ask for the following feature freeze exception in Cinder.

Cinder is not in a feature freeze at the moment [1].

Additional notes: The code in Nova patch set
https://review.openstack.org/149256 is consistent with this patch set,
but a decision to include them in kilo can be made independently for
each of the two patch sets: The Nova patch set enables FCP storage for a
compute node with KVM on System z, while the Cinder patch set enables
Cinder services to run on System z Linux.

If it's not landing in Nova for Kilo [2], I'd rather not target it for Kilo in
Cinder. We're already really packed for K-3, and it's not useful without the
Nova piece. Please resubmit for L-1 in Cinder.

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055719.html
[2] - https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Joe Gordon
On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

 There are at least two blocking bugs:

 1. https://bugs.launchpad.net/grenade/+bug/1419913

 Sounds like jogo is working a javelin fix for this. I'm not aware of a
patch to review though.

We need to stop trying to install tempest in the same env as stable/* code.

I should be able to revise/respond to comments shortly.

https://review.openstack.org/#/c/153080/

https://review.openstack.org/#/c/153702/

This is also blocking my effort to pin stable dependencies (Dean's devstack
changes are needed before we can pin stable dependencies as well).


 2. https://bugs.launchpad.net/ceilometer/+bug/1419919

 I'm not sure yet what's going on with this one.

 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Andrew Melton
I think Sylvain is getting at an important point. Magnum is trying to be as 
agnostic as possible when it comes to selecting a backend. Because of that, I 
think the biggest benefit to Magnum would be a generic scheduling interface 
that each pod type would implement. A pod type with a backend providing 
scheduling could implement a thin scheduler that simply translates the generic 
requests into something the backend can understand. And a pod type requiring 
outside scheduling could implement something more heavy.

If we are careful to keep the heavy scheduling generic enough to be shared 
between backends requiring it, we could hopefully swap in an implementation 
using Gantt once that is ready.

--Andrew


From: Jay Lau [jay.lau@gmail.com]
Sent: Monday, February 09, 2015 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Thanks Sylvain, we have not worked out the API requirements yet, but I think
the requirements should be similar to nova's: we need select_destination to
select the best target host based on filters and weights.

There are also some discussions here 
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza 
sba...@redhat.com:
Hi Magnum team,


Le 07/02/2015 19:24, Steven Dake (stdake) a écrit :


From: Eric Windisch e...@windisch.us
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Saturday, February 7, 2015 at 10:09 AM
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry pick scheduler code from Nova, which already has a working a filter 
scheduler design.

The Gantt team explored that option in the Icehouse cycle and it failed with a
lot of problems. I won't list all of those, but I'll just explain that we
discovered how tightly the Scheduler and the Nova compute manager were coupled,
which meant that a repository fork was really difficult to do without first
reducing the tech debt.

That said, our concerns were far different from the Magnum team : it was about 
having feature parity and replacing the current Nova scheduler, while your team 
is just saying that they want to have something about containers.


2) Integrate swarmd to leverage its scheduler[2].

I see #2 as not an alternative but possibly an also. Swarm uses the Docker 
API, although they're only about 75% compatible at the moment. Ideally, the 
Docker backend would work with both single docker hosts and clusters of Docker 
machines powered by Swarm. It would be nice, however, if scheduler hints could 
be passed from Magnum to Swarm.

Regards,
Eric Windisch

Adrian  Eric,

I would prefer to keep things simple and just integrate directly with swarm and 
leave out any cherry-picking from Nova. It would be better to integrate 
scheduling hints into Swarm, but I’m sure the swarm upstream is busy with 
requests and this may be difficult to achieve.


I don't want to give my opinion about which option you should take as I don't 
really know your needs. If I understand correctly, this is about having a 
scheduler providing affinity rules for containers. Do you have a document 
explaining which interfaces you're looking for, which kind of APIs you're 
wanting or what's missing with the current Nova scheduler ?

MHO is that the technology shouldn't drive your decision : whatever the backend 
is (swarmd or an inherited nova scheduler), your interfaces should be the same.

-Sylvain


Regards
-steve




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2015-02-09 Thread Feodor Tersin
nova boot ... --nic port-id=xxx --nic net-id=yyy
this case is valid, right?
I.e. i want to boot instance with two ports. The first port is specified,
but the second one is created at network mapping stage.
If i specify a security group as well, it will be used for the second port
(if not - default group will):
nova boot ... --nic port-id=xxx --nic net-id=yyy --security-groups sg-1
Thus a port and a security group can be specified together.


On Mon, Feb 9, 2015 at 7:14 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 9/26/2014 3:19 AM, Christopher Yeoh wrote:

 On Fri, 26 Sep 2014 11:25:49 +0400
 Oleg Bondarev obonda...@mirantis.com wrote:

  On Fri, Sep 26, 2014 at 3:30 AM, Day, Phil philip@hp.com wrote:

I think the expectation is that if a user is already interaction
 with Neutron to create ports then they should do the security group
 assignment in Neutron as well.


 Agree. However what do you think a user expects when he/she boots a
 vm (no matter providing port_id or just net_id)
 and specifies security_groups? I think the expectation should be that
 instance will become a member of the specified groups.
 Ignoring security_groups parameter in case port is provided (as it is
 now) seems completely unfair to me.


 One option would be to return a 400 if both port id and security_groups
 is supplied.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Coming back to this, we now have a change from Oleg [1] after an initial
 attempt that was reverted because it would break server creates if you
 specified a port (because the original change would blow up when the
 compute API added the 'default' security group to the request').

 The new change doesn't add the 'default' security group to the request so
 if you specify a security group and port on the request, you'll now get a
 400 error response.

 Does this break API compatibility?  It seems this falls under the first
 bullet here [2], A change such that a request which was successful before
 now results in an error response (unless the success reported previously
 was hiding an existing error condition).  Does that caveat in parenthesis
 make this OK?

 It seems like we've had a lot of talk about warts in the compute v2 API
 for cases where an operation is successful but didn't yield the expected
 result, but we can't change them because of API backwards compatibility
 concerns so I'm hesitant on this.

 We also definitely need a Tempest test here, which I'm looking into.  I
 think I can work this into the test_network_basic_ops scenario test.

 [1] https://review.openstack.org/#/c/154068/
 [2] https://wiki.openstack.org/wiki/APIChangeGuidelines#
 Generally_Not_Acceptable

 --

 Thanks,

 Matt Riedemann


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Matt Riedemann



On 2/9/2015 12:03 PM, Matt Riedemann wrote:

There are at least two blocking bugs:

1. https://bugs.launchpad.net/grenade/+bug/1419913

Sounds like jogo is working a javelin fix for this. I'm not aware of a
patch to review though.

2. https://bugs.launchpad.net/ceilometer/+bug/1419919

I'm not sure yet what's going on with this one.



Looks like the versions haven't been bumped for the integrated release 
projects on stable/juno yet so I did that here:


https://review.openstack.org/#/q/Ib8a29258d99de75b49a9b19aef36bb99bc5fcac0,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-09 Thread Mike Perez
On 16:34 Mon 09 Feb , Nilesh P Bhosale wrote:
 Adding an ability to Add/Remove existing volumes to/from CG looks fine. 
 But, it does not help the use-case where one would want to directly delete 
 a volume from CG.
 Why do we force him to first remove a volume from CG and then delete?

Xing and I have already explained the reasons for this decision previously in
the thread. Besides the risk of it being done by accident, you're assuming that all
backends will handle directly removing a volume from a consistency group the
same way. I see a few ways they can handle it:

1) The backend errors on this, and the end user will never see the error,
   because it just goes to Cinder logs from the Cinder volume service.
2) The backend allows it, but the user still sees that volume as part of the
   consistency group, even though it was deleted, leaving things in a weird state.
3) The backend allows the delete and updates the consistency group accordingly.

With 72 different drivers, you can't make an assumption here.

 As CG goes along with replication and backends creating a separate pool 
 per CG, removing a volume from CG, just to be able to delete it in the 
 next step, may be an unnecessarily expensive operation.

Can you explain more how this is expensive? I would argue that deleting a volume
that's part of a consistency group by accident would be an
expensive mistake.

 In fact, I think whatever decision user takes, even to delete a normal 
 volume, is treated as his conscious decision.

We're humans, we make mistakes. I work on an interface that assumes this.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican][grenade][tempest][qa][ceilometer] Database Upgrade Testing for Incubated Projects

2015-02-09 Thread John Wood
Hello folks,

(Apologies for the numerous tags, but my question straddles multiple areas I 
believe)

I’m a core developer on the Barbican team and we are interested in database 
upgrade testing via Grenade. In addition to Grenade documentation I’ve looked 
at two blueprints [1][2] the Ceilometer project merged last year, and the 
CRs created for them. It would appear that in order to utilize Grenade testing 
for Barbican, we would need to submit CRs to both Grenade (to add an 
‘update-barbican’ script) and to Tempest (to add Barbican-centric resources to 
the javelin.py module).

As Barbican is not yet out of incubation, would such Grenade and Tempest CRs 
need to wait until we are out of incubation?  If we do have to wait, is anyone 
aware of an alternative method of such upgrade testing without need to submit 
changes to Grenade and Tempest (similar to the DevStack gate hook back to the 
project)?

[1] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/grenade-upgrade-testing.html
[2] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/juno/grenade-resource-survivability.html

Thanks in advance!,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-09 Thread Jay Pipes

On 02/09/2015 01:02 PM, Attila Fazekas wrote:

I do not see why not to use `FOR UPDATE` even with multi-writer, or
whether the retry/swap way really solves anything here.

snip

Am I missed something ?


Yes. Galera does not replicate the (internal to InnoDB) row-level locks 
that are needed to support SELECT FOR UPDATE statements across multiple 
cluster nodes.


https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ
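
For anyone who has not hit this first-hand, a minimal two-session sketch of
what that means in practice (table and column names are made up, and the
exact error text varies by Galera/MySQL version):

    -- session A, node 1
    BEGIN;
    SELECT * FROM instances WHERE uuid = 'abc' FOR UPDATE;  -- row lock stays local to node 1

    -- session B, node 2 is NOT blocked, because node 1's lock is never replicated
    BEGIN;
    UPDATE instances SET vm_state = 'active' WHERE uuid = 'abc';
    COMMIT;  -- replicates and certifies cluster-wide first

    -- back on node 1, session A is brute-force aborted when the conflicting
    -- write set is applied; its COMMIT (or next statement) fails with a
    -- deadlock error (1213) and has to be retried
    COMMIT;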

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Joe Gordon
On Mon, Feb 9, 2015 at 3:10 PM, Alan Pevec ape...@gmail.com wrote:

   Tracking etherpad:
   https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015

 BTW there is a tracking etherpad updated by
 https://wiki.openstack.org/wiki/StableBranch#Stable_branch_champions
 https://etherpad.openstack.org/p/stable-tracker
 linked in https://wiki.openstack.org/wiki/StableBranch#Gate_Status and
 announced on this list
 http://lists.openstack.org/pipermail/openstack-dev/2015-January/05.html

 From crossed items in Recently closed section you can see that
 branch champions have been busy.


There are two main audiences for stable branches:

* Downstream consumers.
* Upstream developers working on master who need a working stable branch.

I cannot comment on how well the first group is being supported. But as a
member of the second group, I am constantly frustrated by how frequently
broken stable branches ruin my day.


  You are missing the fact that a bunch of us (Matt Treinish, myself and
  others) are frustrated by the fact that we end up fixing stable branches
  whenever they break because we touch tempest, grenade and other projects
  that require working stable branches. But we do not want to be working on
  stable branches ourselves.  I begrudgingly stepped up to work on pinning
 all
  requirements on stable branches, to reduce the number of times stable
  branches break and ruin my day. But my plan to cap dependencies has been
  delayed several times by stable branches breaking again and again, along
  with unwinding undesired behaviors in our testing harnesses.
 
  Most recently, stable/juno grenade broke on February 4th (due to the
 release
  of tempest-lib 0.2.0). This caused bug

  So that's a change in tooling, not the stable branch itself. The idea when 15
  months of support for Icehouse was discussed was that branchless Tempest would
  make it easier, but now it turns out that's making both the tooling and the
  stable branch unhappy.


I don't think it's reasonable to assume maintaining the stable branches
excludes actively supporting and improving our testing harness and related
tooling. Our tooling is constantly changing, and supporting stable branches
means working on our tooling to make sure it's functioning as expected for
stable branches.

Also cutting a new release of a library is not a 'change in tooling'.


  What I expect to happen when issues like this arise is interested parties
  work together to fix things and be proactive and make stable testing more
  robust. Instead we currently have people who have no desire to work on
  stable branches maintaining them.

 At least parts of stable team have been pro-active (see above
 etherpad) but I guess we have a communication issue here: has
 anyone tried to contact stable branch champions (juno=Adam,
 icehouse=Ihar) and what exactly do you expect stable team to do?
 AFAICT these are all changes in tooling where stable-maint is not core
 (devstack, tempest)...



Where is it documented that Adam is the Juno branch champion and Ihar is
Icehouse's? I didn't see it anywhere in the wiki.

If something breaks in stable/juno and grenade on master seizes up, what
should we do? When issues are blocking development we should not have to
wait for any one person to respond -- single points of failure are bad. So
I don't think 'has anyone tried to contact us' is the right question to
ask. A better question to ask is 'have stable branches recently prevented
development?'

So who should I contact to help me freeze all stable/* dependencies? Or
better yet, someone to drive the effort instead.


 BTW Icehouse 2014.1.4 was planned[*] for Feb 19 with freeze starting
 on Feb 12, I'll delay it for now until we sort the current situation
 out.


 Cheers,
 Alan


 [*]
 https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Ficehouse_releases

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila]Question about gateway-mediated-with-ganesha

2015-02-09 Thread Li, Chen
Hi list,

I'm trying to understand how Manila uses NFS-Ganesha, and I hope to figure out 
what I need to do to use it once all the patches have been merged (only one patch 
is still under review, right?).

I have read:
https://wiki.openstack.org/wiki/Manila/Networking/Gateway_mediated
https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha

From the documents, it is said that multi-tenancy would be supported with Ganesha:
And later the Ganesha core would be extended to use the infrastructure used by 
generic driver to provide network separated multi-tenancy. The core would 
manage Ganesha service running in the service VMs, and the VMs themselves that 
reside in share networks.


=> It is said: "extended to use the infrastructure used by generic driver to 
provide network separated multi-tenancy".
So, when a user creates a share, a VM (share server) would be created to run 
the Ganesha server.

=> I assume this VM should connect to 2 networks: the user's share network and 
the network where the GlusterFS cluster is running.

But in the generic driver, a Manila service network is created at the beginning.
When a user creates a share, a subnet is created in the Manila service network 
corresponding to that user's share network.
This means every VM (share server) the generic driver has created lives in a 
different subnet, so they're not able to connect to each other.

If my understanding here is correct, the VMs running Ganesha live in 
different subnets too.

=> Here is my question:
How will the VMs (share servers) running Ganesha be able to connect to the 
single GlusterFS cluster?

Looking forward to hear from you.

Thanks.
-chen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] release request for python-novaclient

2015-02-09 Thread melanie witt
On Feb 9, 2015, at 19:55, Michael Still mi...@stillhq.com wrote:

 The previous policy is that we do a release when requested or when a
 critical bug fix merges. I don't see any critical fixes awaiting
 release, but I am not opposed to a release.

That's right. I think the keystone v3 support is important and worth putting 
out there.

 The reason I didn't do this yesterday is that Joe wanted some time to
 pin the stable requirements, which I believe he is still working on.
 Let's give him some time unless this is urgent.

Yes, of course. I should have been clearer. I meant after that's done, we 
should do a release.

melanie (melwitt)






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Jay Lau
Steve,

So you mean we should focus on the docker and k8s schedulers? I am a bit
confused: why do we need to care about k8s? The k8s cluster is created by
heat, and once it is created, k8s has its own scheduler for
creating pods/services/rcs.

So it seems we only need to care about scheduling for the native docker and
ironic bays. Comments?

Thanks!

2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) std...@cisco.com:



   From: Joe Gordon joe.gord...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 9, 2015 at 6:41 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



 On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



 On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

 Adrian Otto wrote:
  [...]
  We have multiple options for solving this challenge. Here are a few:
 
  1) Cherry pick scheduler code from Nova, which already has a working a
 filter scheduler design.
  2) Integrate swarmd to leverage its scheduler[2].
  3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
 This is expected to happen about a year from now, possibly sooner.
  4) Write our own filter scheduler, inspired by Nova.
 
 I haven't looked enough into Swarm to answer that question myself, but
 how much would #2 tie Magnum to Docker containers ?
 
 There is value for Magnum to support other container engines / formats
 (think Rocket/Appc) in the long run, so we should avoid early design
 choices that would prevent such support in the future.

 Thierry,
 Magnum has an object type of a bay which represents the underlying cluster
 architecture used.  This could be kubernetes, raw docker, swarmd, or some
 future invention.  This way Magnum can grow independently of the
 underlying technology and provide a satisfactory user experience dealing
 with the chaos that is the container development world :)


  While I don't disagree with anything said here, this does sound a lot
 like https://xkcd.com/927/



 Andrew had suggested offering a unified standard user experience and
 API.  I think that matches the 927 comic pretty well.  I think we should
 offer each type of system using APIs that are similar in nature but that
 offer the native features of the system.  In other words, we will offer
 integration across the various container landscape with OpenStack.

  We should strive to be conservative and pragmatic in our systems support
 and only support container schedulers and container managers that have
 become strongly emergent systems.  At this point that is docker and
 kubernetes.  Mesos might fit that definition as well.  Swarmd and rocket
 are not yet strongly emergent, but they show promise of becoming so.  As a
 result, they are clearly systems we should be thinking about for our
 roadmap.  All of these systems present very similar operational models.

  At some point competition will choke off new system design placing an
 upper bound on the amount of systems we have to deal with.

  Regards
 -steve



 We will absolutely support relevant container technology, likely through
 new Bay formats (which are really just heat templates).

 Regards
 -steve

 
 --
 Thierry Carrez (ttx)
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly subteam status report

2015-02-09 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)

===

(As of Mon, 02 Feb 15:00 UTC)

Open: 127 (+5).

5 new (+1), 36 in progress (+4), 1 critical (0), 14 high (-2) and 7
incomplete (+1)


(As of Mon, 09 Feb 15:30 UTC)

Open: 133 (+6).

9 new (+4), 34 in progress (-2), 0 critical (-1), 17 high (+3) and 7
incomplete (0)

Drivers

===


IPA (jroll/JayF/JoshNang)

---

IPA broke the gate 2/6/2015; the method IPA was using to reload partitions
apparently doesn't work in the gate, so we replaced

partprobe with partx -u $device. We should dig deeper later to see why
partprobe didn't work. -JayF


iLO (wanyen)

---

Submitted FFE request for passing capabilities in the flavor to Ironic:
https://review.openstack.org/141012.
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056256.html


Several Ironic Kilo features, including secure boot, trusted boot, local
boot support with partition images, and support for multiple node
capabilities, depend on this feature.  It also has an impact on the iLO
driver’s hardware property introspection feature. The code changes to support
this spec in the Nova Ironic virt driver are very small: only 31 lines of code
(including comments) in nova/virt/ironic/patcher.py, and 22 lines of code
in test_patcher.py.


Still need review and approval for several specs (iLO node cleaning and
zapping specs, secure boot management interface, RAID driver interface,
in-band RAID configuration, get/set boot mode management interface, per
driver sensors, iLO health metrics)


iRMC (naohirot)



For the mid-cycle sprint in S.F. this week, the iRMC management driver code and
the iRMC deploy driver spec are ready for the core team's review and approval.


Toward kilo-3, the iRMC deploy driver code is awaiting the core team's review,
and testing of the code is making good progress.

Until next week,
--ruby

Apologies. Due to a technical glitch, reports for the previous 2 weeks are
unavailable.

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread John Griffith
On Mon, Feb 9, 2015 at 1:56 PM, Matthew Treinish mtrein...@kortar.org wrote:
 On Mon, Feb 09, 2015 at 01:24:34PM -0600, Matt Riedemann wrote:


 On 2/9/2015 12:23 PM, Joe Gordon wrote:
 
 On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
  
   There are at least two blocking bugs:
  
   1. https://bugs.launchpad.net/grenade/+bug/1419913
  
   Sounds like jogo is working a javelin fix for this. I'm not aware of
 a patch to review though.
 
 We need to stop trying to install tempest in the same env as stable/* code.
 
 I should be able to revise/respond to comments shortly.
 
 https://review.openstack.org/#/c/153080/
 
 https://review.openstack.org/#/c/153702/
 
 This is also blocking my effort to pin stable dependencies (Dean's
 devstack changes are needed before we can pin stable dependencies as well).
 
  
   2. https://bugs.launchpad.net/ceilometer/+bug/1419919
  
   I'm not sure yet what's going on with this one.
  

 Tracking etherpad:

 https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015


 So I think it's time we called the icehouse branch and marked it EOL. We
 originally conditioned the longer support window on extra people stepping
 forward to keep things working. I believe this latest issue is just the latest
 indication that this hasn't happened. Issue 1 listed above is being caused by
 the icehouse branch during upgrades. The fact that a stable release was pushed
 at the same time things were wedged on the juno branch is just the latest
 evidence to me that things aren't being maintained as they should be. Looking 
 at
 the #openstack-qa irc log from today or the etherpad about trying to sort this
 issue should be an indication that no one has stepped up to help with the
 maintenance and it shows given the poor state of the branch.

 If I'm not mistaken with our original support window lengths Icehouse would be
 EOL'd around now. So it's time we stopped pretending we'll be maintaining this
 branch for several more months and just go through the normal EOL procedure.


Was this serious?  I mean, we just say: 'sorry, yes, we said support
until X, but now it's hard so we're going to drop it'?

Tell me I'm missing something here?

 -Matt Treinish

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-02-09 Thread Jay Pipes

On 01/20/2015 10:54 AM, Brian Rosmaita wrote:

From: Kevin L. Mitchell [kevin.mitch...@rackspace.com]
Sent: Monday, January 19, 2015 4:54 PM


When we look at consistency, we look at everything else in OpenStack.
 From the standpoint of the nova API (with which I am the most familiar),
I am not aware of any property that is ever omitted from any payload
without versioning coming in to the picture, even if its value is null.
Thus, I would argue that we should encourage the first situation, where
all properties are included, even if their value is null.


That is not the case for the Images API v2:

An image is always guaranteed to have the following attributes: id,
status, visibility, protected, tags, created_at, file and self. The other
attributes defined in the image schema below are guaranteed to
be defined, but is only returned with an image entity if they have
been explicitly set. [1]


This was a mistake, IMHO. Having entirely extensible schemas means that 
there is little guaranteed consistency across implementations of the API.


This is the same reason that I think API extensions are an abomination.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-02-09 Thread Jay Pipes

On 01/19/2015 02:55 PM, Douglas Mendizabal wrote:

Hi API WG,

I’m curious about something that came up during a bug discussion in one of the 
Barbican weekly meetings.  The question is about optional properties in an 
entity.  e.g. We have a Secret entity that has some properties that are 
optional, such as the Secret’s name.  We were split on what the best approach 
for returning the secret representation would be when an optional property is 
not set.

In one camp, some developers would like to see the properties returned no 
matter what.  That is to say, the Secret dictionary would include a key for 
“name” set to null every single time.  i.e.

{
   …
   “secret”: {
 “name”: null,
 …
   }
   ...
}

In the other camp, some developers would like to see optional properties 
omitted if they were not set by the user.
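
To make the contrast concrete, in that case the same entity would simply come 
back without the key at all, e.g.:

{
   ...
   "secret": {
     ...
   }
   ...
}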

The advantage of always returning the property is that the response is easier 
to parse, since you don’t have to check for the existence of the optional keys. 
 The argument against it is that it makes the API more rigid, and clients more 
fragile.

I was wondering what the API Working Group’s thoughts are on this?


My opinion is that attributes should always be in the returned result 
(that corresponds to a particular version of an API), set to null when 
there is no value set.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Matthew Treinish
On Mon, Feb 09, 2015 at 01:24:34PM -0600, Matt Riedemann wrote:
 
 
 On 2/9/2015 12:23 PM, Joe Gordon wrote:
 
 On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
  
   There are at least two blocking bugs:
  
   1. https://bugs.launchpad.net/grenade/+bug/1419913
  
   Sounds like jogo is working a javelin fix for this. I'm not aware of
 a patch to review though.
 
 We need to stop trying to install tempest in the same env as stable/* code.
 
 I should be able to revise/respond to comments shortly.
 
 https://review.openstack.org/#/c/153080/
 
 https://review.openstack.org/#/c/153702/
 
 This is also blocking my effort to pin stable dependencies (Dean's
 devstack changes are needed before we can pin stable dependencies as well).
 
  
   2. https://bugs.launchpad.net/ceilometer/+bug/1419919
  
   I'm not sure yet what's going on with this one.
  
 
 Tracking etherpad:
 
 https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015


So I think it's time we called the icehouse branch and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the latest
indication that this hasn't happened. Issue 1 listed above is being caused by
the icehouse branch during upgrades. The fact that a stable release was pushed
at the same time things were wedged on the juno branch is just the latest
evidence to me that things aren't being maintained as they should be. Looking at
the #openstack-qa irc log from today or the etherpad about trying to sort this
issue should be an indication that no one has stepped up to help with the
maintenance and it shows given the poor state of the branch.

If I'm not mistaken with our original support window lengths Icehouse would be
EOL'd around now. So it's time we stopped pretending we'll be maintaining this
branch for several more months and just go through the normal EOL procedure.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Doug Hellmann


On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote:
 Hello,
 I want to use oslo.messaging.RPCClient.call() to invoke a
 method on multiple servers, but not all of them. Can this
 be done and how? I read the code documentation (client.py
 and target.py). I only saw either the call used for one
 server at a time, or for all of them using the fanout
 param. Neither options is exactly what I want.
 Any response/explanation would be highly appreciated.

This isn't a pattern that has come up before. Before we talk about
adding it, I'd like to understand more about your use case. How do you
know which servers should receive the call, for example? And are you
actually calling and expecting a response, or do you just need to send a
message to those servers?

Doug
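
For what it's worth, with the current API the closest approximation is to
prepare a per-server call for each member of the subset and loop over them.
A rough sketch (the transport configuration, topic and host names below are
made up for the example):

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_transport(cfg.CONF)  # assumes the transport opts are configured
    target = messaging.Target(topic='myservice', version='1.0')
    client = messaging.RPCClient(transport, target)

    replies = {}
    for host in ('node-1', 'node-2'):  # the known subset of servers
        cctxt = client.prepare(server=host, timeout=30)
        # call() blocks for each server's reply; use cast() if no reply is needed
        replies[host] = cctxt.call({}, 'report_state')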

 
 Regards,
 Julie Gravel
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nova api.fault notification isn't collected by ceilometer

2015-02-09 Thread gordon chung
 In nova api, a nova api.fault notification will be send out when the when 
 there
 is en error.
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n119
 but i couldn't find where they are  processed  in ceilometer,
 an error notification can be very desired to be collected, do we have plan to
 add this, shall i need a bp to do that ?

there's a patch for review to store error info:
https://review.openstack.org/#/c/153362/

cheers,
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-09 Thread Jay Pipes

On 02/09/2015 03:10 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2015-02-09 10:15:10 -0800:

On 02/09/2015 01:02 PM, Attila Fazekas wrote:

I do not see why not to use `FOR UPDATE` even with multi-writer, or
whether the retry/swap way really solves anything here.

snip

Am I missed something ?


Yes. Galera does not replicate the (internal to InnoDB) row-level locks
that are needed to support SELECT FOR UPDATE statements across multiple
cluster nodes.

https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ


Attila acknowledged that. What Attila was saying was that by using it
with Galera, the box that is doing the FOR UPDATE locks will simply fail
upon commit because a conflicting commit has already happened and arrived
from the node that accepted the write. Further what Attila is saying is
that this means there is not such an obvious advantage to the CAS method,
since the rollback and the # updated rows == 0 are effectively equivalent
at this point, seeing as the prior commit has already arrived and thus
will not need to wait to fail certification and be rolled back.


No, that is not correct. In the case of the CAS technique, the frequency 
of rollbacks due to certification failure is demonstrably less than when 
using SELECT FOR UPDATE and relying on the certification timeout error 
to signal a deadlock.



I am not entirely certain that is true though, as I think what will
happen in sequential order is:

writer1: UPDATE books SET genre = 'Scifi' WHERE genre = 'sciencefiction';
writer1: -- send in-progress update to cluster
writer2: SELECT FOR UPDATE books WHERE id=3;
writer1: COMMIT
writer1: -- try to certify commit in cluster
** Here is where I stop knowing for sure what happens **
writer2: certifies writer1's transaction or blocks?


It will certify writer1's transaction. It will only block another thread 
hitting writer2 requesting write locks or write-intent read locks on the 
same records.



writer2: UPDATE books SET genre = 'sciencefiction' WHERE id=3;
writer2: COMMIT -- One of them is rolled back.

So, at that point where I'm not sure (please some Galera expert tell
me):

If what happens is as I suggest, writer1's transaction is certified,
then that just means the lock sticks around blocking stuff on writer2,
but that the data is updated and it is certain that writer2's commit will
be rolled back. However, if it blocks waiting on the lock to resolve,
then I'm at a loss to determine which transaction would be rolled back,
but I am thinking that it makes sense that the transaction from writer2
would be rolled back, because the commit is later.


That is correct. writer2's transaction would be rolled back. The 
difference is that the CAS method would NOT trigger a ROLLBACK. It would 
instead return 0 rows affected, because the UPDATE statement would 
instead look like this:


UPDATE books SET genre = 'sciencefiction' WHERE id = 3 AND genre = 'SciFi';

And the return of 0 rows affected would trigger a simple retry of the 
read and then update attempt on writer2 instead of dealing with ROLLBACK 
semantics on the transaction.


Note that in the CAS method, the SELECT statement and the UPDATE are in 
completely different transactions. This is a very important thing to 
keep in mind.
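
As a concrete sketch of that read-then-guarded-update loop (SQLAlchemy is
used here purely for illustration; the table, connection URL and retry bound
are invented for the example and are not Nova code):

    import sqlalchemy as sa

    engine = sa.create_engine("mysql+pymysql://user:pw@galera-node-1/demo")
    MAX_RETRIES = 5

    def read_genre(book_id):
        # The read runs in its own short transaction; no row locks linger afterwards.
        with engine.begin() as conn:
            return conn.execute(
                sa.text("SELECT genre FROM books WHERE id = :id"),
                {"id": book_id}).scalar_one()

    genre = read_genre(3)
    for _ in range(MAX_RETRIES):
        # Compare-and-swap: the UPDATE only matches if nobody changed the row
        # since we read it, so no SELECT FOR UPDATE lock is ever taken.
        with engine.begin() as conn:
            updated = conn.execute(
                sa.text("UPDATE books SET genre = :new "
                        "WHERE id = :id AND genre = :old"),
                {"new": "sciencefiction", "id": 3, "old": genre}).rowcount
        if updated == 1:
            break  # success: no ROLLBACK, no cross-node lock wait
        genre = read_genre(3)  # someone else won the race; re-read and retry
    else:
        raise RuntimeError("gave up after %d attempts" % MAX_RETRIES)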



All this to say that usually the reason for SELECT FOR UPDATE is not
to only do an update (the transactional semantics handle that), but
also to prevent the old row from being seen again, which, as Jay says,
it cannot do.  So I believe you are both correct:

* Attila, yes I think you're right that CAS is not any more efficient
at replacing SELECT FOR UPDATE from a blocking standpoint.


It is more efficient because there are far fewer ROLLBACKs of 
transactions occurring in the system.


If you look at a slow query log (with a 0 slow query time) for a MySQL 
Galera server in a multi-write cluster during a run of Tempest or Rally, 
you will notice that the number of ROLLBACK statements is extraordinary. 
AFAICR, when Peter Boros and I benchmarked a Rally launch and delete 10K 
VM run, we saw nearly 11% of *total* queries executed against the server 
were ROLLBACKs. This, in my opinion, is the main reason that the CAS 
method will show as more efficient.



* Jay, yes I think you're right that SELECT FOR UPDATE is not the right
thing to use to do such reads, because one is relying on locks that are
meaningless on a Galera cluster.

Where I think the CAS ends up being the preferred method for this sort
of thing is where one consideres that it won't hold a meaningless lock
while the transaction is completed and then rolled back.


CAS is preferred because it is measurably faster and more 
obstruction-free than SELECT FOR UPDATE. A colleague of mine is almost 
ready to publish documentation showing a benchmark of this that shows 
nearly a 100% decrease in total amount of lock/wait time using CAS 
versus waiting for the coarser-level certification timeout to retry the 
transactions. As 

Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Matt Riedemann



On 2/9/2015 2:56 PM, Matthew Treinish wrote:

On Mon, Feb 09, 2015 at 01:24:34PM -0600, Matt Riedemann wrote:



On 2/9/2015 12:23 PM, Joe Gordon wrote:


On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


There are at least two blocking bugs:

1. https://bugs.launchpad.net/grenade/+bug/1419913

Sounds like jogo is working a javelin fix for this. I'm not aware of

a patch to review though.

We need to stop trying to install tempest in the same env as stable/* code.

I should be able to revise/respond to comments shortly.

https://review.openstack.org/#/c/153080/

https://review.openstack.org/#/c/153702/

This is also blocking my effort to pin stable dependencies (Dean's
devstack changes are needed before we can pin stable dependencies as well).



2. https://bugs.launchpad.net/ceilometer/+bug/1419919

I'm not sure yet what's going on with this one.



Tracking etherpad:

https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015



So I think it's time we called the icehouse branch and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the latest
indication that this hasn't happened. Issue 1 listed above is being caused by
the icehouse branch during upgrades. The fact that a stable release was pushed
at the same time things were wedged on the juno branch is just the latest
evidence to me that things aren't being maintained as they should be. Looking at
the #openstack-qa irc log from today or the etherpad about trying to sort this
issue should be an indication that no one has stepped up to help with the
maintenance and it shows given the poor state of the branch.

If I'm not mistaken with our original support window lengths Icehouse would be
EOL'd around now. So it's time we stopped pretending we'll be maintaining this
branch for several more months and just go through the normal EOL procedure.

-Matt Treinish



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Until we've figured out what's going on and how to unwind it (maybe it's 
just jogo's changes that are blocked right now due to wedged 
stable/icehouse), I'd -1 this.  I think branchless tempest is definitely 
running into some issues here with requirements being uncapped, and 
that's throwing a huge wrench into things. EOL'ing stable/icehouse at 
this point kind of gives an easy out without first having the full picture of 
what's busted and why we can't work our way out of it.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status fleild for environments

2015-02-09 Thread Przemyslaw Kaminski

On 02/07/2015 12:09 PM, Dmitriy Shulyak wrote:
 
 On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:
 
 I want to discuss possibility to add network verification status 
 field for environments. There are 2 reasons for this:
 
 1) One of the most frequent reasons of deployment failure is wrong 
 network configuration. In the current UI network verification is 
 completely optional and sometimes users are even unaware that this 
 feature exists. We can warn the user before the start of
 deployment if the network check failed or wasn't performed.
 
 2) Currently network verification status is partially tracked by 
 status of the last network verification task. Sometimes its
 results become stale, and the UI removes the task. There are a few
 cases when the UI does this, like changing network settings, adding
 a new node, etc (you can grep removeFinishedNetworkTasks to see
 all the cases). This definitely should be done on backend.
 
 
 
 An additional field on the cluster like network_check_status? When will it
 be populated with a result? I think it will simply duplicate
 task.status with the network_verify name.
 
 Network check is not a single task. Right now there are two, and 
 probably we will need one more right in this release (setup public 
 network and ping gateway). And AFAIK there is a need for other pre 
 deployment verifications.
 
 I would prefer to make a separate tab with pre_deployment
 verifications, similar to ostf. But if you guys want to make something
 right now, compute the status of network verification based on the task
 with name network_verify; if you deleted this task from the UI (for
 some reason) just add a warning that verification wasn't performed. If
 there is more than one task with network_verify for any given
 cluster, pick the latest one.

Well, there are some problems with this solution:
1. No 'pick latest one with filtering to network_verify' handler is
available currently.
2. Tasks are ephemeral entities -- they get deleted here and there.
Look at nailgun/task/manager.py for example -- lines 83-88 or lines
108-120 and others
3. Just having network verification status as ready is NOT enough.
From the UI you can fire off network verification for unsaved changes.
Some JSON request is made, the network configuration is validated by tasks,
and an RPC call is made returning that all is OK, for example. But if you
haven't saved your changes then in fact you haven't verified your
current configuration, just some other one. So in this case task
status 'ready' doesn't mean that the current cluster config is valid. What
do you propose in this case? Fail the task on purpose? I only see a
solution to this by introducing a new flag, and network_check_status
seems to be an appropriate one.
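
To make the proposal concrete, the flag I have in mind is roughly the
following (an illustrative sketch only, not the actual nailgun model or
migration):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Cluster(Base):
        __tablename__ = 'clusters'
        id = sa.Column(sa.Integer, primary_key=True)
        # ... existing columns ...
        # New flag, reset to 'not_performed' whenever the saved network
        # configuration changes:
        network_check_status = sa.Column(
            sa.Enum('not_performed', 'ready', 'error',
                    name='cluster_network_check_statuses'),
            nullable=False,
            server_default='not_performed')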

P.

 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] GRE Performance Problem, Multi-queue and Bridge MTUs

2015-02-09 Thread Ihar Hrachyshka

On 02/06/2015 06:47 PM, Eren Türkay wrote:
 Hello,
 
 I was having serious network issues using GRE and I have been
 tracking it for a few weeks. Finally, I solved the issue but it
 needs a proper fix. To summarize, I need a way to set MTU settings
 of br-int and br-tun interfaces, enable MQ support in libvirt, and
 run ethtool -L eth0 combined N command in VMs.
 
 The detailed bug report and the explanation for the issue is here: 
 https://bugs.launchpad.net/neutron/+bug/1419069
 
 There is a blueprint [0] on MQ support but it wasn't accepted.
 
 What can we do about this performance problem and a possible fix?
 Most people complain about GRE/VXLAN performance and the solution
 appears to be working. I created a bug report and let the list know
 to work on a better, flexible, and working solution. Please ignore
 the ugly patches as they are just proof of concept.
 
 Regards, Eren
 
 [0]
 https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue

 
I guess the MTU part should be fixed in Kilo as part of:
https://github.com/openstack/neutron-specs/blob/master/specs/kilo/mtu-selection-and-advertisement.rst

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] BUG in OpenVSwitch Version ovs-vswitchd (Open vSwitch) 1.4.6

2015-02-09 Thread Ihar Hrachyshka

On 02/07/2015 05:09 AM, masoom alam wrote:
 Hi everyone,
 
 Can anyone spot why the following bug appears in OpenStack,
 leaving all Neutron services in an unusable state?
 
 To give you an idea of what I was trying:
 
 I tried to assign the IP 173.39.237.0 to a VM, with the CIDR
 173.39.236.0/23; however, OVS gave an 
 error and now all the neutron services are completely unusable
 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback
 (most recent call last): 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,

 
line 1197, in rpc_loop
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent port_info
 = self.scan_ports(ports, updated_ports_copy) 2015-02-04
 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,

 
line 821, in scan_ports
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent 
 updated_ports.update(self.check_changed_vlans(registered_ports)) 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,

 
line 848, in check_changed_vlans
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent port_tags
 = self.int_br.get_port_tag_dict() 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/agent/linux/ovs_lib.py, line 394, in 
 get_port_tag_dict 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent result = 
 self.run_vsctl(args, check_error=True) 2015-02-04 05:25:06.993
 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/agent/linux/ovs_lib.py, line 67, in
 run_vsctl 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
 utils.execute(full_args, root_helper=self.root_helper) 2015-02-04
 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
 /opt/stack/neutron/neutron/agent/linux/utils.py, line 75, in
 execute 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent raise 
 RuntimeError(m) 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError: 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Command:
 ['sudo', '/usr/local/bin/neutron-rootwrap',
 '/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10',
 '--format=json', '--', '--columns=name,tag', 'list', 'Port'] 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 1 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: '' 
 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr:
 'Traceback (most recent call last):\n  File
 /usr/local/bin/neutron-rootwrap, line 4, in module\n 
 __import__(\'pkg_resources\').require(\'neutron==2013.2.4.dev32\')\n

 
File
 /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
 line 3018, in module\nworking_set =
 WorkingSet._build_master()\n  File 
 /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
 line 614, in _build_master\nreturn 
 cls._build_from_requirements(__requires__)\n  File 
 /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
 line 627, in _build_from_requirements\ndists =
 ws.resolve(reqs, Environment())\n  File 
 /usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py,
 line 805, in resolve\nraise 
 DistributionNotFound(req)\npkg_resources.DistributionNotFound: 
 alembic0.6.4,=0.4.1\n' 2015-02-04 05:25:06.993 TRACE 
 neutron.plugins.openvswitch.agent.ovs_neutron_agent

It seems you're using Neutron Havana. It's not supported anymore.
Also, I would recommend upgrading your openvswitch version, since
1.4.6 sounds very old to me. There have been huge performance optimizations
in recent versions of OVS.

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Thierry Carrez
Adrian Otto wrote:
 [...]
 We have multiple options for solving this challenge. Here are a few:
 
 1) Cherry pick scheduler code from Nova, which already has a working a filter 
 scheduler design. 
 2) Integrate swarmd to leverage its scheduler[2]. 
 3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova. This is 
 expected to happen about a year from now, possibly sooner.
 4) Write our own filter scheduler, inspired by Nova.

I haven't looked enough into Swarm to answer that question myself, but
how much would #2 tie Magnum to Docker containers ?

There is value for Magnum to support other container engines / formats
(think Rocket/Appc) in the long run, so we should avoid early design
choices that would prevent such support in the future.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-09 Thread Nilesh P Bhosale
Adding an ability to Add/Remove existing volumes to/from CG looks fine. 
But, it does not help the use-case where one would want to directly delete 
a volume from CG.
Why do we force him to first remove a volume from CG and then delete?
As CG goes along with replication and backends creating a separate pool 
per CG, removing a volume from CG, just to be able to delete it in the 
next step, may be an unnecessarily expensive operation.

I think, we can allow removing volume from a CG with something like 
'--force' option, so that user consciously makes that decision.

In fact, I think whatever decision user takes, even to delete a normal 
volume, is treated as his conscious decision.

Thanks,
Nilesh



From:   yang, xing xing.y...@emc.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   02/07/2015 01:54 AM
Subject:Re: [openstack-dev] [cinder] Why not allow deleting volume 
from a CG ?



As Mike said, allowing deletion of a single volume from a CG is error 
prone.  User could be deleting a single volume without knowing that it is 
part of a CG.  The new Modify CG feature for Kilo allows you to remove a 
volume from CG and you can delete it as a separate operation.  When user 
removes a volume from a CG, at least he/she is making a conscious decision 
knowing that the volume is currently part of the CG.

Thanks,
Xing


-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Friday, February 06, 2015 1:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Why not allow deleting volume from a 
CG ?

On 15:51 Fri 06 Feb , Nilesh P Bhosale wrote:
snip
 I understand this is as per design, but curious to understand logic 
 behind this.
snip
 Why not allow deletion of volumes form the CG? at least when there are 
 no dependent snapshots.

From the review [1], this is because allowing a volume that's part of a 
consistency group to be deleted is error prone for both the user and the 
storage backend. It assumes the storage backend will register the volume 
not being part of the consistency group. It also assumes the user is 
 keeping track of what's part of a consistency group.

 With the current implementation, only way to delete the volume is to 
 delete the complete CG, deleting all the volumes in that, which I feel 
 is not right.

The plan in Kilo is to allow adding/removing volumes from a consistency 
group [2][3]. The user now has to explicitly remove the volume from a 
consistency group, which in my opinion is better than implicit with 
delete.

I'm open to rediscussing this issue with vendors and seeing about making 
sure things in the backend to be cleaned up properly, but I think this 
solution helps prevent the issue for both users and backends.

[1] - https://review.openstack.org/#/c/149095/
[2] - 
https://blueprints.launchpad.net/cinder/+spec/consistency-groups-kilo-update

[3] - https://review.openstack.org/#/c/144561/

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Why not allow deleting volume from a CG ?

2015-02-09 Thread Duncan Thomas
On 9 February 2015 at 13:04, Nilesh P Bhosale nilesh.bhos...@in.ibm.com
wrote:

 Adding an ability to Add/Remove existing volumes to/from CG looks fine.
 But, it does not help the use-case where one would want to directly delete
 a volume from CG.
 Why do we force him to first remove a volume from CG and then delete?


Because the risk of a user accidentally deleting a volume that is part of a
CG and making their CG useless was considered greater than the cost of
having to make two API calls. This was discussed during the design phase of
CGs, at length.
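
For reference, once the Kilo modify-CG work lands, the two calls in question
would look roughly like this (the exact client syntax may differ from what
finally merges):

    cinder consisgroup-update --remove-volumes <volume-id> <consistency-group-id>
    cinder delete <volume-id>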

Many things are possible; we are trying to choose a subset that:

- Can be implemented on as many different backends as possible
- Doesn't limit backend architectures from doing novel new things
- Allows a rich tenant experience
- Guides a tenant away from operating in a high-risk manner

--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status fleild for environments

2015-02-09 Thread Przemyslaw Kaminski

On 02/09/2015 12:06 PM, Dmitriy Shulyak wrote:
 
 On Mon, Feb 9, 2015 at 12:51 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 Well, there are some problems with this solution: 1. No 'pick
 latest one with filtering to network_verify' handler is available
 currently.
 
 
 Well, I think there should be a finished_at field anyway, so why not
 add it for this purpose?

So you're suggesting to add another column and modify all tasks for
this one feature?

 
 2. Tasks are ephemeral entities -- they get deleted here and
 there. Look at nailgun/task/manager.py for example -- lines 83-88
 or lines 108-120 and others
 
 
 I don't actually recall what the reason was to delete them, but if
 it happens imo it is ok to show right now that network verification
 wasn't performed.

Is this how one builds predictable and easy-to-understand software?
Sometimes we'll say that verification is OK, other times that it wasn't
performed?

 
 3. Just having network verification status as ready is NOT enough. 
 From the UI you can fire off network verification for unsaved
 changes. Some JSON request is made, the network configuration is validated
 by tasks and an RPC call is made returning that all is OK for example. But
 if you haven't saved your changes then in fact you haven't verified
 your current configuration, just some other one. So in this case
 task status 'ready' doesn't mean that current cluster config is
 valid. What do you propose in this case? Fail the task on purpose?
 I only see a
 
 solution to this by introducing a new flag and
 network_check_status seems to be an appropriate one.
 
 
 My point is that it has very limited UX. Right now the network check is: -
 l2 with vlans verification - dhcp verification
 
 When we have time we will add: - multicast routing
 verification - public gateway Also there is more stuff that
 different users have been asking about.
 
 Then I know that the vmware team also wants to implement
 pre_deployment verifications.
 
 So what will this net_check_status refer to at that point?

Issue #3 I described is still valid -- what is your solution in this case?

If someone implements pre-deployment network verifications and doesn't
add the procedures to the network verification task, then really no
solution can prevent the user from deploying a cluster with some invalid
configuration. That's not an issue with providing info on whether
network checks were made or not.

As far as I understand, there's one supertask 'verify_networks'
(called in nailgun/task/manager.py line 751). It spawns other tasks
that do verification. When all is OK verify_networks calls RPC's
'verify_networks_resp' method and returns a 'ready' status, and at that
point I can inject code to also set the DB column on the cluster saying
that network verification was OK for the saved configuration. Adding
other tasks should in no way affect this behavior since they're just
subtasks of this task -- or am I wrong?
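
To make the suggestion concrete, here is a rough sketch of what that
injection point could look like. This is illustrative only -- the model,
column and helper names below are hypothetical, not the actual nailgun code:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Cluster(Base):
        __tablename__ = 'clusters'
        id = sa.Column(sa.Integer, primary_key=True)
        # Hypothetical column: result of the last network verification run
        # against the *saved* configuration; it survives task deletion.
        network_check_status = sa.Column(sa.String(16),
                                         default='not_performed')

    def on_verify_networks_ready(session, cluster_id):
        """Would be called from the verify_networks receiver once the
        supertask reports 'ready' for the saved network configuration."""
        cluster = session.query(Cluster).get(cluster_id)
        cluster.network_check_status = 'verified'
        session.commit()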

P.

 
 
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Dmitriy Shulyak
On Mon, Feb 9, 2015 at 12:51 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 Well, there are some problems with this solution:
 1. No 'pick latest one with filtering to network_verify' handler is
 available currently.


Well, I think there should be a finished_at field anyway, so why not add it
for this purpose?

 2. Tasks are ephemeral entities -- they get deleted here and there.
 Look at nailgun/task/manager.py for example -- lines 83-88 or lines
 108-120 and others


I don't actually recall what the reason to delete them was, but if it
happens, IMO it is OK to show right now
that network verification wasn't performed.

 3. Just having network verification status as ready is NOT enough.
 From the UI you can fire off network verification for unsaved changes.
 Some JSON request is made, network configuration validated by tasks
 and RPC call made returing that all is OK for example. But if you
 haven't saved your changes then in fact you haven't verified your
 current configuration, just some other one. So in this case task
 status 'ready' doesn't mean that current cluster config is valid. What
 do you propose in this case? Fail the task on purpose? I only see a

solution to this by introducting a new flag and network_check_status
 seems to be an appropriate one.


My point is that it has very limited UX. Right now the network check is:
- L2 with VLANs verification
- DHCP verification

When we have time we will add:
- multicast routing verification
- public gateway verification
There is also more stuff that different users have been asking about.

Then, I know that the vmware team also wants to implement pre_deployment
verifications.

So what will this net_check_status refer to at that point?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Theory of Testing Cross Project Spec

2015-02-09 Thread Sean Dague
Culminating many many email threads and discussions over the last year,
I'm trying to boil down the overall OpenStack philosophy of testing into
a single cross project spec - https://review.openstack.org/#/c/150653

This is an attempt to both provide a current baseline of where we are,
and a future view of where we should get to. This is extremely important
as the current co-gate everything model does not scale beyond where we
currently stand with number of projects involved.

Comments are welcomed, please keep typos / grammar as '0' comments to
separate them from -1 comments. Also, I ask any -1s to be extremely
clear about the core concern of the -1 so we can figure out how to make
progress.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Steven Dake (stdake)


On 2/9/15, 3:02 AM, Thierry Carrez thie...@openstack.org wrote:

Adrian Otto wrote:
 [...]
 We have multiple options for solving this challenge. Here are a few:
 
 1) Cherry pick scheduler code from Nova, which already has a working a
filter scheduler design.
 2) Integrate swarmd to leverage its scheduler[2].
 3) Wait for the Gantt, when Nova Scheduler to be moved out of Nova.
This is expected to happen about a year from now, possibly sooner.
 4) Write our own filter scheduler, inspired by Nova.

I haven't looked enough into Swarm to answer that question myself, but
how much would #2 tie Magnum to Docker containers ?

There is value for Magnum to support other container engines / formats
(think Rocket/Appc) in the long run, so we should avoid early design
choices that would prevent such support in the future.

Thierry,
Magnum has an object type of a bay which represents the underlying cluster
architecture used.  This could be kubernetes, raw docker, swarmd, or some
future invention.  This way Magnum can grow independently of the
underlying technology and provide a satisfactory user experience dealing
with the chaos that is the container development world :)

We will absolutely support relevant container technology, likely through
new Bay formats (which are really just heat templates).

Regards
-steve


-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Vitaly Kramskikh
Hi, my opinion on this:

Yes, it is technically possible to implement this feature using only tasks.
This may require adding a new field to tasks to distinguish whether a task was
run for saved or unsaved changes. But I'm against this approach because:

1) It will require handling more than 1 task of a single type, which
automatically increases the complexity of the code. We will need 2
tasks in the following case: there is 1 task for unsaved data and 1 for
saved data. We need to show the result of the first task on the network tab
and use the status of the second task to determine whether the network check
was performed.

2) We have 2 similar tasks: deploying a cluster and setting up a
release. Both the cluster and release models have a status field which
represents the status of these entities, so we don't perform complex checks
with tasks. So I think the same approach should be used for the network
verification status.

As for tasks deletion, there are 2 reasons for this:

1) If we don't delete old tasks, it increases the traffic between the backend
and the UI. There is still no way to fetch the latest task or the 2 latest
tasks using our API.

2) We delete tasks manually when their results are not needed anymore or
become invalid. For example, when a user adds another node, we remove the
network check task as its result is not valid anymore. Yet another example -
when the user clicks the X button on the message with the deployment result,
we remove this task so it won't be shown anymore. If you want us not to delete
these tasks, please provide us with another way to cover these cases.

2015-02-09 15:51 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:



 On 02/09/2015 01:18 PM, Dmitriy Shulyak wrote:
 
  On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski
  pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:
 
  Well i think there should be finished_at field anyway, why not
  to add it for this purpose?
 
  So you're suggesting to add another column and modify all tasks
  for this one feature?
 
 
  Such things as time stamps should be on all tasks anyway.
 
 
  I dont actually recall what was the reason to delete them, but
  if it happens imo it is ok to show right now that network
  verification wasnt performed.
 
  Is this how one does predictible and easy to understand software?
  Sometimes we'll say that verification is OK, othertimes that it
  wasn't performed?
 
  In my opinion the questions that needs to be answered - what is
  the reason or event to remove verify_networks tasks history?
 
 
  3. Just having network verification status as ready is NOT
  enough. From the UI you can fire off network verification for
  unsaved changes. Some JSON request is made, network configuration
  validated by tasks and RPC call made returing that all is OK for
  example. But if you haven't saved your changes then in fact you
  haven't verified your current configuration, just some other one.
  So in this case task status 'ready' doesn't mean that current
  cluster config is valid. What do you propose in this case? Fail
  the task on purpose?
 
  Issue #3 I described is still valid -- what is your solution in
  this case?
 
  Ok, sorry. What do you think if in such case we will remove old
  tasks? It seems to me that is correct event in which old
  verify_networks is invalid anyway, and there is no point to store
  history.

 Well, not exactly. Configure networks, save settings, do network check
 all assume that all went fine. Now change one thing without saving,
 check settings, didn't pass but it doesn't affect the flag because
 that's some different configuration from the saved one. And your
 original cluster is OK still. So in this case user will have to yet
 again run the original check. The plus of the network_check_status
 column is actually you don't need to store any history -- task can be
 deleted or whatever and still last checked saved configuration
 matters. User can perform other checks 'for free' and is not required
 to rerun the working configuration checks.

 With data depending on tasks you actually have to store a lot of
 history because you need to keep last working saved configuration --
 otherwise user will have to rerun original configuration. So from
 usability point of view this is a worse solution.

 
 
  As far as I understand, there's one supertask 'verify_networks'
  (called in nailgu/task/manager.py line 751). It spawns other tasks
  that do verification. When all is OK verify_networks calls RPC's
  'verify_networks_resp' method and returns a 'ready' status and at
  that point I can inject code to also set the DB column in cluster
  saying that network verification was OK for the saved
  configuration. Adding other tasks should in no way affect this
  behavior since they're just subtasks of this task -- or am I
  wrong?
 
 
  It is not that smooth, but in general yes - it can be done when
  state of verify_networks is changed. But lets say we have
  some_settings_verify task? Would be it valid to add one more field

[openstack-dev] [magnum][heat] resourcegroup not behaving as expected during update

2015-02-09 Thread Steven Dake (stdake)
Hongbin and Lars,

I spoke with Steve Hardy from Heat, who has been deep into improving resource 
groups.  He indicated that the problem Hongbin saw, where the resource group 
kills all VMs and then restarts them on a stack update that changes the count 
of the resource group, is fixed in master and stable/juno.

See reviews:
https://review.openstack.org/#/c/141820/

More important is this review
https://review.openstack.org/#/c/131538/

This one implies a replacement policy of 'auto' is required for ports, or the 
VMs could potentially be ripped out.
A good thread about why that is needed:
http://lists.openstack.org/pipermail/openstack-dev/2014-October/049376.html

I think adding the replacement_policy:auto for ports should do the trick, but 
read through the thread and the last review to make your own conclusions.

Thanks shardy for providing all the info in this email (I am just documenting)

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature Freeze Exception Request (libvirt vhostuser vif driver)

2015-02-09 Thread Czesnowicz, Przemyslaw
Hi,

I would like to request FFE for vhostuser vif driver.

2 reviews : 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/libvirt-vif-vhost-user,n,z

BP: https://blueprints.launchpad.net/nova/+spec/libvirt-vif-vhost-user
Spec: https://review.openstack.org/138736

The blueprint was approved but its status was changed because of FF.
Vhostuser is a QEMU feature that provides a fast path into the VM for userspace 
vSwitches.
The changes are small and mostly contained to the libvirt driver.
Vhostuser support was proposed for Juno by the Snabb Switch team but didn't make it;
this implementation supports their use case as well.

Thanks
Przemek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-09 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2015-02-09 12:36:45 -0800:
 CAS is preferred because it is measurably faster and more 
 obstruction-free than SELECT FOR UPDATE. A colleague of mine is almost 
 ready to publish documentation showing a benchmark of this that shows 
 nearly a 100% decrease in total amount of lock/wait time using CAS 
 versus waiting for the coarser-level certification timeout to retry the 
 transactions. As mentioned above, I believe this is due to the dramatic 
 decrease in ROLLBACKs.
 

I think the missing piece of the puzzle for me was that each ROLLBACK is
an expensive operation. I figured it was like a non-local return (i.e.
'raise' in python or 'throw' in java) and thus not measurably different.
But now that I think of it, there is likely quite a bit of optimization
around the query path, and not so much around the rollback path.

The bottom of this rabbit hole is simply exquisite, isn't it? :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Joe Gordon
On Mon, Feb 9, 2015 at 2:20 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Feb 9, 2015 at 1:02 PM, John Griffith john.griffi...@gmail.com
 wrote:

 On Mon, Feb 9, 2015 at 1:56 PM, Matthew Treinish mtrein...@kortar.org
 wrote:
  On Mon, Feb 09, 2015 at 01:24:34PM -0600, Matt Riedemann wrote:
 
 
  On 2/9/2015 12:23 PM, Joe Gordon wrote:
  
  On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
  mailto:mrie...@linux.vnet.ibm.com wrote:
   
There are at least two blocking bugs:
   
1. https://bugs.launchpad.net/grenade/+bug/1419913
   
Sounds like jogo is working a javelin fix for this. I'm not aware
 of
  a patch to review though.
  
  We need to stop trying to install tempest in the same env as stable/*
 code.
  
  I should be able to revise/respond to comments shortly.
  
  https://review.openstack.org/#/c/153080/
  
  https://review.openstack.org/#/c/153702/
  
  This is also blocking my effort to pin stable dependencies (Dean's
  devstack changes are needed before we can pin stable dependencies as
 well).
  
   
2. https://bugs.launchpad.net/ceilometer/+bug/1419919
   
I'm not sure yet what's going on with this one.
   
 
  Tracking etherpad:
 
  https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015
 
 
  So I think it's time we called the icehouse branch and marked it EOL. We
  originally conditioned the longer support window on extra people
 stepping
  forward to keep things working. I believe this latest issue is just the
 latest
  indication that this hasn't happened. Issue 1 listed above is being
 caused by
  the icehouse branch during upgrades. The fact that a stable release was
 pushed
  at the same time things were wedged on the juno branch is just the
 latest
  evidence to me that things aren't being maintained as they should be.
 Looking at
  the #openstack-qa irc log from today or the etherpad about trying to
 sort this
  issue should be an indication that no one has stepped up to help with
 the
  maintenance and it shows given the poor state of the branch.
 
  If I'm not mistaken with our original support window lengths Icehouse
 would be
  EOL'd around now. So it's time we stopped pretending we'll be
 maintaining this
  branch for several more months and just go through the normal EOL
 procedure.
 


 Was this serious?  I mean, we just say; 'sorry, yes we said support
 until X; but now it's hard so we're going to drop it'.

 Tell me I'm missing something here?


 You are missing the fact that a bunch of us (Matt Treinish, myself and
 others) are frustrated by the fact that we end up fixing stable branches
 whenever they break because we touch tempest, grenade and other projects
 that require working stable branches. But we do not want to be working on
 stable branches ourselves.  I begrudgingly stepped up to work on pinning
 all requirements on stable branches, to reduce the number of times stable
 branches break and ruin my day. But my plan to cap dependencies has been
 delayed several times by stable branches breaking again and again, along
 with unwinding undesired behaviors in our testing harnesses.


Note: At least 3 of us just spent most of the day working on this instead
of developing other things.



 Most recently, stable/juno grenade broke on February 4th (due to the
 release of tempest-lib 0.2.0). This caused bug
 https://bugs.launchpad.net/grenade/+bug/1419913
 https://bugs.launchpad.net/grenade/+bug/1419913
  ( pkg_resources.ContextualVersionConflict: (oslo.config 1.4.0
 (/usr/local/lib/python2.7/dist-packages),
 Requirement.parse('oslo.config=1.6.0'), set(['tempest-lib'])). This
 specific bug is caused because we install master tempest (due to branchless
 tempest) on stable/icehouse and sync in stable/icehouse global requirements
 which not surprisingly has a conflict with tempest's requirements.  So the
 solution here is stop installing tempest and requiring it  to work with
 stable/icehouse, stable/juno and master's version of global-requirements.
 But that doesn't work because master tempest has an uncapped version of
 boto but nova stable/icehouse only works with the capped version of
 Icehouse. So we get this https://review.openstack.org/#/c/154217/1/. So
 now we are exploring dropping the EC2 tests on stable/icehouse. If that
 works, we still need to land roughly 4 more patches to unwedge this
 stable/juno grenade and prevent this type of issue from happening in the
 future.

 Lets say we EOL Icehouse, we stop running grenade on stable/juno patches.
 Meaning this bug goes away all together and stable/juno is unwedged and I
 can move forward with pinning all stable/juno requirements.

 What I expect to happen when issues like this arise is interested parties
 work together to fix things and be proactive and make stable testing more
 robust. Instead we currently have people who have no desire to work on
 stable branches maintaining them. Pinning all direct stable/* requirements
 isn't enough to make sure 

Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Russell Bryant
On 02/09/2015 04:04 PM, Doug Hellmann wrote:
 
 
 On Mon, Feb 9, 2015, at 02:40 PM, Gravel, Julie Chongcharoen wrote:
 Hello,
 I want to use oslo.messaging.RPCClient.call() to invoke a
 method on multiple servers, but not all of them. Can this
 be done and how? I read the code documentation (client.py
 and target.py). I only saw either the call used for one
 server at a time, or for all of them using the fanout
 param. Neither options is exactly what I want.
 Any response/explanation would be highly appreciated.
 
 This isn't a pattern that has come up before. Before we talk about
 adding it, I'd like to understand more about your use case. How do you
 know which servers should receive the call, for example? And are you
 actually calling and expecting a response, or do you just need to send a
 message to those servers?

If no response is needed, you might be able to use whatever is done for
notifications.  Notifications are sent out to a topic, and N servers may
be subscribed to receive those notifications.  IIRC, that isn't done
through the RPC classes, but would result in the pattern desired here.
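
To sketch what that could look like (illustrative only, not the poster's
actual services; it assumes a transport URL is already configured for
oslo.messaging):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(transport,
                                       driver='messaging',
                                       publisher_id='my-service',
                                       topic='my_notifications')

    # Every server that runs a notification listener on 'my_notifications'
    # receives this; no reply comes back to the sender.
    notifier.info({'request_id': 'abc'}, 'my_service.do_thing', {'arg': 42})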

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Joe Gordon
On Mon, Feb 9, 2015 at 1:02 PM, John Griffith john.griffi...@gmail.com
wrote:

 On Mon, Feb 9, 2015 at 1:56 PM, Matthew Treinish mtrein...@kortar.org
 wrote:
  On Mon, Feb 09, 2015 at 01:24:34PM -0600, Matt Riedemann wrote:
 
 
  On 2/9/2015 12:23 PM, Joe Gordon wrote:
  
  On Feb 9, 2015 10:04 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
  mailto:mrie...@linux.vnet.ibm.com wrote:
   
There are at least two blocking bugs:
   
1. https://bugs.launchpad.net/grenade/+bug/1419913
   
Sounds like jogo is working a javelin fix for this. I'm not aware of
  a patch to review though.
  
  We need to stop trying to install tempest in the same env as stable/*
 code.
  
  I should be able to revise/respond to comments shortly.
  
  https://review.openstack.org/#/c/153080/
  
  https://review.openstack.org/#/c/153702/
  
  This is also blocking my effort to pin stable dependencies (Dean's
  devstack changes are needed before we can pin stable dependencies as
 well).
  
   
2. https://bugs.launchpad.net/ceilometer/+bug/1419919
   
I'm not sure yet what's going on with this one.
   
 
  Tracking etherpad:
 
  https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015
 
 
  So I think it's time we called the icehouse branch and marked it EOL. We
  originally conditioned the longer support window on extra people stepping
  forward to keep things working. I believe this latest issue is just the
 latest
  indication that this hasn't happened. Issue 1 listed above is being
 caused by
  the icehouse branch during upgrades. The fact that a stable release was
 pushed
  at the same time things were wedged on the juno branch is just the latest
  evidence to me that things aren't being maintained as they should be.
 Looking at
  the #openstack-qa irc log from today or the etherpad about trying to
 sort this
  issue should be an indication that no one has stepped up to help with the
  maintenance and it shows given the poor state of the branch.
 
  If I'm not mistaken with our original support window lengths Icehouse
 would be
  EOL'd around now. So it's time we stopped pretending we'll be
 maintaining this
  branch for several more months and just go through the normal EOL
 procedure.
 


 Was this serious?  I mean, we just say; 'sorry, yes we said support
 until X; but now it's hard so we're going to drop it'.

 Tell me I'm missing something here?


You are missing the fact that a bunch of us (Matt Treinish, myself and
others) are frustrated by the fact that we end up fixing stable branches
whenever they break because we touch tempest, grenade and other projects
that require working stable branches. But we do not want to be working on
stable branches ourselves.  I begrudgingly stepped up to work on pinning
all requirements on stable branches, to reduce the number of times stable
branches break and ruin my day. But my plan to cap dependencies has been
delayed several times by stable branches breaking again and again, along
with unwinding undesired behaviors in our testing harnesses.

Most recently, stable/juno grenade broke on February 4th (due to the
release of tempest-lib 0.2.0). This caused bug
https://bugs.launchpad.net/grenade/+bug/1419913
https://bugs.launchpad.net/grenade/+bug/1419913
 ( pkg_resources.ContextualVersionConflict: (oslo.config 1.4.0
(/usr/local/lib/python2.7/dist-packages),
Requirement.parse('oslo.config>=1.6.0'), set(['tempest-lib'])). This
specific bug is caused because we install master tempest (due to branchless
tempest) on stable/icehouse and sync in stable/icehouse global requirements
which not surprisingly has a conflict with tempest's requirements.  So the
solution here is to stop installing tempest and requiring it to work with
stable/icehouse, stable/juno and master's version of global-requirements.
But that doesn't work because master tempest has an uncapped version of
boto but nova stable/icehouse only works with the capped version of
Icehouse. So we get this https://review.openstack.org/#/c/154217/1/. So now
we are exploring dropping the EC2 tests on stable/icehouse. If that works,
we still need to land roughly 4 more patches to unwedge this stable/juno
grenade and prevent this type of issue from happening in the future.

Let's say we EOL Icehouse: we stop running grenade on stable/juno patches.
Meaning this bug goes away all together and stable/juno is unwedged and I
can move forward with pinning all stable/juno requirements.

What I expect to happen when issues like this arise is interested parties
work together to fix things and be proactive and make stable testing more
robust. Instead we currently have people who have no desire to work on
stable branches maintaining them. Pinning all direct stable/* requirements
isn't enough to make sure stable/* doesn't break. There are transitive
dependencies that can change (I have a plan on how to pin those too, but it
will take time and I can use some help), and changing packages etc. can
break things as well.  Having a reactive stable 

Re: [openstack-dev] [horizon] JavaScript docs?

2015-02-09 Thread Radomir Dopieralski
On 02/05/2015 06:20 PM, Matthew Farina wrote:
 I'd like to step back for a moment as to the purpose of different kinds
 of documentation. Sphinx is great and it provides some forms of
 documentation. But, why do we document methods, classes, or functions in
 python? Should we drop that and rely on Sphinx? I don't think anyone
 would argue for that.

We actually rely on Sphinx for documenting methods, classes or
functions. Not sure what your point is here.

-- 
Radomir Dopieralski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] fuel-client and Nailgun API

2015-02-09 Thread Nikolay Markov
Hello colleagues,

They say there is some kind of holy war around the topic of whether
fuel-client tests should rely on a working Nailgun API without mocking
it. This is also connected with stabilizing the API and finally moving
fuel-client to a separate library which may be used by any third-party
project.

I just wanted to start this thread so everyone can share their opinion
on both stabilizing the Nailgun API and the further fate of fuel-client as a
separate library (the way it is done in OpenStack projects).

Everyone is welcome to participate.

-- 
Best regards,
Nick Markov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel-client and Nailgun API

2015-02-09 Thread Kamil Sambor
Hi all,

I don't know anything about the 'holy war', so I'm interested in where it took
place. Regarding the fuel-client tests, I think it would be a good idea to run
some integration tests against the Nailgun API to check whether the client
really works with Nailgun and works as expected, but unit tests can have mocks
if necessary. If we have tests run against Nailgun we can be sure that even if
responses have changed the client still works, and that when we add a new API
our client really works with the API, not only with mocked responses.

Best regards,
Kamil Sambor

On Mon, Feb 9, 2015 at 1:57 PM, Nikolay Markov nmar...@mirantis.com wrote:

 Hello colleagues,

 They say, there is some kind of holywar around the topic on if
 fuel-client tests should rely on working Nailgun API without mocking
 it. This is also connected with API stabilizing and finally moving
 fuel-client to a separate library which may be used by any third-party
 projects.

 I just wanted to start this thread so everyone can share his opinion
 on both Nailgun API stabilizing and further fate of fuel-client as a
 separate library (how they do it in OpenStack projects).

 Everyone is welcome to participate.

 --
 Best regards,
 Nick Markov

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel-client and Nailgun API

2015-02-09 Thread Roman Prykhodchenko
Actually it was not a holy war but a small discussion which got no continuation 
due to low priority and some technical problems.

The point is that ATM unit tests in python-fuelclient act like integration 
tests because they require a live instance of the Nailgun API and certain data 
in Nailgun’s DB. The straightforward solution is of course to mock everything 
and pretend to be happy. However, being happy won't last for too long, because 
once Nailgun’s API is changed, the tests will start returning false-positive 
results.

Mocking all invocations ATM will require someone to sit and watch for any 
changes to the Nailgun API and update the mocks.

Basically I stand for simplification of unit testing in python-fuelclient so 
standard python-jobs will be able to run them. However, two things should be 
done in order to do painless mocking of API calls:

 - Nailgun should have a precisely documented API, probably in machine-readable 
format.
 - There should be a precisely defined mechanism of changing and versioning 
Nailgun API.

Before that is done I see no point in mocking API calls in unit-tests.
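
For reference, a tiny sketch of what such a mocked test could look like once
the API is documented and versioned. The endpoint and helper below are
illustrative (requests-mock against a stand-in function), not the real
python-fuelclient code:

    import requests
    import requests_mock

    def list_nodes(base_url):
        # Hypothetical helper standing in for a python-fuelclient call.
        return requests.get(base_url + '/api/nodes').json()

    def test_list_nodes_without_live_nailgun():
        with requests_mock.Mocker() as m:
            # The canned response plays the role of a documented, versioned
            # Nailgun API; no live instance or seeded DB is required.
            m.get('http://nailgun.example:8000/api/nodes',
                  json=[{'id': 1, 'status': 'ready'}])
            nodes = list_nodes('http://nailgun.example:8000')
            assert nodes[0]['status'] == 'ready'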


- romcheg

 On 9 Feb 2015, at 14:03, Sebastian Kalinowski skalinow...@mirantis.com 
 wrote:
 
 Hi,
 
 2015-02-09 13:57 GMT+01:00 Nikolay Markov nmar...@mirantis.com:
 They say, there is some kind of holywar around the topic on if
 fuel-client tests should rely on working Nailgun API without mocking
 it.
 
 Could you point us where was such hollywar was, so we could get some 
 background on the topic?
 
  Best,
 Sebastian
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-09 Thread M Ranga Swami Reddy
Hi All,
I will be creating a subgroup in Nova for the EC2 APIs and start the
weekly meetings, reviews, code cleanup, and other tasks.
I will update the wiki page soon as well.

Thanks
Swami

On Fri, Feb 6, 2015 at 9:27 PM, David Kranz dkr...@redhat.com wrote:
 On 02/06/2015 07:49 AM, Sean Dague wrote:

 On 02/06/2015 07:39 AM, Alexandre Levine wrote:

 Rushi,

 We're adding new tempest tests into our stackforge-api/ec2-api. The
 review will appear in a couple of days. These tests will be good for
 running against both nova/ec2-api and stackforge/ec2-api. As soon as
 they are there, you'll be more than welcome to add even more.

 Best regards,
Alex Levine

 Honestly, I'm more more pro having the ec2 tests in a tree that isn't
 Tempest. Most Tempest reviewers aren't familiar with the ec2 API, their
 focus has been OpenStack APIs.

 Having a place where there is a review team that is dedicated only to
 the EC2 API seems much better.

 -Sean

 +1

  And once similar coverage to the current tempest ec2 tests is achieved,
 either by copying from tempest or creating anew, we should remove the ec2
 tests from tempest.

  -David



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday February 10th at 19:00 UTC

2015-02-09 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday February 10th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-03-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-03-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-03-19.01.log.html


-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Denis Makogon
On Monday, February 9, 2015, Gravel, Julie Chongcharoen julie.gra...@hp.com
wrote:

  Hello,

 I want to use oslo.messaging.RPCClient.call() to invoke a
 method on multiple servers, but not all of them. Can this be done and how?
 I read the code documentation (client.py and target.py). I only saw either
 the call used for one server at a time, or for all of them using the fanout
 param. Neither options is exactly what I want.

 Any response/explanation would be highly appreciated.



Hello, I would say that there's no need to have such an ability, since
oslo.messaging is unaware of your servers, so all you need is to
write your own code to accomplish your mission. Even if you want to execute
the call procedures at the same time, you can parallelize your code. Would that
work for you?
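
A minimal sketch of that "write your own loop" approach, assuming the servers
listen on a shared topic and you know their hostnames (Target, RPCClient and
prepare(server=...) are standard oslo.messaging APIs; everything else here is
illustrative):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='my_topic', version='1.0')
    client = oslo_messaging.RPCClient(transport, target)

    def call_on_hosts(ctxt, hosts, method, **kwargs):
        """Invoke `method` on an explicit subset of servers, one call per
        host, collecting each host's reply."""
        results = {}
        for host in hosts:
            # prepare() narrows the target to a single server, so each
            # call() goes to exactly one host and returns that host's reply.
            cctxt = client.prepare(server=host)
            results[host] = cctxt.call(ctxt, method, **kwargs)
        return results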



 Regards,

 Julie Gravel




Kind regards,
DenisM.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.db][nova] TL; DR Things everybody should know about Galera

2015-02-09 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2015-02-09 10:15:10 -0800:
 On 02/09/2015 01:02 PM, Attila Fazekas wrote:
  I do not see why not to use `FOR UPDATE` even with multi-writer or
  Is the retry/swap way really solves anything here.
 snip
  Am I missed something ?
 
 Yes. Galera does not replicate the (internal to InnnoDB) row-level locks 
 that are needed to support SELECT FOR UPDATE statements across multiple 
 cluster nodes.
 
 https://groups.google.com/forum/#!msg/codership-team/Au1jVFKQv8o/QYV_Z_t5YAEJ
 

Attila acknowledged that. What Attila was saying was that by using it
with Galera, the box that is doing the FOR UPDATE locks will simply fail
upon commit because a conflicting commit has already happened and arrived
from the node that accepted the write. Further what Attila is saying is
that this means there is not such an obvious advantage to the CAS method,
since the rollback and the # updated rows == 0 are effectively equivalent
at this point, seeing as the prior commit has already arrived and thus
will not need to wait to fail certification and be rolled back.

I am not entirely certain that is true though, as I think what will
happen in sequential order is:

writer1: UPDATE books SET genre = 'Scifi' WHERE genre = 'sciencefiction';
writer1: -- send in-progress update to cluster
writer2: SELECT * FROM books WHERE id=3 FOR UPDATE;
writer1: COMMIT
writer1: -- try to certify commit in cluster
** Here is where I stop knowing for sure what happens **
writer2: certifies writer1's transaction or blocks?
writer2: UPDATE books SET genre = 'sciencefiction' WHERE id=3;
writer2: COMMIT -- One of them is rolled back.

So, at that point where I'm not sure (please some Galera expert tell
me):

If what happens is as I suggest, writer1's transaction is certified,
then that just means the lock sticks around blocking stuff on writer2,
but that the data is updated and it is certain that writer2's commit will
be rolled back. However, if it blocks waiting on the lock to resolve,
then I'm at a loss to determine which transaction would be rolled back,
but I am thinking that it makes sense that the transaction from writer2
would be rolled back, because the commit is later.

All this to say that usually the reason for SELECT FOR UPDATE is not
to only do an update (the transactional semantics handle that), but
also to prevent the old row from being seen again, which, as Jay says,
it cannot do.  So I believe you are both correct:

* Attila, yes I think you're right that CAS is not any more efficient
at replacing SELECT FOR UPDATE from a blocking standpoint.

* Jay, yes I think you're right that SELECT FOR UPDATE is not the right
thing to use to do such reads, because one is relying on locks that are
meaningless on a Galera cluster.

Where I think the CAS ends up being the preferred method for this sort
of thing is when one considers that it won't hold a meaningless lock
while the transaction is completed and then rolled back.
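
For concreteness, a minimal sketch of the compare-and-swap style update being
discussed, written against plain SQLAlchemy Core (the table and the retry
policy are illustrative, not the actual nova/oslo.db code):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    books = sa.Table('books', metadata,
                     sa.Column('id', sa.Integer, primary_key=True),
                     sa.Column('genre', sa.String(64)))

    def cas_update_genre(conn, book_id, expected_genre, new_genre):
        """Update only if the row still holds the value we read earlier."""
        result = conn.execute(
            books.update()
                 .where(sa.and_(books.c.id == book_id,
                                books.c.genre == expected_genre))
                 .values(genre=new_genre))
        if result.rowcount == 0:
            # Another writer changed the row first. No row lock was held and
            # nothing waits on certification; the caller simply re-reads and
            # retries, which is the behaviour being contrasted with
            # SELECT FOR UPDATE above.
            raise RuntimeError('lost the race, re-read and retry')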

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [devstack] configuring https for glance client

2015-02-09 Thread Andrew Lazarev
Hi Nova experts,

Some time ago I figured out that devstack fails to stack with the USE_SSL=True
option because it doesn't configure nova to work with secured glance [1].
Support for secured glance was added to nova in the Juno cycle [2], but it
looks strange to me.

The glance client takes settings from the '[ssl]' section. The same section is
used to set up the nova server SSL settings. Other clients have separate
sections in the config file (and are switching to session use now), e.g. the
related code for cinder - [3].

I've created a quick fix for devstack - [4], but it would be nice to shed
some light on nova's plans around the glance config before merging a
workaround for devstack.

So, the questions are:
1. Is it normal that the glance client reads from the '[ssl]' config section?
2. Is there a plan to move the glance client to session use and move the
corresponding config section to '[glance]'? (A sketch of that direction
follows below.)
3. Are there any plans to run CI for the USE_SSL=True use case?

[1] - https://bugs.launchpad.net/devstack/+bug/1405484
[2] - https://review.openstack.org/#/c/72974
[3] -
https://github.com/openstack/nova/blob/2015.1.0b2/nova/volume/cinder.py#L73
[4] - https://review.openstack.org/#/c/153737
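
To illustrate question 2, a hedged sketch of what a dedicated '[glance]'
section could look like with oslo.config (the option names are illustrative,
not nova's actual options):

    from oslo_config import cfg

    # Hypothetical client-side TLS options registered under their own group,
    # instead of sharing the server-oriented [ssl] section.
    glance_opts = [
        cfg.StrOpt('ca_file', help='CA bundle used to verify the glance API.'),
        cfg.StrOpt('cert_file', help='Client certificate for the glance API.'),
        cfg.StrOpt('key_file', help='Client private key for the glance API.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(glance_opts, group='glance')

    # The glance client would then be built from CONF.glance.ca_file etc.,
    # leaving [ssl] purely for the nova API server's own certificate.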

Thanks,
Andrew.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Scheduling for Magnum

2015-02-09 Thread Adrian Otto
Steve,

On Feb 9, 2015, at 9:54 AM, Steven Dake (stdake) std...@cisco.com wrote:



From: Andrew Melton andrew.mel...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, February 9, 2015 at 10:38 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

I think Sylvain is getting at an important point. Magnum is trying to be as 
agnostic as possible when it comes to selecting a backend. Because of that, I 
think the biggest benefit to Magnum would be a generic scheduling interface 
that each pod type would implement. A pod type with a backend providing 
scheduling could implement a thin scheduler that simply translates the generic 
requests into something the backend can understand. And a pod type requiring 
outside scheduling could implement something more heavy.

If we are careful to keep the heavy scheduling generic enough to be shared 
between backends requiring it, we could hopefully swap in an implementation 
using Gantt once that is ready.

Great mid-cycle topic discussion topic.  Can you add it to the planning 
etherpad?

Yes, it was listed as #5 here:
https://etherpad.openstack.org/p/magnum-midcycle-topics

We will arrange that further up the priority list as soon as we feel that list 
is complete, and ready for sorting.

Adrian


Thanks
-steve

--Andrew


From: Jay Lau [jay.lau@gmail.com]
Sent: Monday, February 09, 2015 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Thanks Sylvain, we did not work out the API requirement till now but I think 
the requirement should be similar with nova: we need select_destination to 
select the best target host based on filters and weights.

There are also some discussions here 
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Thanks!

2015-02-09 16:22 GMT+08:00 Sylvain Bauza sba...@redhat.com:
Hi Magnum team,


On 07/02/2015 19:24, Steven Dake (stdake) wrote:


From: Eric Windisch e...@windisch.us
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Saturday, February 7, 2015 at 10:09 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum


1) Cherry pick scheduler code from Nova, which already has a working a filter 
scheduler design.

The Gantt team explored that option by the Icehouse cycle and it failed with a 
lot of problems. I won't list all of those, but I'll just explain that we 
discovered how the Scheduler and the Nova compute manager were tighly coupled, 
which was meaning that a repository fork was really difficult to do without 
reducing the tech debt.

That said, our concerns were far different from the Magnum team : it was about 
having feature parity and replacing the current Nova scheduler, while your team 
is just saying that they want to have something about containers.


2) Integrate swarmd to leverage its scheduler[2].

I see #2 as not an alternative but possibly an also. Swarm uses the Docker 
API, although they're only about 75% compatible at the moment. Ideally, the 
Docker backend would work with both single docker hosts and clusters of Docker 
machines powered by Swarm. It would be nice, however, if scheduler hints could 
be passed from Magnum to Swarm.

Regards,
Eric Windisch

Adrian  Eric,

I would prefer to keep things simple and just integrate directly with swarm and 
leave out any cherry-picking from Nova. It would be better to integrate 
scheduling hints into Swarm, but I’m sure the swarm upstream is busy with 
requests and this may be difficult to achieve.


I don't want to give my opinion about which option you should take as I don't 
really know your needs. If I understand correctly, this is about having a 
scheduler providing affinity rules for containers. Do you have a document 
explaining which interfaces you're looking for, which kind of APIs you're 
wanting or what's missing with the current Nova scheduler ?

MHO is that the technology shouldn't drive your decision : whatever the backend 
is (swarmd or an inherited nova scheduler), your interfaces should be the same.

-Sylvain


Regards
-steve




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Fuel] fuel-client and Nailgun API

2015-02-09 Thread Sebastian Kalinowski
Hi,

2015-02-09 13:57 GMT+01:00 Nikolay Markov nmar...@mirantis.com:

 They say, there is some kind of holywar around the topic on if
 fuel-client tests should rely on working Nailgun API without mocking
 it.


Could you point us to where such a holy war took place, so we could get some
background on the topic?

 Best,
Sebastian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Dmitriy Shulyak
On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:

  Well i think there should be finished_at field anyway, why not to
  add it for this purpose?

 So you're suggesting to add another column and modify all tasks for
 this one feature?


Such things as time stamps should be on all tasks anyway.


  I dont actually recall what was the reason to delete them, but if
  it happens imo it is ok to show right now that network verification
  wasnt performed.

 Is this how one does predictible and easy to understand software?
 Sometimes we'll say that verification is OK, othertimes that it wasn't
 performed?

In my opinion the question that needs to be answered is: what is the reason
or event to remove the verify_networks task history?


  3. Just having network verification status as ready is NOT enough.
  From the UI you can fire off network verification for unsaved
  changes. Some JSON request is made, network configuration validated
  by tasks and RPC call made returing that all is OK for example. But
  if you haven't saved your changes then in fact you haven't verified
  your current configuration, just some other one. So in this case
  task status 'ready' doesn't mean that current cluster config is
  valid. What do you propose in this case? Fail the task on purpose?

 Issue #3 I described is still valid -- what is your solution in this case?

OK, sorry.
What do you think if, in such a case, we remove the old tasks?
It seems to me that is exactly the event in which the old verify_networks
result is invalid anyway,
and there is no point in storing history.


 As far as I understand, there's one supertask 'verify_networks'
 (called in nailgu/task/manager.py line 751). It spawns other tasks
 that do verification. When all is OK verify_networks calls RPC's
 'verify_networks_resp' method and returns a 'ready' status and at that
 point I can inject code to also set the DB column in cluster saying
 that network verification was OK for the saved configuration. Adding
 other tasks should in no way affect this behavior since they're just
 subtasks of this task -- or am I wrong?


It is not that smooth, but in general yes - it can be done when the state of
verify_networks is changed.
But let's say we have a some_settings_verify task? Would it be valid to add
one more field on the cluster model, like some_settings_status?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Przemyslaw Kaminski


On 02/09/2015 01:18 PM, Dmitriy Shulyak wrote:
 
 On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:
 
 Well i think there should be finished_at field anyway, why not
 to add it for this purpose?
 
 So you're suggesting to add another column and modify all tasks
 for this one feature?
 
 
 Such things as time stamps should be on all tasks anyway.
 
 
 I dont actually recall what was the reason to delete them, but
 if it happens imo it is ok to show right now that network
 verification wasnt performed.
 
 Is this how one does predictible and easy to understand software? 
 Sometimes we'll say that verification is OK, othertimes that it
 wasn't performed?
 
 In my opinion the questions that needs to be answered - what is
 the reason or event to remove verify_networks tasks history?
 
 
 3. Just having network verification status as ready is NOT
 enough. From the UI you can fire off network verification for
 unsaved changes. Some JSON request is made, network configuration
 validated by tasks and RPC call made returing that all is OK for
 example. But if you haven't saved your changes then in fact you
 haven't verified your current configuration, just some other one.
 So in this case task status 'ready' doesn't mean that current
 cluster config is valid. What do you propose in this case? Fail
 the task on purpose?
 
 Issue #3 I described is still valid -- what is your solution in
 this case?
 
 Ok, sorry. What do you think if in such case we will remove old
 tasks? It seems to me that is correct event in which old
 verify_networks is invalid anyway, and there is no point to store
 history.

Well, not exactly. Configure networks, save settings, do a network check --
assume that all went fine. Now change one thing without saving and
check settings; it didn't pass, but it doesn't affect the flag because
that's a different configuration from the saved one. And your
original cluster is still OK. So in this case the user would have to
run the original check yet again. The plus of the network_check_status
column is that you don't need to store any history -- the task can be
deleted or whatever, and the last checked saved configuration still
matters. The user can perform other checks 'for free' and is not required
to rerun the working configuration checks.

With data depending on tasks you actually have to store a lot of
history, because you need to keep the last working saved configuration --
otherwise the user will have to rerun the original configuration check. So
from a usability point of view this is a worse solution.

 
 
 As far as I understand, there's one supertask 'verify_networks' 
 (called in nailgu/task/manager.py line 751). It spawns other tasks 
 that do verification. When all is OK verify_networks calls RPC's 
 'verify_networks_resp' method and returns a 'ready' status and at
 that point I can inject code to also set the DB column in cluster
 saying that network verification was OK for the saved
 configuration. Adding other tasks should in no way affect this
 behavior since they're just subtasks of this task -- or am I
 wrong?
 
 
 It is not that smooth, but in general yes - it can be done when
 state of verify_networks is changed. But lets say we have
 some_settings_verify task? Would be it valid to add one more field
 on cluster model, like some_settings_status?

Well, why not? Cluster deployment is a task and its status is saved
in a cluster column and not fetched from tasks. As you see, the logic of
network task verification is not simply based on reading the ready/error
status but is more subtle. What other settings do you have in mind? I guess
when we have more of them one can create a separate table to keep
them, but for now I don't see a point in doing this.

P.

 
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel-client and Nailgun API

2015-02-09 Thread Nikolay Markov
Sebastian, it was mostly in some internal meetings. I think Roman
Prykhodchenko was going to participate and shed some light on the topic.

On Mon, Feb 9, 2015 at 4:03 PM, Sebastian Kalinowski
skalinow...@mirantis.com wrote:
 Hi,

 2015-02-09 13:57 GMT+01:00 Nikolay Markov nmar...@mirantis.com:

 They say, there is some kind of holywar around the topic on if
 fuel-client tests should rely on working Nailgun API without mocking
 it.


 Could you point us where was such hollywar was, so we could get some
 background on the topic?

  Best,
 Sebastian

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Nick Markov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] juno is fubar in the gate

2015-02-09 Thread Alan Pevec
  Tracking etherpad:
  https://etherpad.openstack.org/p/wedged-stable-gate-feb-2015

BTW there is a tracking etherpad updated by
https://wiki.openstack.org/wiki/StableBranch#Stable_branch_champions
https://etherpad.openstack.org/p/stable-tracker
linked in https://wiki.openstack.org/wiki/StableBranch#Gate_Status and
announced on this list
http://lists.openstack.org/pipermail/openstack-dev/2015-January/05.html

From the crossed-out items in the Recently closed section you can see that
the branch champions have been busy.

 You are missing the fact that a bunch of us (Matt Treinish, myself and
 others) are frustrated by the fact that we end up fixing stable branches
 whenever they break because we touch tempest, grenade and other projects
 that require working stable branches. But we do not want to be working on
 stable branches ourselves.  I begrudgingly stepped up to work on pinning all
 requirements on stable branches, to reduce the number of times stable
 branches break and ruin my day. But my plan to cap dependencies has been
 delayed several times by stable branches breaking again and again, along
 with unwinding undesired behaviors in our testing harnesses.

 Most recently, stable/juno grenade broke on February 4th (due to the release
 of tempest-lib 0.2.0). This caused bug

So that's a change in tooling, not the stable branch itself. The idea when 15
months of support for Icehouse was discussed was that branchless Tempest would
make it easier, but now it turns out that it's making both the tooling and the
stable branches unhappy.

 What I expect to happen when issues like this arise is interested parties
 work together to fix things and be proactive and make stable testing more
 robust. Instead we currently have people who have no desire to work on
 stable branches maintaining them.

At least parts of the stable team have been proactive (see the above
etherpad), but I guess we have a communication issue here: has
anyone tried to contact the stable branch champions (juno=Adam,
icehouse=Ihar), and what exactly do you expect the stable team to do?
AFAICT these are all changes in tooling where stable-maint is not core
(devstack, tempest)...

BTW, Icehouse 2014.1.4 was planned[*] for Feb 19 with the freeze starting
on Feb 12; I'll delay it for now until we sort the current situation
out.


Cheers,
Alan


[*] 
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Ficehouse_releases

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

