[openstack-dev] [OpenStack][Nova][Cold Migration] What about enabling cold migration with a target host

2014-01-13 Thread Jay Lau
Greetings,

Cold migration currently does not support migrating a VM instance to a specified
target host. What about adding this feature so that cold migration can take a target host?

I ran into this issue because I was creating an HA service that monitors hosts
for failures and lets customers write plugins to predict host status.

If a host is going down, the customized plugin reports the status to the HA
service. The HA service then does live migration for VMs in ACTIVE state and
cold migration for VMs in STOPPED state. The problem is that the HA service
selects the target host for both cold migration and live migration. Live
migration supports migrating a VM to a target host, so I can pass the target
host returned by the HA service to live migration; cold migration does not
support migrating to a target host.
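
For illustration, the asymmetry in the current nova CLI looks roughly like this
(live migration accepts a destination host while cold migration does not):

  nova live-migration <server> <target-host>
  nova migrate <server>            # no way to pass a target host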

So what about adding this feature to nova?

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [qa] api schema validation pattern changes

2014-01-13 Thread Ken'ichi Ohmichi
Hi David,

2014/1/13 David Kranz dkr...@redhat.com:
 On 01/12/2014 10:14 PM, Matthew Treinish wrote:

 snip

 Last, is a question, is it possible to currently run the full API and
 build a json schema for it all? Or to query these validating schemas? We
 *really* want that over in tempest, so we can completely drop the manual
 creation of negative testing, and just fuzz known bad against the schema
 definitions.

 Sorry, I'm not sure I understand this question correctly.
 We need to define schemas for each API in the separate schema patches.
 It is impossible to define one schema for all APIs.


 Hi David, Marc,

 I guess the negative test generator of Tempest would need each API
 definition.
 Glance can provide API definitions through its API in jsonschema format,
 but Nova does not have such a feature.
 We need to port these API schemas from Nova to Tempest, I guess, right?

 As I understand things after reviewing the first versions of the negative
 test generator patch, we have to hard-code the schema into a file (right
 now it's in the test file, but eventually it'll be an external input file).
 One of my issues with doing that is that it's a highly manual process,
 essentially a copy and paste from the nova tree. I think what we're looking
 for from this jsonschema validation work is an API which we can query to
 get the jsonschema definitions; similar to what the glance API offers.

 I looked at the glance schema api and it seemed to just return the schema
 for the  json returned by the call, not for the arguments to the request. Am
 I wrong about that?

I think you are right. Glance feature provides the schemas of request body only.
That does not contain the http method type and the endpoint.
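
For reference, the Glance v2 schema endpoint can be queried roughly like this
(a minimal sketch in Python; the host, port and token handling here are assumptions):

  import requests

  # Glance v2 exposes resource schemas under /v2/schemas/; the image schema
  # describes the image representation itself, not the HTTP method or URL.
  resp = requests.get("http://glance-host:9292/v2/schemas/image",
                      headers={"X-Auth-Token": "a-valid-token"})
  print(resp.json())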

 Additionally, the schema for the request json in these patches is not enough
 for the negative test generator. The schema for negative generation also
 needs the http method type and a description of  the resources that are part
 of the url of the request.

Thanks for your explanation, I see. Nova's current validation feature does not
seem to be enough for the negative test generator. It would be necessary to
hard-code API schemas in Tempest.
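
As an illustration of what such a hard-coded schema and the fuzzing against it
could look like, here is a minimal sketch using the Python jsonschema library;
the schema content is a made-up example, not an actual Nova schema:

  import jsonschema

  # Hypothetical hard-coded request schema for a resize call; the real
  # schemas would be copied from the Nova tree (or, ideally, queried).
  resize_schema = {
      "type": "object",
      "properties": {
          "resize": {
              "type": "object",
              "properties": {"flavorRef": {"type": "string"}},
              "required": ["flavorRef"],
          },
      },
      "required": ["resize"],
  }

  # A negative test generator can fuzz "known bad" bodies and assert that
  # they fail schema validation (and that the API rejects them with a 4xx).
  bad_body = {"resize": {"flavorRef": 12345}}
  try:
      jsonschema.validate(bad_body, resize_schema)
  except jsonschema.ValidationError as exc:
      print("rejected as expected: %s" % exc.message)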


Thanks
Ken'ichi Ohmichi

---

 The negative test patch in the shared fork
 https://github.com/mkoderer/tempest/tree/feature/negative_tests defines such
 a schema. The tempest patch will be updated from this fork on Monday.


 I wouldn't advocate making the negative test generator be fully dynamic
 for the same reason we don't autodetect which API versions and
 extensions/features are enabled in tempest, but rather rely on the config
 file. But, instead have an additional tool which could query the schema for
 all the endpoints in nova and generate an input file for the negative test
 generator. That way we'll still catch breaking API changes in the gate, but
 it's not a manual process to update the input file with the schema
 definitions in tempest when there is a breaking API change. (Which
 hopefully should almost never happen)

 I agree, it would be ideal if there were a way for tempest to grab the
 appropriate schemas from somewhere else rather than hard-coding them in
 tempest itself. But as far as I can see,  most of the services don't even
 have json schemas defined.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-13 Thread Ken'ichi Ohmichi
Hi Doug,

2014/1/11 Doug Hellmann doug.hellm...@dreamhost.com:
 On Thu, Jan 9, 2014 at 12:02 AM, Jamie Lennox jamielen...@redhat.com
 wrote:

 Is there any way to have WSME pass through arbitrary attributes to the
 created object? There is nothing that I can see in the documentation or code
 that would seem to support this.

 In keystone we have the situation where arbitrary data was able to be
 attached to our resources. For example there are a certain number of
 predefined attributes for a user, including name and email, but if you want
 to include an address you just add an 'address': 'value' to the resource
 creation and it will be saved and returned to you when you request the
 resource.

 Ignoring whether this is a good idea or not (it's done), is there an option
 that I missed - or are there any plans/ways to support something like
 this?


 There's a change in WSME trunk (I don't think we've released it yet) that
 allows the schema for a type to be changed after the class is defined. There
 isn't any facility for allowing the caller to pass arbitrary data, though.
 Part of the point of WSME is to define the inputs and outputs of the API for
 validation.
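
To illustrate the point, a minimal sketch of a WSME type is below (not Keystone's
actual code); attributes are declared statically, so an extra key like 'address'
has no place to land:

  from wsme import types as wtypes

  # Only the declared attributes are part of the API contract; anything
  # else in the request simply isn't represented on the type.
  class User(wtypes.Base):
      name = wtypes.text
      email = wtypes.text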

Is there a plan to release a new WSME version which includes the new type classes?
I'd like to try applying these classes to Ceilometer after the release, because
Ceilometer is the best place to show how these classes are used.


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [qa] api schema validation pattern changes

2014-01-13 Thread Koderer, Marc
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Monday, 13 January 2014 05:04
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] [qa] api schema validation pattern
 changes
 
 On 01/12/2014 10:14 PM, Matthew Treinish wrote:
  snip
  Last, is a question, is it possible to currently run the full API
 an
  build a json schema for it all? Or to query these validating
  schemas? We *really* want that over in tempest, so we can
 completely
  drop the manual creation of negative testing, and just fuzz known
 bad against the schema definitions.
  Sorry, I'm not sure I understand this question correctly.
  We need to define schemas for each API with the separated schema
 patches.
  It is impossible to define one schema for all APIs.
 
 
  Hi  David, Marc,
 
  I guess the negative test generator of Tempest would need each API
 definition.
  Glance can provide API definitions through API with jsonschema
  format, but Nova does not have such feature.
  We need to port these API schema from Nova to Tempest, I guess.
 right?
 
  As I understand things after reviewing the first versions of the
  negative test generator patch is that we have to hard code the schema
  into a file (right now it's in the test file, but eventually it'll be
  an external input file). One of my issues with doing that is it's
  highly manual process, essentially a copy and paste from the nova
  tree. I think what we're looking for from this jsonschema validation
  work is an API which we can query the API and get the jsonschema
 definitions; similar to what the glance API offers.
 I looked at the glance schema api and it seemed to just return the
 schema for the  json returned by the call, not for the arguments to the
 request. Am I wrong about that?
 Additionally, the schema for the request json in these patches is not
 enough for the negative test generator. The schema for negative
 generation also needs the http method type and a description of  the
 resources that are part of the url of the request. The negative test
 patch in the shared fork
 https://github.com/mkoderer/tempest/tree/feature/negative_tests
 defines such a schema. The tempest patch will be updated from this fork
 on Monday.
 
  I wouldn't advocate making the negative test generator be fully
  dynamic for the same reason we don't autodetect which API versions
 and
  extensions/features are enabled in tempest, but rather rely on the
  config file. But, instead have an additional tool which could query
  the schema for all the endpoints in nova and generate an input file
  for the negative test generator. That way we'll still catch breaking
  API changes in the gate, but it's not a manual process to update the
  input file with the schema definitions in tempest when there is a
  breaking API change. (Which hopefully should almost never happen)
 I agree, it would be ideal if there were a way for tempest to grab the
 appropriate schemas from somewhere else rather than hard-coding them in
 tempest itself. But as far as I can see,  most of the services don't
 even have json schemas defined.

I think in the end the hard-coded json schema should be optional.
If the endpoint supports exporting the schema, the generator simply takes it and
puts the result automatically into the description of the negative test.
For me this is a second step, since currently we are already duplicating
this somehow in our manual negative tests. What we would need is
at least one interface that exposes the schema.

 Marc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate][cafe] Climate and Café teams meeting

2014-01-13 Thread Nikolay Starodubtsev
Hi, all!
Guys, both our teams (Climate and Cafe) want to hold a cross-team meeting
to discuss our future plans. If you want to participate, please let us know
what times work well for you.

Useful links:
* Climate https://launchpad.net/climate
* Cafe https://wiki.openstack.org/wiki/Cafe




Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Dina Belova
Guys, that's true :) Is it possible to update the keystoneclient package on
PyPI?

Thank you!

Dina


On Fri, Dec 27, 2013 at 1:24 PM, Nikolay Starodubtsev 
nstarodubt...@mirantis.com wrote:

 Hi all,
 Guys, I want to say that the keystoneclient package on PyPI is too old. For
 example it doesn't have the Client function in keystoneclient/client.py. Maybe
 someone can help me with this?



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Sylvain Bauza

On 27/12/2013 10:24, Nikolay Starodubtsev wrote:

Hi all,
Guys, I want to say that the keystoneclient package on PyPI is too old.
For example it doesn't have the Client function in keystoneclient/client.py.
Maybe someone can help me with this?





Speaking of python-keystoneclient, the latest release is 0.4.1, which is
indeed pretty old (Havana release timeframe).
Any chance of getting a fresher release soon? The only solution as of now
is pointing to the master eggfile, which is really bad...
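
(Pointing to the master eggfile here means installing straight from the git
repository rather than from PyPI, along the lines of the following -- the exact
URL and egg name are only illustrative:

  pip install -e git+https://github.com/openstack/python-keystoneclient.git#egg=python-keystoneclient

which is workable for development but not something packaged deployments should
rely on.)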


Thanks,
-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Irena Berezovsky
Hi,
After having had a lot of discussions both on IRC and the mailing list, I would like
to suggest defining basic use cases for PCI pass-through network support with an
agreed list of limitations and assumptions, and implementing them. By doing this
Proof of Concept we will be able to deliver basic PCI pass-through network
support in the Icehouse timeframe and better understand how to provide a complete
solution, starting from tenant/admin API enhancement, enhancing nova-neutron
communication, and eventually providing a neutron plugin supporting PCI
pass-through networking.
We can try to split tasks between the currently involved participants and bring up
the basic case. Then we can enhance the implementation.
Having more knowledge and experience with the neutron parts, I would like to start
working on the neutron mechanism driver support. I have already started to arrange
the following blueprint doc based on everyone's ideas:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit

For the basic PCI pass-through networking case we can assume the following:

1.   Single provider network (PN1)

2.   White list of available SRIOV PCI devices for allocation as NIC for 
neutron networks on provider network  (PN1) is defined on each compute node

3.   Support directly assigned SRIOV PCI pass-through device as vNIC. (This 
will limit the number of tests)

4.   More 


If my suggestion seems reasonable to you, let's try to reach an agreement and 
split the work during our Monday IRC meeting.

BR,
Irena

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Saturday, January 11, 2014 8:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Comments with prefix [yjiang5_2], including the double confirmation.

I think we (you and me) are mostly on the same page; would you please give a
summary, and then we can have the community, including Irena/Robert, check it.
We need cores to sponsor it. We should check with John to see if this is
different from his mental picture, and we may need a neutron core (I assume
Cisco has a bunch of Neutron cores :) ) to sponsor it?

And, can anyone from Cisco help on the implementation? After this long
discussion, we are in the second half of the I release and I'm not sure if Yongli and I
alone can finish it in the I release.

Thanks
--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 6:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason it is not easy to find a flexible/distinguishable
 change to pci_whitelist is because it combines two things. So a stupid/naive
 solution in my head is: change it to a VERY generic name,
 'pci_devices_information',

 and change the schema to an array of {'devices_property'= regex exp, 'group_name'
 = 'g1'} dictionaries, where the device_property expression can be 'address == xxx,
 vendor_id == xxx' (i.e. similar to the current white list), and we can squeeze
 more into pci_devices_information in future, like 'network_information'
 = xxx or the Neutron-specific information you required in the previous mail.
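
For illustration only, the array-of-dictionaries format described above might
look roughly like this (the key names come from this discussion; the values are
just placeholders):

  pci_devices_information = [
      {"devices_property": "vendor_id == 8086, product_id == 10fb",
       "group_name": "g1"},
      {"devices_property": "address == 0000:08:00.*",
       "group_name": "g2", "network_id": "net-id-1"}
  ]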


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ match = { class = Acme inc. discombobulator }, info = { group = we like 
teh groups, volume = 11 } }

[yjiang5_2] To double confirm: 'match' is the whitelist, and 'info' is the 'extra
info', right? Can the keys be more meaningful, for example
s/match/pci_device_property/, s/info/pci_device_info/, or s/match/pci_devices/,
etc.?
Also I assume the class should be the class code in the configuration space, and
be numeric, am I right? Otherwise it's not easy to get the 'Acme inc.
discombobulator' information.



 All keys other than 'device_property' become extra information, i.e.
 software-defined properties. This extra information will be carried with the
 PCI devices. Some implementation details: A) we can limit the acceptable
 keys, like we only support 'group_name', 'network_id', or we can accept any
 keys other than reserved ones (vendor_id, 

Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jaromir Coufal

Hi Jeff,

many thanks for the great feedback. A few comments are inline:

On 2014/10/01 16:16, Walls, Jeffrey Joel (Cloud OS RD) wrote:

Jarda,

I love how this is progressing.  It will be very nice once it's implemented!

The iconography seems to be inconsistent.  The ! triangle is used for error conditions 
and warning conditions; and the x hexagon is also used for error conditions.
You are right; in one version I used the first set, in the other I used the other
set, and I need to make it consistent. Due to rapid wireframing I missed this
one (and I am sure there will be more such mistakes). Thanks for
pointing this out :)



For the Roles usage, will the user be able to scroll back and see more than the 
last month's usage?
On the overview pages, I wanted to keep it simple and show just the last
month. But if you click on the usage percentage (or the detailed statistics
icon), you should get a modal window with detailed statistics, where you
should be able to set the timeframe you are interested in (with more details,
of course).



For configuration, will it be possible to supply default values to these and 
let the user change them only if they want to?  For some values it's probably 
not possible, but for others it will be.  The fewer things the user has to 
enter the better.
I completely agree here. If we can provide default values, we should set
them up. Unfortunately, for the fields which are listed in the wireframes, I
think we can only estimate them. Anyway, I guess that the whole set of
configurable items will change over time based on feedback.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Storyboard sprint around FOSDEM

2014-01-13 Thread Thierry Carrez
Thierry Carrez wrote:
 In case you're not familiar with it, Storyboard[1] is a cross-project
 task tracking tool that we are building to replace our usage of
 Launchpad bugs & blueprints.
 
 We plan to have a 2-day sprint just before FOSDEM in Brussels to make
 the few design and architectural hard calls that are needed to bring
 this from POC state to a dogfoodable, continuously-deployed system.
 
 We already have 4/6 people signed up, so if you're interested in joining,
 please reply to this thread ASAP so that we can book the relevant space.
 
 Date/Location: January 30-31 in Brussels, Belgium
 
 (FOSDEM[2] is February 1-2 in the same city, so you can combine the two)
 
 [1] http://git.openstack.org/cgit/openstack-infra/storyboard
 [2] http://fosdem.org/

More details on the sprint are now available at:
https://wiki.openstack.org/wiki/StoryBoard/Brussels_Sprint

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jaromir Coufal

Hi Jay,

thanks for your questions, they are great. I am going to answer inline:

On 2014/10/01 17:18, Jay Dobies wrote:

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will
Ceilometer provide the averages for the role or is that calculated by
Tuskar?
The usage of roles is a new metric which doesn't exist yet. It is the most
consumed HW resource (which means if CPU is consumed at 60 % and RAM and
disk are consumed less, then the role usage is 60 %). It would be great to have
such a metric from Ceilometer. However, I don't know how much support
they will give us. We can get the partial metrics (CPU, RAM, disk) from
Ceilometer, but the final Role usage is questionable.
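
A minimal sketch of the calculation described above, assuming the per-resource
utilisation percentages are already available (e.g. from Ceilometer):

  def role_usage(cpu_percent, ram_percent, disk_percent):
      # Role usage is defined here as the most consumed resource.
      return max(cpu_percent, ram_percent, disk_percent)

  # e.g. CPU at 60 %, RAM at 40 %, disk at 25 %  ->  role usage 60 %
  print(role_usage(60, 40, 25))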



- When on the change deployments screen, after making a change but not
yet applying it, how are the projected capacity changes calculated?
At the moment, I am working with a one-time change. Which means it
appears in a modal window, and if I don't apply the change, it gets
canceled. So we don't have to store 'change' values at all, at least for
now.



- For editing a role, does it make a new image with the changes to what
services are deployed each time it's saved?
So, there are two things - one is the provisioning image. We are not
dealing with the image builder at the moment, so the image already contains
services which we should be able to discover (what OpenStack services
are included there). Then you go to the services tab and enable/disable
which services are provided within a role, plus their configuration.


I would expect that each time I change some Role settings, it gets
applied (which might mean re-provisioning nodes if needed). However, I
think that is only the case when you change the provisioning image.



- When a role is edited, if it has existing nodes deployed with the old
version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?
I would expect any Role change to be applied immediately. If there is
some change where I want to keep the older nodes as they are set up and
apply new settings only to newly added nodes, I would create a new Role instead.



- I don't see any indication that the role scaling process is taking
place. That's a potentially medium/long running operation, we should
have some sort of way to inform the user it's running and if any errors
took place.
That's correct, I didn't provide that view yet. I was focusing more on the
views with settings and config than on the flow. But I will add this view
as well. I completely agree it is needed.



That last point is a bit of a concern for me. I like the simplicity of
what the UI presents, but the nature of what we're doing doesn't really
fit with that. I can click the count button to add 20 nodes in a few
seconds, but the execution of that is a long running, asynchronous
operation. We have no means of reflecting that it's running, nor finding
any feedback on it as it runs or completes.

As I mentioned above, yeah, you are right here. I will reflect that.


Related question. If I have 20 instances and I press the button to scale
it out to 50, if I immediately return to the My Deployment screen what
do I see? 20, 50, or the current count as they are stood up?

I'll try to send the screen soon.

A related question is - when we send the heat change, are the nodes immediately
ready for use once each node is provisioned? Or, when a node is
provisioned, does it wait for the heat template to finish so that
they all go into operation together?



It could all be written off as a future feature, but I think we should
at least start to account for it in the wireframes. The initial user
experience could be off putting if it's hard to discern the difference
between what I told the UI to do and when it's actually finished being
done.

It's also likely to influence the ultimate design as we figure out who
keeps track of the running operations and their results (for both simple
display purposes to the user and auditing reasons).


Once more, thanks a lot for all the questions, Jay; they help to
clarify a lot of details in the background.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Mathieu Rohon
Hi,

This is something that we could potentially implement during the
implementation of the isolated-network bp [1].
Basically, on an isolated network, an ARP responder will respond to
ARP requests. For an L2 network which is totally isolated, the ARP
responder will only respond to ARP requests for the gateway; other
broadcast requests will be dropped (except for DHCP requests).

We could enhance this feature to populate the ARP responder so that if
tenant A and tenant B want to be able to communicate on this shared
and isolated network, the ARP responder for the VMs of tenant A will be
populated with the MAC addresses of the VMs of tenant B, and vice versa.

[1] https://blueprints.launchpad.net/neutron/+spec/isolated-network

On Fri, Jan 10, 2014 at 10:00 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Fri, 2014-01-10 at 17:06 +, CARVER, PAUL wrote:
 If anyone is giving any thought to networks that are available to
 multiple tenants (controlled by a configurable list of tenants) but
 not visible to all tenants I’d like to hear about it.

 I’m especially thinking of scenarios where specific networks exist
 outside of OpenStack and have specific purposes and rules for who can
 deploy servers on them. We’d like to enable the use of OpenStack to
 deploy to these sorts of networks but we can’t do that with the
 current “shared or not shared” binary choice.

 Hi Paul :) Please see here:

 https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07268.html

 for a similar discussion.

 best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Stephen Gran

Hi,

I don't think that's what's being asked for - just that there be more
than the current check for '(is owner of network) or (shared)'.


If the data point could be 'enabled for network' for a given tenant,
that would be more flexible.


Cheers,

On 13/01/14 10:06, Mathieu Rohon wrote:

Hi,

This is something that we potentially could implement during the
implementation of the isolated-network bp [1]
Basically, on an isolated network, an ARP responder will respond to
ARP request. For an L2 network which is totally isolated, ARP
responder will only respond to arp-request of the gateway, other
broadcast requests will be dropped (except for DHCP requests)

We could enhance this feature to populate the arp-responder so that if
tenant A and tenant B wants to be able to communicate on this shared
and isolated network, ARP responder for the VM of tenant A will be
populated with Mac address of VM of the Tenant B, and vice versa.

[1] https://blueprints.launchpad.net/neutron/+spec/isolated-network

On Fri, Jan 10, 2014 at 10:00 PM, Jay Pipes jaypi...@gmail.com wrote:

On Fri, 2014-01-10 at 17:06 +, CARVER, PAUL wrote:

If anyone is giving any thought to networks that are available to
multiple tenants (controlled by a configurable list of tenants) but
not visible to all tenants I’d like to hear about it.

I’m especially thinking of scenarios where specific networks exist
outside of OpenStack and have specific purposes and rules for who can
deploy servers on them. We’d like to enable the use of OpenStack to
deploy to these sorts of networks but we can’t do that with the
current “shared or not shared” binary choice.


Hi Paul :) Please see here:

https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07268.html

for a similar discussion.

best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jaromir Coufal

On 2014/10/01 19:02, Dougal Matthews wrote:

Hi,

Thanks for the wireframes and the walkthrough. Very useful. I've a few
comments.

- I'd like to echo the comments from the recording about Role I think
the term probably isn't specific enough but I don't have a great
suggestion. However, this is probably suited better to the other thread.



- We will have a number of long processes, for example, when a deploy or
re-size is happening. How do we keep the user informed of the progress
and errors? I don't see anything in the wireframes, but maybe there is a
Horizon standard approach I'm less familiar with. For example, I have 50
compute nodes, then I add 10 but I want to know how many are ready etc.
That's correct, I already replied to Jay Dobies' mail on a similar topic. 
I will try to visualize that process in the wireframes as well in the next days.



- If I remove some instances, do I as the administrator need to care
which are removed? Do we need to choose or be informed at the end?
This is a great question, on which we have had long debates. I am convinced 
that I, as an administrator, do care which nodes I want to free up.


But the current TripleO approach uses a heat template, and there we can 
just specify the number of nodes of a specific role. So it means that I 
decrease from 10 to 9 instances and the app takes care of choosing some 
node to be removed (AFAIK heat removes the last added node).


So what we can do at the moment (until there is some way to specify 
which node to remove) is to at least inform the user which nodes were 
removed in the end.


In the future, I'd like to give the user both options - just decrease the 
number and let the system decide which nodes are going to be removed (but 
at least inform the user in advance which nodes are the chosen ones), or 
let the user choose by themselves.


Thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jaromir Coufal

On 2014/10/01 21:17, Jay Dobies wrote:

Another question:

- A Role (sounds like we're moving away from that so I'll call it
Resource Category) can have multiple Node Profiles defined (assuming I'm
interpretting the + and the tabs in the Create a Role wireframe
correctly). But I don't see anywhere where a profile is selected when
scaling the Resource Category. Is the idea behind the profiles that you
can select how much power you want to provide in addition to how many
nodes?


Yes, that is correct, Jay. I mentioned that in the walkthrough and in the 
wireframes with the note 'More views needed (for deploying, scaling, 
managing roles)'.


I would say there might be two approaches - one is to specify which node 
profile you want to scale in order to select how much power you want to add.


The other approach is just to scale the number of nodes in a role and 
let the system decide the best match (which node profile is chosen will 
probably be decided on the best fit).


I lean towards the first approach, where you specify which role and which 
node profile you want to use for scaling. However, this is just an 
introduction of the idea and I believe we can get answers before we get 
to that step.


Any preferences for one of above mentioned approaches?

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Ladislav Smola

Hello,

some answers below:

On 01/10/2014 05:18 PM, Jay Dobies wrote:

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will 
Ceilometer provide the averages for the role or is that calculated by 
Tuskar?


Definitely Ceilometer, though right now it can't do this aggregate 
query. It is in progress, though, and 'should land' early in I-3. I am not 
sure whether the backup plan should be computing it in Tuskar.


- When on the change deployments screen, after making a change but not 
yet applying it, how are the projected capacity changes calculated?


I believe we wanted to use just a simple algorithm that assumes that 
adding a new node would spread the load between the nodes 
equally. Though this depends on how the overcloud is set up.
It would have to be set up in such a way that, after adding new hardware, 
the overcloud would migrate VMs over the nodes equally (which is not 
always achievable).

So I am not really sure about this part.



- For editing a role, does it make a new image with the changes to 
what services are deployed each time it's saved?


I would say no; the image should be created during heat stack update/create, 
right? So we would need to track that it has been changed but not yet deployed.




- When a role is edited, if it has existing nodes deployed with the 
old version, are the automatically/immediately updated? If not, how do 
we reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?




We will probably have to store image metadata in Tuskar, which would map 
to Glance once the image is generated. I would say we need to store the 
list of the elements and probably the commit hashes (because elements 
can change). It should also be versioned, as the images in Glance will 
also be versioned.
We probably can't store it in Glance, because we will first store the 
metadata and then generate the image. Right?


Then we could see whether an image was created from the metadata and 
whether that image was used in the heat template. With versions we could 
also see what has changed.


But there was also the idea that there will be some generic image 
containing all services, and we would just configure which services to 
start. In that case we would need to version this as well.


- I don't see any indication that the role scaling process is taking 
place. That's a potentially medium/long running operation, we should 
have some sort of way to inform the user it's running and if any 
errors took place.


That last point is a bit of a concern for me. I like the simplicity of 
what the UI presents, but the nature of what we're doing doesn't 
really fit with that. I can click the count button to add 20 nodes in 
a few seconds, but the execution of that is a long running, 
asynchronous operation. We have no means of reflecting that it's 
running, nor finding any feedback on it as it runs or completes.


Related question. If I have 20 instances and I press the button to 
scale it out to 50, if I immediately return to the My Deployment 
screen what do I see? 20, 50, or the current count as they are stood up?



Agreed, this is missing. I think right now we are able to show that the stack 
is being deployed and how many nodes are already deployed. A page like 
this will be quite important for I.





It could all be written off as a future feature, but I think we should 
at least start to account for it in the wireframes. The initial user 
experience could be off putting if it's hard to discern the difference 
between what I told the UI to do and when it's actually finished being 
done.


It's also likely to influence the ultimate design as we figure out who 
keeps track of the running operations and their results (for both 
simple display purposes to the user and auditing reasons).



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf 




Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus on our initial steps the most (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [tempest] testr duplicate test id detected

2014-01-13 Thread Bhuvan Arumugam
On Mon, Jan 13, 2014 at 12:04 PM, Robert Collins
robe...@robertcollins.netwrote:


 On 13 Jan 2014 12:15, Bhuvan Arumugam bhu...@apache.org wrote:

  It behaves differently when I list tempest tests. All tests are listed in a
 single line with unicode characters. The command exits with a non-zero exit
 code, with import errors. Any idea what's going on?

 The import errors are due to modules not importing. Try importing them by
 hand to see the  specific error for a specific import.


That was precisely the case, Robert!
A few tests couldn't import oslo.config, as the configuration file
tempest.conf didn't exist in the default location. The import errors were too
cryptic to identify the real problem unless we verified each import explicitly.

The issue was resolved after copying the tempest.conf file into the ./etc/ directory.
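
For anyone hitting the same thing, the fix amounts to roughly the following,
assuming a tempest checkout with the sample config in etc/:

  cp etc/tempest.conf.sample etc/tempest.conf
  testr list-tests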

-Bhuvan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Ian Wells
Irena, have a word with Bob (rkukura on IRC, East coast), he was talking
about what would be needed already and should be able to help you.
Conveniently he's also core. ;)
-- 
Ian.


On 12 January 2014 22:12, Irena Berezovsky ire...@mellanox.com wrote:

 Hi John,
 Thank you for taking the initiative and summing up the work that needs to be
 done to provide PCI pass-through network support.
 The only item I think is missing is the neutron support for PCI
 pass-through. Currently we have the Mellanox plugin that supports PCI
 pass-through assuming the Mellanox adapter card's embedded switch technology. But
 in order to have fully integrated PCI pass-through networking support for
 the use cases Robert listed in the previous mail, generic neutron PCI
 pass-through support is required. This can be enhanced with vendor-specific
 tasks that may differ (Mellanox embedded switch vs Cisco 802.1BR), but there
 is still a common part of being a PCI-aware mechanism driver.
 I have already started with definition for this part:

 https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit#
 I also plan to start coding soon.

 Depending on how it goes, I can also take the nova parts that integrate with
 neutron APIs from item 3.

 Regards,
 Irena

 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Friday, January 10, 2014 4:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support

 Apologies for this top post, I just want to move this discussion towards
 action.

 I am traveling next week so it is unlikely that I can make the meetings.
 Sorry.

 Can we please agree on some concrete actions, and who will do the coding?
 This also means raising new blueprints for each item of work.
 I am happy to review and eventually approve those blueprints, if you email
 me directly.

 Ideas are taken from what we started to agree on, mostly written up here:
 https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions


 What doesn't need doing...
 

 We have PCI whitelist and PCI alias at the moment, let's keep those names
 the same for now.
 I personally prefer PCI-flavor rather than PCI-alias, but let's discuss
 any rename separately.

 We seemed happy with the current system (roughly) around GPU passthrough:
 nova flavor-key three_GPU_attached_30GB set pci_passthrough:alias=
 large_GPU:1,small_GPU:2
 nova boot --image some_image --flavor three_GPU_attached_30GB some_name

 Again, we seemed happy with the current PCI whitelist.

 Sure, we could optimise the scheduling, but again, please keep that a
 separate discussion.
 Something in the scheduler needs to know how many of each PCI alias are
 available on each host.
 How that information gets there can be change at a later date.

 PCI alias is in config, but its probably better defined using host
 aggregates, or some custom API.
 But lets leave that for now, and discuss it separately.
 If the need arrises, we can migrate away from the config.


 What does need doing...
 ==

 1) API & CLI changes for nic-type, and associated tempest tests

 * Add a user visible nic-type so users can express one of several network
 types.
 * We need a default nic-type, for when the user doesn't specify one (might
 default to SRIOV in some cases)
 * We can easily test the case where the default is virtual and the user
 expresses a preference for virtual
 * Above is much better than not testing it at all.

 nova boot --flavor m1.large --image image_id
   --nic net-id=net-id-1
   --nic net-id=net-id-2,nic-type=fast
   --nic net-id=net-id-3,nic-type=fast vm-name

 or

 neutron port-create
   --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
   --nic-type=slow | fast | foobar
   net-id
 nova boot --flavor m1.large --image image_id --nic port-id=port-id

 Where nic-type is just an extra bit metadata string that is passed to nova
 and the VIF driver.


 2) Expand PCI alias information

 We need extensions to PCI alias so we can group SRIOV devices better.

 I still think we are yet to agree on a format, but I would suggest this as
 a starting point:

 {
  "name": "GPU_fast",
  "devices": [
   {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
   {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}
  ],
  "sriov_info": {}
 }

 {
  "name": "NIC_fast",
  "devices": [
   {"vendor_id": "1137", "product_id": "0071", "address": "0:[1-50]:2:*", "attach-type": "macvtap"},
   {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
  ],
  "sriov_info": {
   "nic_type": "fast",
   "network_ids": ["net-id-1", "net-id-2"]
  }
 }

 {
  "name": "NIC_slower",
  "devices": [
   {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
   {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}
  ],
  "sriov_info": {
   "nic_type": "fast",
   "network_ids": ["*"]   # this means it could attach to any network
  }
 }

 The idea being the VIF driver gets passed this info, when 

Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Ladislav Smola

Hi,

some answers inline:

On 01/13/2014 10:47 AM, Jaromir Coufal wrote:

Hi Jay,

thanks for your questions, they are great. I am going to answer inline:

On 2014/10/01 17:18, Jay Dobies wrote:

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will
Ceilometer provide the averages for the role or is that calculated by
Tuskar?
The usage of roles is new metric which doesn't exist. It is the most 
consumed HW resource (which means if CPU is consumed by 60 % and RAM 
or disk are less, then the role usage is 60 %). It would be great to 
have such a metric from Ceilometer. However, I don't know how much 
support they will give us. We can get partial metrics (CPU, RAM, Disk) 
from Ceilometer, but the final Role usage is questionable.


We will be able to get the averages of 3 meters in one query, so we should be 
able to easily determine which metrics we want to show. As I am thinking 
about it, I would like to show which metric we are displaying anyway, because 
naming max(CPU, RAM, Disk) 'capacity' might be confusing.





- When on the change deployments screen, after making a change but not
yet applying it, how are the projected capacity changes calculated?
At the moment, I am working with one time change. Which means - it 
appears in modal window, and if I don't apply the change, it will get 
canceled. So we don't have to store 'change' values anyhow. At least 
for now.



- For editing a role, does it make a new image with the changes to what
services are deployed each time it's saved?
So, there are two things - one thing is provisioning image. We are not 
dealing with image builder at the moment. So the image already 
contains services which we should be able to discover (what OpenStack 
services are included there). And then you go to service tab and 
enable/disable which services are provided within a role + their 
configuration.


I would expect, that each time I change some Role settings, it gets 
applied (which might mean re-provisioning nodes if needed). However, I 
think it is only the case when you change provisioning image.



- When a role is edited, if it has existing nodes deployed with the old
version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?
I would expect any Role change to be applied immediately. If there is 
some change where I want to keep older nodes how they are set up and 
apply new settings only to new added nodes, I would create new Role then.




Hmm, I would rather see a preview page, because it's quite a dangerous 
operation. Though that's future talk.


If there is some change where I want to keep older nodes how they are 
set up and apply new settings only to newly added nodes - this should not 
ever be possible. All nodes under the Role have to be the same.


I believe Jay was asking about the preview page. So if it won't be 
immediately updated, you would store what you want to update. Then you 
could even see it all summarized on a preview page before you hit 'update'.



- I don't see any indication that the role scaling process is taking
place. That's a potentially medium/long running operation, we should
have some sort of way to inform the user it's running and if any errors
took place.
That's correct, I didn't provide that view yet. I was more focusing on 
views with settings and cofnig then the flow. But I will add this view 
as well. I completely agree it is needed.



That last point is a bit of a concern for me. I like the simplicity of
what the UI presents, but the nature of what we're doing doesn't really
fit with that. I can click the count button to add 20 nodes in a few
seconds, but the execution of that is a long running, asynchronous
operation. We have no means of reflecting that it's running, nor finding
any feedback on it as it runs or completes.

As I mentioned above, yeah, you are right here. I will reflect that.


Related question. If I have 20 instances and I press the button to scale
it out to 50, if I immediately return to the My Deployment screen what
do I see? 20, 50, or the current count as they are stood up?

I'll try to send the screen soon.

Related question is - when send heat change, are the nodes immediately 
ready for use once each node is provisioned? Or... when node is 
provisioned, it waits for the heat template to get finished and then 
they all get to operation together?




I would say that it depends on the node. E.g. once a compute node is 
registered with the overcloud nova scheduler, you can start to use it. It 
should be similar for the others. This applies only to stack-update.



It could all be written off as a future feature, but I think we should
at least start to account for it in the wireframes. The initial user
experience could be off putting if it's hard to discern the difference
between what I told the UI to do and when it's actually finished 

Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Dougal Matthews

On 13/01/14 10:37, Jaromir Coufal wrote:

So what we can do at the moment (until there is some way to specify
which node to remove) is to inform user, which nodes were removed in the
end... at least.


Sounds like a good compromise for now. I think we probably then need 
feedback from administrators about this.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Bhuvan Arumugam
On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick
sskripn...@mirantis.comwrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


 I was supposed to fix oslo.processutils.ssh with this class, but it may
 be fixed without it, not big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


 Fabric is too much for just command execution on a remote server. Spur
 seems like a good choice for this.


I'd go with Fabric. It supports several remote server operations, file
upload/download among them. We could just import the methods we are
interested in. It in turn uses paramiko, supporting most ssh client options.
If we begin using fabric for file upload/download, it'll open the door for more
remote server operations. Bringing fabric in as part of oslo would be cool.

A quick demo script to upload/download files using fabric.

from fabric.api import get, put, run, settings

remote_host = 'localhost'

with settings(host_string=remote_host):
    # what is the remote hostname?
    run('hostname -f')
    # download /etc/hosts file
    get('/etc/hosts')
    # upload /etc/bashrc to /tmp directory
    put('/etc/bashrc', '/tmp/bashrc')

The output may look like:

[localhost] run: hostname -f
[localhost] out: rainbow.local
[localhost] out:

[localhost] download: /Users/bhuvan/localhost/hosts - /etc/hosts
[localhost] put: /etc/bashrc - /tmp/bashrc

-- 
Regards,
Bhuvan Arumugam
www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Thierry Carrez
Sylvain Bauza wrote:
 On 27/12/2013 10:24, Nikolay Starodubtsev wrote:
 Hi all,
 Guys, I want to say that keystoneclient package on pypi is too old.
 For example it hadn't Client func in keystoneclient/client.py. May be
 someone can help me with this?
 
 Speaking of python-keystoneclient, the latest release is 0.4.1, which is
 indeed pretty old (Havana release timeframe).
 Any chance to get a fresher release soon ? The only solution as of now
 is pointing to the master eggfile, which is really bad...

The solution is for the PTL to tag a new version, and then it will
appear on PyPI.
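
(Roughly, a release is cut by pushing a signed tag to Gerrit, along the lines of
the following -- the remote name and version number here are only illustrative:

  git tag -s 0.4.2 -m "python-keystoneclient 0.4.2"
  git push gerrit 0.4.2

after which the release automation builds the tarball and publishes it to PyPI.)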

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Irena Berezovsky
Ian,
That's great news.
Thank you for bringing this effort to Bob's attention. I'll look for Bob on IRC 
to get the details.
And of course, core support raises our chances of getting PCI pass-through 
networking into Icehouse.

BR,
Irena

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, January 13, 2014 2:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Irena, have a word with Bob (rkukura on IRC, East coast), he was talking about 
what would be needed already and should be able to help you.  Conveniently he's 
also core. ;)
--
Ian.

On 12 January 2014 22:12, Irena Berezovsky 
ire...@mellanox.com wrote:
Hi John,
Thank you for taking an initiative and summing up the work that need to be done 
to provide PCI pass-through network support.
The only item I think is missing is the neutron support for PCI pass-through. 
Currently we have Mellanox Plugin that supports PCI pass-through assuming 
Mellanox Adapter card embedded switch technology. But in order to have fully 
integrated  PCI pass-through networking support for the use cases Robert listed 
on previous mail, the generic neutron PCI pass-through support is required. 
This can be enhanced with vendor specific task that may differ (Mellanox 
Embedded switch vs Cisco 802.1BR), but there is still common part of being PCI 
aware mechanism driver.
I have already started with definition for this part:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit
I also plan to start coding soon.

Depends on how it goes, I can take also nova parts that integrate with neutron 
APIs from item 3.

Regards,
Irena

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Friday, January 10, 2014 4:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support
Apologies for this top post, I just want to move this discussion towards action.

I am traveling next week so it is unlikely that I can make the meetings. Sorry.

Can we please agree on some concrete actions, and who will do the coding?
This also means raising new blueprints for each item of work.
I am happy to review and eventually approve those blueprints, if you email me 
directly.

Ideas are taken from what we started to agree on, mostly written up here:
https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions


What doesn't need doing...


We have PCI whitelist and PCI alias at the moment, let's keep those names the 
same for now.
I personally prefer PCI-flavor rather than PCI-alias, but let's discuss any 
rename separately.

We seemed happy with the current system (roughly) around GPU passthrough:
nova flavor-key three_GPU_attached_30GB set pci_passthrough:alias= 
large_GPU:1,small_GPU:2
nova boot --image some_image --flavor three_GPU_attached_30GB some_name

Again, we seemed happy with the current PCI whitelist.

Sure, we could optimise the scheduling, but again, please keep that a separate 
discussion.
Something in the scheduler needs to know how many of each PCI alias are 
available on each host.
How that information gets there can be change at a later date.

PCI alias is in config, but its probably better defined using host aggregates, 
or some custom API.
But lets leave that for now, and discuss it separately.
If the need arrises, we can migrate away from the config.


What does need doing...
==

1) API & CLI changes for nic-type, and associated tempest tests

* Add a user visible nic-type so users can express one of several network 
types.
* We need a default nic-type, for when the user doesn't specify one (might 
default to SRIOV in some cases)
* We can easily test the case where the default is virtual and the user 
expresses a preference for virtual
* Above is much better than not testing it at all.

nova boot --flavor m1.large --image image_id
  --nic net-id=net-id-1
  --nic net-id=net-id-2,nic-type=fast
  --nic net-id=net-id-3,nic-type=fast vm-name

or

neutron port-create
  --fixed-ip subnet_id=subnet-id,ip_address=192.168.57.101
  --nic-type=slow | fast | foobar
  net-id
nova boot --flavor m1.large --image image_id --nic port-id=port-id

Where nic-type is just an extra bit of metadata (a string) that is passed to 
nova and the VIF driver.


2) Expand PCI alias information

We need extensions to PCI alias so we can group SRIOV devices better.

I still think we are yet to agree on a format, but I would suggest this as a 
starting point:

{
 "name": "GPU_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},
  {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}
 ],
 "sriov_info": {}
}

{
 "name": "NIC_fast",
 "devices": [
  {"vendor_id": "1137", "product_id": "0071",

Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Sean Dague

On 01/12/2014 09:56 PM, Michael Krotscheck wrote:

If all you're looking for is a javascript-based in-browser templating
system, then handlebars is a fine choice. I'm not certain on how complex
status.html/status.js is, however if you expect it to grow to something
more like an application then perhaps looking at angular as a full
application framework might help you avoid both this growing pain and
future ones (alternatives: Ember, backbone, etc).


Honestly, I've not done enough large scale js projects to know whether 
we'd consider status.js to be big or not. I just know it's definitely 
getting too big for += all the html together and doing document.writes.


I guess the real question I had is: is there an incremental path towards 
any of the other frameworks? I can see how to incrementally bring in 
templates, but again my personal lack of experience on these others 
means I don't know.



Quick warning though, a lot of the javascript community out there uses
tooling that is built on top of Node.js, for which current official
packages for Centos/Ubuntu don't exist, and therefore infra won't
support it for openstack. Storyboard is able to get around this because
it's not actually part of openstack proper, but you might be forced to
manage your code manually. That's not a deal breaker in my opinion -
it's just more tedious (though I think it might be less tedious than
what you're doing right now).


I'd ideally like to be able to function without node, mostly because 
it's another development environment to have to manage. But I realize 
that's pushing against the current at this point. So I agree, not a deal 
breaker.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Sylvain Bauza

Le 13/01/2014 13:49, Thierry Carrez a écrit :

Sylvain Bauza wrote:

Le 27/12/2013 10:24, Nikolay Starodubtsev a écrit :

Hi all,
Guys, I want to say that keystoneclient package on pypi is too old.
For example it hadn't Client func in keystoneclient/client.py. May be
someone can help me with this?

Speaking of python-keystoneclient, the latest release is 0.4.1, which is
indeed pretty old (Havana release timeframe).
Any chance to get a fresher release soon ? The only solution as of now
is pointing to the master eggfile, which is really bad...

The solution is for the PTL to tag a new version, and then it will
appear on PyPI.



Thanks Thierry, my question was indeed when a new tag would be delivered 
(and consequently a package) ?


0.4.1 is 3 months old, and a lot of features have been implemented 
meanwhile :

https://github.com/openstack/python-keystoneclient/compare/0.4.1...master

In particular, Climate would use the keystoneclient.client.Client class 
for automatic discovery of the Keystone API, which is not part of the 
latest release.


-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate][cafe] Climate and Café teams meeting

2014-01-13 Thread Sylvain Bauza
I can propose anytime at 0600 UTC, I can make efforts for waking up earlier
:-)
I can yet understand that other EU people couldn't attend the call, so I
will aggregate and feedback all the topics for my French peers.

-Sylvain


2014/1/13 Nikolay Starodubtsev nstarodubt...@mirantis.com

 Hi, all!
  Guys, both our teams (Climate and Cafe) want to make a cross-team meeting
 to discuss our future plans. If you want to participate please tip us with
 good time for you.

 Useful links:
 * Climate https://launchpad.net/climate
 * Cafe https://wiki.openstack.org/wiki/Cafe




 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-13 Thread Daniel Kuffner
Hello,

I just had the same problem today with an openstack installation from
packstack. The problem was the firewall, so I had to open ports 9292 and 35357:

iptables -I INPUT -p tcp --dport 9292 -j ACCEPT
iptables -I INPUT -p tcp --dport 35357 -j ACCEPT

Daniel

On Thu, Jan 9, 2014 at 3:24 PM, Swapnil Kulkarni
swapnilkulkarni2...@gmail.com wrote:
 Hi Daniel,

 I think it was some proxy issue prohibiting the download. I downloaded
 the correct docker-registry.tar.gz and stack.sh completed, but some issues
 remain related to image availability. The docker push is getting an
 HTTP code 500 while uploading metadata: invalid character '' looking for
 beginning of value error. I did a search for the error and found [1] on
 ask.openstack, which states a firewall could be an issue, which I guess is
 not the case in my situation. Any ideas?

 [1] https://ask.openstack.org/en/question/8320/docker-push-error/

 Best Regards,
 Swapnil


 On Thu, Jan 9, 2014 at 7:37 PM, Daniel Kuffner daniel.kuff...@gmail.com
 wrote:

 The tar file seem to be corrupt, the tools script downloads it to:

 ./files/docker-registry.tar.gz

 On Thu, Jan 9, 2014 at 11:17 AM, Swapnil Kulkarni
 swapnilkulkarni2...@gmail.com wrote:
  Hi Daniel,
 
  I removed the existing images and executed
  ./tools/docker/install_docker.sh.
  I am facing new issue related to docker-registry,
 
  Error: exit status 2: tar: This does not look like a tar archive
 
  is the size 945 bytes correct for docker-registry image?
 
 
  Best Regards,
  Swapnil
 
  On Thu, Jan 9, 2014 at 1:35 PM, Daniel Kuffner
  daniel.kuff...@gmail.com
  wrote:
 
  Hi Swapnil,
 
  Looks like the docker-registry image is broken, since it cannot find
  run.sh inside the container.
 
   2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh
 
  Maybe you could try to remove and re-import image
 
 
  docker rmi docker-registry
 
  and then execute
 
  ./tools/docker/install_docker.sh
 
  again.
 
 
 
  On Thu, Jan 9, 2014 at 7:42 AM, Swapnil Kulkarni
  swapnilkulkarni2...@gmail.com wrote:
   Hi Eric,
  
   I tried running the 'docker run' command without -d and it gets
   following
   error
  
   $ sudo docker run -d=false -p 5042:5000 -e SETTINGS_FLAVOR=openstack
   -e
   OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
   OS_GLANCE_URL=http://127.0.0.1:9292 -e
   OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
   ./docker-registry/run.sh
   lxc-start: No such file or directory -
   stat(/proc/16438/root/dev//console)
   2014/01/09 06:36:15 Unable to locate ./docker-registry/run.sh
  
   On the other hand,
  
   If I run the failing command just after stack.sh fails with -d,  it
   works
   fine,
  
   sudo docker run -d -p 5042:5000 -e SETTINGS_FLAVOR=openstack -e
   OS_USERNAME=admin -e OS_PASSWORD=password -e OS_TENANT_NAME=admin -e
   OS_GLANCE_URL=http://127.0.0.1:9292 -e
   OS_AUTH_URL=http://127.0.0.1:35357/v2.0 docker-registry
   ./docker-registry/run.sh
   5b737f8d2282114c1a0cfc4f25bc7c9ef8c5da7e0d8fa7ed9ccee0be81cddafc
  
   Best Regards,
   Swapnil
  
  
   On Wed, Jan 8, 2014 at 8:29 PM, Eric Windisch e...@windisch.us
   wrote:
  
   On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni
   swapnilkulkarni2...@gmail.com wrote:
  
   Let me know in case I can be of any help getting this resolved.
  
  
   Please try running the failing 'docker run' command manually and
   without
   the '-d' argument. I've been able to reproduce  an error myself, but
   wish to
   confirm that this matches the error you're seeing.
  
   Regards,
   Eric Windisch
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Dolph Mathews
Ooh, I meant to get this done last week as I agree that keystoneclient
needed to see a new release, but it totally slipped my mind.

python-keystoneclient 0.4.2 is now available on pypi!

  https://pypi.python.org/pypi/python-keystoneclient/0.4.2

What's included in the milestone:

  https://launchpad.net/python-keystoneclient/+milestone/0.4.2



On Mon, Jan 13, 2014 at 7:17 AM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  Le 13/01/2014 13:49, Thierry Carrez a écrit :

 Sylvain Bauza wrote:

  Le 27/12/2013 10:24, Nikolay Starodubtsev a écrit :

  Hi all,
 Guys, I want to say that keystoneclient package on pypi is too old.
 For example it hadn't Client func in keystoneclient/client.py. May be
 someone can help me with this?

  Speaking of python-keystoneclient, the latest release is 0.4.1, which is
 indeed pretty old (Havana release timeframe).
 Any chance to get a fresher release soon ? The only solution as of now
 is pointing to the master eggfile, which is really bad...

  The solution is for the PTL to tag a new version, and then it will
 appear on PyPI.



 Thanks Thierry, my question was indeed when a new tag would be delivered
 (and consequently a package) ?

 0.4.1 is 3 months old, and a lot of features have been implemented
 meanwhile :
 https://github.com/openstack/python-keystoneclient/compare/0.4.1...master

 In particular, Climate would use the keystoneclient.client.Client class
 for automatic discovery of the Keystone API, which is not part of the
 latest release.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Sylvain Bauza

Le 13/01/2014 15:00, Dolph Mathews a écrit :
Ooh, I meant to get this done last week as I agree that keystoneclient 
needed to see a new release, but it totally slipped my mind.


python-keystoneclient 0.4.2 is now available on pypi!

https://pypi.python.org/pypi/python-keystoneclient/0.4.2

What's included in the milestone:

https://launchpad.net/python-keystoneclient/+milestone/0.4.2





Great! Thanks Dolph for being so responsive. Much appreciated :-)

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Dina Belova
Dolph, thank you so much :) You fixed lots of our problems :)


On Mon, Jan 13, 2014 at 6:00 PM, Dolph Mathews dolph.math...@gmail.comwrote:

 Ooh, I meant to get this done last week as I agree that keystoneclient
 needed to see a new release, but it totally slipped my mind.

 python-keystoneclient 0.4.2 is now available on pypi!

   https://pypi.python.org/pypi/python-keystoneclient/0.4.2

 What's included in the milestone:

   https://launchpad.net/python-keystoneclient/+milestone/0.4.2



 On Mon, Jan 13, 2014 at 7:17 AM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  Le 13/01/2014 13:49, Thierry Carrez a écrit :

 Sylvain Bauza wrote:

  Le 27/12/2013 10:24, Nikolay Starodubtsev a écrit :

  Hi all,
 Guys, I want to say that keystoneclient package on pypi is too old.
 For example it hadn't Client func in keystoneclient/client.py. May be
 someone can help me with this?

  Speaking of python-keystoneclient, the latest release is 0.4.1, which is
 indeed pretty old (Havana release timeframe).
 Any chance to get a fresher release soon ? The only solution as of now
 is pointing to the master eggfile, which is really bad...

  The solution is for the PTL to tag a new version, and then it will
 appear on PyPI.



 Thanks Thierry, my question was indeed when a new tag would be delivered
 (and consequently a package) ?

 0.4.1 is 3 months old, and a lot of features have been implemented
 meanwhile :
 https://github.com/openstack/python-keystoneclient/compare/0.4.1...master

 In particular, Climate would use the keystoneclient.client.Client class
 for automatic discovery of the Keystone API, which is not part of the
 latest release.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api schema validation pattern changes

2014-01-13 Thread Jay Pipes
On Sun, 2014-01-12 at 19:52 -0800, Christopher Yeoh wrote:
 On my phone so will be very brief but perhaps the extensions extension
 could publish the jsonschema(s) for the extension. I think the only
 complicating  factor would be where extensions extend extensions but I
 think it's all doable.

Am I the only one that sees the above statement as another indication of
why API extensions should eventually find their way into the dustbin of
OpenStack history?

-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] old keystone-client package on pypi

2014-01-13 Thread Nikolay Starodubtsev
Thank you, Dolph! That's great!



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014/1/13 Dina Belova dbel...@mirantis.com

 Dolph, thank you so much :) You fixed lots of our problems :)


 On Mon, Jan 13, 2014 at 6:00 PM, Dolph Mathews dolph.math...@gmail.comwrote:

 Ooh, I meant to get this done last week as I agree that keystoneclient
 needed to see a new release, but it totally slipped my mind.

 python-keystoneclient 0.4.2 is now available on pypi!

   https://pypi.python.org/pypi/python-keystoneclient/0.4.2

 What's included in the milestone:

   https://launchpad.net/python-keystoneclient/+milestone/0.4.2



 On Mon, Jan 13, 2014 at 7:17 AM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  Le 13/01/2014 13:49, Thierry Carrez a écrit :

 Sylvain Bauza wrote:

  Le 27/12/2013 10:24, Nikolay Starodubtsev a écrit :

  Hi all,
 Guys, I want to say that keystoneclient package on pypi is too old.
 For example it hadn't Client func in keystoneclient/client.py. May be
 someone can help me with this?

  Speaking of python-keystoneclient, the latest release is 0.4.1, which is
 indeed pretty old (Havana release timeframe).
 Any chance to get a fresher release soon ? The only solution as of now
 is pointing to the master eggfile, which is really bad...

  The solution is for the PTL to tag a new version, and then it will
 appear on PyPI.



 Thanks Thierry, my question was indeed when a new tag would be delivered
 (and consequently a package) ?

 0.4.1 is 3 months old, and a lot of features have been implemented
 meanwhile :
 https://github.com/openstack/python-keystoneclient/compare/0.4.1...master

 In particular, Climate would use the keystoneclient.client.Client class
 for automatic discovery of the Keystone API, which is not part of the
 latest release.

 -Sylvain


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The extra_resource in compute node object

2014-01-13 Thread Dan Smith
 This patch set makes the extra_resources a list of objects, instead of an
 opaque json string. What do you think about that?

Sounds better to me, I'll go have a look.

 However, the compute resource object is different from the current
 NovaObject: a) it has no corresponding table, just a field in
 another table, and I assume it will have no save/update functions; b)
 it defines functions for the object like alloc/free etc. Not sure
 if this is the correct direction.

Having a NovaObject that isn't backed by a conventional SQLAlchemy model
is fine with me, FWIW.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api schema validation pattern changes

2014-01-13 Thread Ryan Petrello
Jay, I’ll +1 to that.  As I’ve been tinkering with potential Pecan support for 
the Nova API, I’ve run into the same issue w/ the API composition being 
seriously complicated (to the point where I’ve realized Pecan isn’t the hard 
part, it’s the bunch of cruft you need to tie in Pecan/Routes/Falcon/fill in 
WSGI framework here).

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Jan 13, 2014, at 9:23 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, 2014-01-12 at 19:52 -0800, Christopher Yeoh wrote:
 On my phone so will be very brief but perhaps the extensions extension
 could publish the jsonschema(s) for the extension. I think the only
 complicating  factor would be where extensions extend extensions but I
 think it's all doable.
 
 Am I the only one that sees the above statement as another indication of
 why API extensions should eventually find their way into the dustbin of
 OpenStack history?
 
 -jay
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Sean Dague
I know we've been here before, but I want to raise this again while 
there is still time left in icehouse.


I would like to propose that the Nova v3 API removes the XML payload 
entirely. It adds complexity to the Nova code, and it requires 
duplicating all our test resources, because we need to do everything 
once for JSON and once for XML. Even worse, the dual payload strategy 
that nova employed leaked out to a lot of other projects, so they now 
think maintaining 2 payloads is a good thing (which I would argue it is 
not).


As we started talking about reducing tempest concurrency in the gate, I 
was starting to think a lot about what we could shed that would let us 
keep up a high level of testing, but bring our overall time back down. 
The fact that Nova provides an extremely wide testing surface makes this 
challenging.


I think it would be a much better situation if the Nova API is a single 
payload type. The work on the jsonschema validation is also something 
where I think we could get to a fully discoverable API, which would be huge.
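
(For illustration only, a minimal sketch of the kind of schema-driven
validation being discussed -- the schema below is invented for this example
and is not one of Nova's real definitions:)

import jsonschema

# Invented request-body schema: a required name string plus an optional flag.
create_thing = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string', 'minLength': 1},
        'force': {'type': 'boolean'},
    },
    'required': ['name'],
    'additionalProperties': False,
}

# A valid body passes silently...
jsonschema.validate({'name': 'vm-1'}, create_thing)

# ...and a bad one raises ValidationError, which is what would let tools
# generate negative tests directly from published schemas.
try:
    jsonschema.validate({'name': 42}, create_thing)
except jsonschema.ValidationError as exc:
    print(exc.message)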


If we never ship v3 API with XML as stable, we can deprecate it 
entirely, and let it die with v2 ( probably a year out ).


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Patches to sync gantt up to the current nova tree

2014-01-13 Thread Russell Bryant
On 01/12/2014 05:30 PM, Dugger, Donald D wrote:
 So I have 25 patches that I need to push to backport changes that have
 been made to the nova tree that apply to the gantt tree.  The problem is
 how do we want to approve these patches?  Given that they have already
 been reviewed and approved in the nova tree do we have to go through the
 overhead of doing new reviews in the gantt tree and, if not, how do we
 bypass that mechanism?

For sync commits, how about we just allow +A with a single +2.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Cold Migration] What about enable cold migration with target host

2014-01-13 Thread Russell Bryant
On 01/13/2014 03:16 AM, Jay Lau wrote:
 Greetings,
 
 Now cold migration do not support migrate a VM instance with target
 host, what about add this feature to enable cold migration with a target
 host?

Sounds reasonable to me.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Davanum Srinivas
+1 to drop XML from v3 API

On Mon, Jan 13, 2014 at 9:38 AM, Sean Dague s...@dague.net wrote:
 I know we've been here before, but I want to raise this again while there is
 still time left in icehouse.

 I would like to propose that the Nova v3 API removes the XML payload
 entirely. It adds complexity to the Nova code, and it requires duplicating
 all our test resources, because we need to do everything onces for JSON and
 once for XML. Even worse, the dual payload strategy that nova employed
 leaked out to a lot of other projects, so they now think maintaining 2
 payloads is a good thing (which I would argue it is not).

 As we started talking about reducing tempest concurrency in the gate, I was
 starting to think a lot about what we could shed that would let us keep up a
 high level of testing, but bring our overall time back down. The fact that
 Nova provides an extremely wide testing surface makes this challenging.

 I think it would be a much better situation if the Nova API is a single
 payload type. The work on the jsonschema validation is also something where
 I think we could get to a fully discoverable API, which would be huge.

 If we never ship v3 API with XML as stable, we can deprecate it entirely,
 and let it die with v2 ( probably a year out ).

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Walls, Jeffrey Joel (Cloud OS RD)
 From: Jaromir Coufal [mailto:jcou...@redhat.com]
 On 2014/10/01 19:02, Dougal Matthews wrote:


  - If I remove some instances, do I as the administrator need to care
  which are removed? Do we need to choose or be informed at the end?
 This is a great question on which we have had long debates. I am convinced that I, as
 administrator, do care which nodes I want to free up.
 
 But current TripleO approach is using heat template and there we can just
 specify number of nodes of that specific role. So it means that I decrease 
 from
 10 to 9 instances and app will take care for us for some node to be removed
 (AFAIK heat removes the last added node).
 
 So what we can do at the moment (until there is some way to specify which
 node to remove) is to inform user, which nodes were removed in the end... at
 least.
 
 In the future, I'd like to enable user to have both ways available - just 
 decrease
 number and let system to decide which nodes are going to be removed for him
 (but at least inform in advance which nodes are the chosen ones). Or, let 
 user to
 choose by himself.

Should a defect be filed against Heat then?  If I have a system that is 
currently running
my app server and heat comes along and deprovisions it (simply because it 
happened
to be running on the system that was spun up last), I'm going to be quite upset.

Jeff

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Walls, Jeffrey Joel (Cloud OS RD)
 From: Jaromir Coufal [mailto:jcou...@redhat.com]
 On 2014/10/01 16:16, Walls, Jeffrey Joel (Cloud OS RD) wrote:

  For configuration, will it be possible to supply default values to these 
  and let
 the user change them only if they want to?  For some values it's probably not
 possible, but for others it will be.  The fewer things the user has to enter 
 the
 better.
 I completely agree here. If we can provide default values, we should set it 
 up.
 Unfortunately for the fields which are listed in wireframes, I think we can
 estimate them. Anyway, I guess that the whole set of configurable items will
 change in time based on feedback.

Cool.  It would be great to have the ability for a service to not only specify 
which
attributes should be exposed to the tuskar ui, but also their default values
and/or relationships with other attributes.  Nothing crazy, but basic (and
reasonable) defaults help a user so much.

Jeff

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate][cafe] Climate and Café teams meeting

2014-01-13 Thread Sergey Lukjanov
Are there any guys from PST timezone?


On Mon, Jan 13, 2014 at 5:32 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 I can propose anytime at 0600 UTC, I can make efforts for waking up
 earlier :-)
 I can yet understand that other EU people couldn't attend the call, so I
 will aggregate and feedback all the topics for my French peers.

 -Sylvain


 2014/1/13 Nikolay Starodubtsev nstarodubt...@mirantis.com

 Hi, all!
  Guys, both our teams (Climate and Cafe) want to make a cross-team
 meeting to discuss our future plans. If you want to participate please tip
 us with good time for you.

 Useful links:
 * Climate https://launchpad.net/climate
 * Cafe https://wiki.openstack.org/wiki/Cafe




 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Greg Hill
I'm not really an active nova contributor as of yet, but I'll +1 this if nova's 
XML support is anything like what I see in trove (which I believe just cloned 
how nova did it in the first place).  XML without a schema is terrible for a 
serialization format.  In my experience, the only people who actually use XML 
really want SOAP or XMLRPC (mostly .net and Java developers), which both give 
mechanisms for defining the schema of the request/response so data types like 
arrays and dates and booleans are sane to work with.  Doing XML in a generic 
fashion never adequately deals with those problems and is then rarely, if ever, 
actually used.

Greg

On Jan 13, 2014, at 8:38 AM, Sean Dague s...@dague.net wrote:

 I know we've been here before, but I want to raise this again while there is 
 still time left in icehouse.
 
 I would like to propose that the Nova v3 API removes the XML payload 
 entirely. It adds complexity to the Nova code, and it requires duplicating 
 all our test resources, because we need to do everything onces for JSON and 
 once for XML. Even worse, the dual payload strategy that nova employed leaked 
 out to a lot of other projects, so they now think maintaining 2 payloads is a 
 good thing (which I would argue it is not).
 
 As we started talking about reducing tempest concurrency in the gate, I was 
 starting to think a lot about what we could shed that would let us keep up a 
 high level of testing, but bring our overall time back down. The fact that 
 Nova provides an extremely wide testing surface makes this challenging.
 
 I think it would be a much better situation if the Nova API is a single 
 payload type. The work on the jsonschema validation is also something where I 
 think we could get to a fully discoverable API, which would be huge.
 
 If we never ship v3 API with XML as stable, we can deprecate it entirely, and 
 let it die with v2 ( probably a year out ).
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][Cold Migration] What about enable cold migration with target host

2014-01-13 Thread Jay Lau
Thanks Russell, will add this to the V3 API and leave the V2 API as it is.

Regards,

Jay


2014/1/13 Russell Bryant rbry...@redhat.com

 On 01/13/2014 03:16 AM, Jay Lau wrote:
  Greetings,
 
  Now cold migration do not support migrate a VM instance with target
  host, what about add this feature to enable cold migration with a target
  host?

 Sounds reasonable to me.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Doug Hellmann
On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.org wrote:

 On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


 I was supposed to fix oslo.processutils.ssh with this class, but it may
 be fixed without it; not a big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


 Fabric is too much for just command execution on a remote server. Spur
 seems like a good choice for this.


 I'd go with Fabric. It supports several remote server operations, file
 upload/download among them. We could just import the methods we are
 interested in. It in turn uses paramiko, supporting most ssh client options.
 If we begin using fabric for file upload/download, it'll open the door for more
 remote server operations. Bringing in fabric as part of oslo will be cool.


Where are we doing those sorts of operations?

Doug




 A quick demo script to upload/download files using fabric.

 from fabric.api import get, put, run, settings

 remote_host = 'localhost'

 with settings(host_string=remote_host):
     # what is the remote hostname?
     run('hostname -f')
     # download /etc/hosts file
     get('/etc/hosts')
     # upload /etc/bashrc to /tmp directory
     put('/etc/bashrc', '/tmp/bashrc')

 The output may look like:

 [localhost] run: hostname -f
 [localhost] out: rainbow.local
 [localhost] out:

 [localhost] download: /Users/bhuvan/localhost/hosts - /etc/hosts
 [localhost] put: /etc/bashrc - /tmp/bashrc

 --
 Regards,
 Bhuvan Arumugam
 www.livecipher.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Sergey Lukjanov
Personally, I think that it's a great step now to move this code to the
templates. As for the huge frameworks - I prefer something like Angular.JS
or Knockout.JS.

Currently, the status.js file isn't so big that it needs rewriting as a real-life
web app, and so we could just add templates to make it much more readable and
improvable.


On Mon, Jan 13, 2014 at 5:05 PM, Sean Dague s...@dague.net wrote:

 On 01/12/2014 09:56 PM, Michael Krotscheck wrote:

 If all you're looking for is a javascript-based in-browser templating
 system, then handlebars is a fine choice. I'm not certain on how complex
 status.html/status.js is, however if you expect it to grow to something
 more like an application then perhaps looking at angular as a full
 application framework might help you avoid both this growing pain and
 future ones (alternatives: Ember, backbone, etc).


 Honestly, I've not done enough large scale js projects to know whether
 we'd consider status.js to be big or not. I just know it's definitely
 getting too big for += all the html together and doing document.writes.

 I guess the real question I had is is there an incremental path towards
 any of the other frameworks? I can see how to incrementally bring in
 templates, but again my personal lack of experience on these others means I
 don't know.


  Quick warning though, a lot of the javascript community out there uses
 tooling that is built on top of Node.js, for which current official
 packages for Centos/Ubuntu don't exist, and therefore infra won't
 support it for openstack. Storyboard is able to get around this because
 it's not actually part of openstack proper, but you might be forced to
 manage your code manually. That's not a deal breaker in my opinion -
 it's just more tedious (though I think it might be less tedious than
 what you're doing right now).


 I'd ideally like to be able to function without node, mostly because it's
 another development environment to have to manager. But I realize that's
 pushing against the current at this point. So I agree, not a deal breaker.


 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-13 Thread Murray, Paul (HP Cloud Services)
Hi Dan,

I was actually thinking of changes to the list itself rather than the objects 
in the list. To try and be clear, I actually mean the following:

ObjectListBase has a field called objects that is typed 
fields.ListOfObjectsField('NovaObject'). I can see methods for count and index, 
and I guess you are talking about adding an 'are any of your contents changed' 
method here. I don't see other list operations (like append, insert, remove, 
pop) that modify the list. If these were included, they would have to mark the 
list as changed so it is picked up when looking for changes. 

Do you see these belonging here or would you expect those to go in a sub-class 
if they were wanted?

Paul.
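
(Purely as a toy sketch of the idea above -- this is not ObjectListBase, just
a plain class whose mutating operations flag the container as changed; the
method names are invented:)

class TrackedList(object):
    # Toy sketch, not nova code: mutating operations mark the list as changed.
    def __init__(self, items=None):
        self._items = list(items or [])
        self._changed = False

    def append(self, item):
        self._items.append(item)
        self._changed = True

    def remove(self, item):
        self._items.remove(item)
        self._changed = True

    def list_changed(self):
        # Has the list itself (membership/order) been modified?
        return self._changed

    def contents_changed(self):
        # "Are any of your contents changed?" -- assumes each item exposes
        # an is_changed() method; that name is invented for this sketch.
        return any(item.is_changed() for item in self._items)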

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 10 January 2014 16:22
To: Murray, Paul (HP Cloud Services); Wang, Shane; OpenStack Development 
Mailing List (not for usage questions)
Cc: Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 Sounds good to me. The list base objects don't have methods to make changes 
 to the list - so it would be a case of iterating looking at each object in 
 the list. That would be ok. 

Hmm? You mean for NovaObjects that are lists? I hesitate to expose lists as 
changed when one of the objects inside has changed because I think that sends 
the wrong message. However, I think it makes sense to have a different method 
on lists for are any of your contents changed?

I'll cook up a patch to implement what I'm talking about so you can take a look.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-13 Thread Doug Hellmann
On Mon, Jan 13, 2014 at 3:29 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.comwrote:

 Hi Doug,

 2014/1/11 Doug Hellmann doug.hellm...@dreamhost.com:
  On Thu, Jan 9, 2014 at 12:02 AM, Jamie Lennox jamielen...@redhat.com
  wrote:
 
  Is there any way to have WSME pass through arbitrary attributes to the
  created object? There is nothing that i can see in the documentation or
 code
  that would seem to support this.
 
  In keystone we have the situation where arbitrary data was able to be
  attached to our resources. For example there are a certain number of
  predefined attributes for a user including name, email but if you want
 to
  include an address you just add an 'address': 'value' to the resource
  creation and it will be saved and returned to you when you request the
  resource.
 
  Ignoring whether this is a good idea or not (it's done), is the option
  there that i missed - or is there any plans/way to support something
 like
  this?
 
 
  There's a change in WSME trunk (I don't think we've released it yet) that
  allows the schema for a type to be changed after the class is defined.
 There
  isn't any facility for allowing the caller to pass arbitrary data,
 though.
  Part of the point of WSME is to define the inputs and outputs of the API
 for
  validation.

 Is there a plan to release new WSME which includes new type classes?
 I'd like to try applying these classes to Ceilometer after the release
 because
 Ceilometer is the best for showing these classes' usage.


If you mean the feature I mentioned above, we will release it but I don't
think it needs to be used in ceilometer. We designed that API so it doesn't
change when plugins are installed. The feature was added for nova's
requirements, since the types of the message payloads aren't known until
all of the extensions are loaded.

Doug





 Thanks
 Ken'ichi Ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-13 Thread Doug Hellmann
On Sun, Jan 12, 2014 at 6:33 PM, Jamie Lennox jamielen...@redhat.comwrote:

 On Fri, 2014-01-10 at 10:23 -0500, Doug Hellmann wrote:
 
 
 
 
  On Thu, Jan 9, 2014 at 12:02 AM, Jamie Lennox jamielen...@redhat.com
  wrote:
  Is there any way to have WSME pass through arbitrary
  attributes to the created object? There is nothing that i can
  see in the documentation or code that would seem to support
  this.
 
  In keystone we have the situation where arbitrary data was
  able to be attached to our resources. For example there are a
  certain number of predefined attributes for a user including
  name, email but if you want to include an address you just add
  an 'address': 'value' to the resource creation and it will be
  saved and returned to you when you request the resource.
 
  Ignoring whether this is a good idea or not (it's done), is
  the option there that i missed - or is there any plans/way to
  support something like this?
 
 
  There's a change in WSME trunk (I don't think we've released it yet)
  that allows the schema for a type to be changed after the class is
  defined. There isn't any facility for allowing the caller to pass
  arbitrary data, though. Part of the point of WSME is to define the
  inputs and outputs of the API for validation.
 
 
  How are the arbitrary values being stored in keystone? What sorts of
  things can be done with them? Can an API caller query them, for
  example?
 
 
  Doug

 So you can't query based on these arbitrary values but they are there as
 part of the resource. We have generic middleware that interprets the
 incoming json or xml to python dictionaries. Then we extract the
 queryable information for storing into database cells. In the case of
 User these are: id, name, password, enabled, domain_id,
 default_project_id. Everything else in the dictionary is stored in an
 'extra' column in the database as a JSON dictionary. When we reconstruct
 the User object we recreate the extra dictionary and update it with the
 known attributes.

 So there is no restriction on types or depth of objects, and whilst you
 can't query from those attributes they will always be present if you get
 or list the user.

 Note that User is probably a bad example in this because of LDAP and
 other backends but the idea is the same for all keystone resources.


 So I don't think that changing the WSME type after definition is useful
 in this case. Is it the sort of thing that would be possible or accepted
 to add to WSME?


 From the little bit of looking i've done it appears that WSME loops over
 the defined attributes of the class and extracts those from the message
 rather than looping over keys in the message which makes this more
 difficult. Can WSME be made to decode all values in a purely python
 primitive way (eg don't decode dates, objects etc, just give python
 dictionaries like from a json.loads)?


WSME asserts that APIs should be well and clearly defined, so that callers
of the API can understand what they are supposed to (or required to) pass
in, and what they will receive as a response. Accepting arbitrary data goes
somewhat against this design goal.
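
(As a small sketch of what that looks like in practice -- a WSME-style type
with every accepted attribute declared up front; this is illustrative only,
not keystone's or ceilometer's actual code:)

from wsme import types as wtypes

class User(wtypes.Base):
    # Every attribute the API accepts is declared explicitly.
    name = wtypes.text
    email = wtypes.text
    enabled = bool

# With a definition like this, WSME knows exactly which keys to pull out of
# the request body; an undeclared key such as 'address' has nowhere to go,
# which is exactly the mismatch described in this thread.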

I would prefer not to have keystone using yet another framework from the
 rest of openstack, but should i just be looking to use jsonschema or
 something instead?


What requirement(s) led to keystone supporting this feature?

Doug




 Jamie

 
 
 
  Thanks,
 
  Jamie
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Greg Hill
If you're just using it for client-side templates, you should be able to treat 
it like any other js library (jquery, etc) without using npm (node's package 
manager) for installation.  Handlebars, for example, has a single downloadable 
js file that is available on their website:

http://builds.handlebarsjs.com.s3.amazonaws.com/handlebars-v1.3.0.js

I'm coming in to the conversation without a lot of context, though, so I might 
be missing some reason why that won't work.

As for the incremental approach to using one of the larger frameworks, 
templates definitely do seem like an incremental improvement that won't really 
hinder adoption of the larger framework, since most of them are pluggable to 
work with most of the major template engines last I checked.  

Greg

On Jan 13, 2014, at 7:05 AM, Sean Dague s...@dague.net wrote:

 On 01/12/2014 09:56 PM, Michael Krotscheck wrote:
 If all you're looking for is a javascript-based in-browser templating
 system, then handlebars is a fine choice. I'm not certain on how complex
 status.html/status.js is, however if you expect it to grow to something
 more like an application then perhaps looking at angular as a full
 application framework might help you avoid both this growing pain and
 future ones (alternatives: Ember, backbone, etc).
 
 Honestly, I've not done enough large scale js projects to know whether we'd 
 consider status.js to be big or not. I just know it's definitely getting too 
 big for += all the html together and doing document.writes.
 
 I guess the real question I had is is there an incremental path towards any 
 of the other frameworks? I can see how to incrementally bring in templates, 
 but again my personal lack of experience on these others means I don't know.
 
 Quick warning though, a lot of the javascript community out there uses
 tooling that is built on top of Node.js, for which current official
 packages for Centos/Ubuntu don't exist, and therefore infra won't
 support it for openstack. Storyboard is able to get around this because
 it's not actually part of openstack proper, but you might be forced to
 manage your code manually. That's not a deal breaker in my opinion -
 it's just more tedious (though I think it might be less tedious than
 what you're doing right now).
 
 I'd ideally like to be able to function without node, mostly because it's 
 another development environment to have to manager. But I realize that's 
 pushing against the current at this point. So I agree, not a deal breaker.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Russell Bryant
On 01/13/2014 09:38 AM, Sean Dague wrote:
 I know we've been here before, but I want to raise this again while
 there is still time left in icehouse.
 
 I would like to propose that the Nova v3 API removes the XML payload
 entirely. It adds complexity to the Nova code, and it requires
 duplicating all our test resources, because we need to do everything
 onces for JSON and once for XML. Even worse, the dual payload strategy
 that nova employed leaked out to a lot of other projects, so they now
 think maintaining 2 payloads is a good thing (which I would argue it is
 not).
 
 As we started talking about reducing tempest concurrency in the gate, I
 was starting to think a lot about what we could shed that would let us
 keep up a high level of testing, but bring our overall time back down.
 The fact that Nova provides an extremely wide testing surface makes this
 challenging.
 
 I think it would be a much better situation if the Nova API is a single
 payload type. The work on the jsonschema validation is also something
 where I think we could get to a fully discoverable API, which would be
 huge.
 
 If we never ship v3 API with XML as stable, we can deprecate it
 entirely, and let it die with v2 ( probably a year out ).

Can you also pose this question on the main openstack@ list?  I'd like
to cast a wider net on this question.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api schema validation pattern changes

2014-01-13 Thread Jay Pipes
On Mon, 2014-01-13 at 09:40 -0500, Ryan Petrello wrote:
 Jay, I’ll +1 to that.  As I’ve been tinkering with potential Pecan support 
 for the Nova API, I’ve run into the same issue w/ the API composition being 
 seriously complicated (to the point where I’ve realized Pecan isn’t the hard 
 part, it’s the bunch of cruft you need to tie in Pecan/Routes/Falcon/fill in 
 WSGI framework here).

Indeed. And don't get me wrong... I'm not at all saying that Chris is to
blame for anything (or any one person in particular). Just pointing out
that, IMO, the usefulness of API extensions is outweighed by the added
complexity and churn that they bring with them.

-jay

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 
 On Jan 13, 2014, at 9:23 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On Sun, 2014-01-12 at 19:52 -0800, Christopher Yeoh wrote:
  On my phone so will be very brief but perhaps the extensions extension
  could publish the jsonschema(s) for the extension. I think the only
  complicating  factor would be where extensions extend extensions but I
  think it's all doable.
  
  Am I the only one that sees the above statement as another indication of
  why API extensions should eventually find their way into the dustbin of
  OpenStack history?
  
  -jay
  
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-13 Thread Jay Dobies

Excellent write up Jay.

I don't actually know the answer. I'm not 100% bought into the idea that 
Tuskar isn't going to store any information about the deployment and 
will rely entirely on Heat/Ironic as the data store there. Losing this 
extra physical information may be a strong reason why we need to capture 
additional data beyond what is or will be utilized by Ironic.


For now, I think the answer is that this is the first pass for Icehouse. 
We're still a ways off from being able to do what you described 
regardless of where the model lives. There are ideas around how to 
partition things as you're suggesting (configuring profiles for the 
nodes; I forget the exact term but there was a big thread about manual 
v. automatic node allocation that had an idea) but there's nothing in 
the wireframes to account for it yet.


So not a very helpful reply on my part :) But your feedback was 
described well which will help keep those concerns in mind post-Icehouse.



Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that tooling will be used in enterprise deployments is diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Monty Taylor

On 01/13/2014 05:05 AM, Sean Dague wrote:

On 01/12/2014 09:56 PM, Michael Krotscheck wrote:

If all you're looking for is a javascript-based in-browser templating
system, then handlebars is a fine choice. I'm not certain on how complex
status.html/status.js is, however if you expect it to grow to something
more like an application then perhaps looking at angular as a full
application framework might help you avoid both this growing pain and
future ones (alternatives: Ember, backbone, etc).


Honestly, I've not done enough large scale js projects to know whether
we'd consider status.js to be big or not. I just know it's definitely
getting too big for += all the html together and doing document.writes.

I guess the real question I had is is there an incremental path towards
any of the other frameworks? I can see how to incrementally bring in
templates, but again my personal lack of experience on these others
means I don't know.


Quick warning though, a lot of the javascript community out there uses
tooling that is built on top of Node.js, for which current official
packages for Centos/Ubuntu don't exist, and therefore infra won't
support it for openstack. Storyboard is able to get around this because
it's not actually part of openstack proper, but you might be forced to
manage your code manually. That's not a deal breaker in my opinion -
it's just more tedious (though I think it might be less tedious than
what you're doing right now).


I'd ideally like to be able to function without node, mostly because
it's another development environment to have to manager. But I realize
that's pushing against the current at this point. So I agree, not a deal
breaker.


Yeah - as a quick note though, just for clarity - this is only talking 
about node as a dev/build time depend - not a runtime depend.


I think, given that we seem to be doing more and more with javascript, 
that we should probably just bite the bullet and learn the toolchain - I'm 
starting to feel that doing all the js stuff without it is like the crazy 
python people who refuse to touch pip for some reason.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [rfc] drop XML from v3 API entirely

2014-01-13 Thread Sean Dague
On 01/13/2014 10:13 AM, Russell Bryant wrote:
 On 01/13/2014 09:38 AM, Sean Dague wrote:
 I know we've been here before, but I want to raise this again while
 there is still time left in icehouse.

 I would like to propose that the Nova v3 API removes the XML payload
 entirely. It adds complexity to the Nova code, and it requires
 duplicating all our test resources, because we need to do everything
 once for JSON and once for XML. Even worse, the dual payload strategy
 that nova employed leaked out to a lot of other projects, so they now
 think maintaining 2 payloads is a good thing (which I would argue it is
 not).

 As we started talking about reducing tempest concurrency in the gate, I
 was starting to think a lot about what we could shed that would let us
 keep up a high level of testing, but bring our overall time back down.
 The fact that Nova provides an extremely wide testing surface makes this
 challenging.

 I think it would be a much better situation if the Nova API is a single
 payload type. The work on the jsonschema validation is also something
 where I think we could get to a fully discoverable API, which would be
 huge.

 If we never ship v3 API with XML as stable, we can deprecate it
 entirely, and let it die with v2 ( probably a year out ).
 
 Can you also pose this question on the main openstack@ list?  I'd like
 to cast a wider net on this question.
 
 Thanks,

Absolutely.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net
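For readers following the jsonschema discussion above, a minimal illustration
of the kind of schema-driven validation being referred to (the schema and body
below are invented for this example, not Nova's actual API definitions):

    # illustrative only -- not Nova's real schema or API
    import jsonschema

    server_create_schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1},
            'flavorRef': {'type': 'string'},
        },
        'required': ['name', 'flavorRef'],
        'additionalProperties': False,
    }

    # a valid body passes silently; a bad one raises jsonschema.ValidationError
    jsonschema.validate({'name': 'vm1', 'flavorRef': '1'}, server_create_schema)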



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Jay Pipes
On Mon, 2014-01-13 at 10:23 +, Stephen Gran wrote:
 Hi,
 
 I don't think that's what's being asked for. Just that there be more 
 than the current check for '(isowner of network) or (shared)'
 
 If the data point could be 'enabled for network' for a given tenant, 
 that would be more flexible.

Agreed, but I believe Mathieu is thinking more in terms of how such a
check could be implemented. What makes this problematic (at least in my
simplistic understanding of Neutron wiring) is that there is no
guarantee that tenant A's subnet does not overlap with tenant B's
subnet. Because Neutron allows overlapping subnets (since Neutron uses
network namespaces for isolating traffic), code would need to be put in
place that says, basically, if this network is shared between tenants,
then do not allow overlapping subnets, since a single, shared network
namespace will be needed that routes traffic between the tenants.

Or at least, that's what I *think* is part of the problem...
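A minimal sketch of the kind of overlap check described above, using Python's
standard ipaddress module (illustrative only, not Neutron's actual code):

    # illustrative sketch only -- not Neutron code
    import ipaddress

    def would_overlap(existing_cidrs, new_cidr):
        """Return True if new_cidr overlaps any subnet already on the network."""
        new_net = ipaddress.ip_network(new_cidr)
        return any(new_net.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs)

    # e.g. on a shared network that already has 10.0.0.0/24, a second tenant
    # asking for 10.0.0.128/25 would be rejected:
    would_overlap(['10.0.0.0/24'], '10.0.0.128/25')   # -> True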

Best,
-jay

 On 13/01/14 10:06, Mathieu Rohon wrote:
  Hi,
 
  This is something that we potentially could implement during the
  implementation of the isolated-network bp [1]
  Basically, on an isolated network, an ARP responder will respond to
  ARP requests. For an L2 network which is totally isolated, the ARP
  responder will only respond to ARP requests for the gateway; other
  broadcast requests will be dropped (except for DHCP requests).
 
  We could enhance this feature to populate the ARP responder so that if
  tenant A and tenant B want to be able to communicate on this shared
  and isolated network, the ARP responder for the VM of tenant A will be
  populated with the MAC address of the VM of tenant B, and vice versa.
 
  [1] https://blueprints.launchpad.net/neutron/+spec/isolated-network
 
  On Fri, Jan 10, 2014 at 10:00 PM, Jay Pipes jaypi...@gmail.com wrote:
  On Fri, 2014-01-10 at 17:06 +, CARVER, PAUL wrote:
  If anyone is giving any thought to networks that are available to
  multiple tenants (controlled by a configurable list of tenants) but
  not visible to all tenants I’d like to hear about it.
 
  I’m especially thinking of scenarios where specific networks exist
  outside of OpenStack and have specific purposes and rules for who can
  deploy servers on them. We’d like to enable the use of OpenStack to
  deploy to these sorts of networks but we can’t do that with the
  current “shared or not shared” binary choice.
 
  Hi Paul :) Please see here:
 
  https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07268.html
 
  for a similar discussion.
 
  best,
  -jay
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jay Dobies



On 01/13/2014 05:43 AM, Jaromir Coufal wrote:

On 2014/10/01 21:17, Jay Dobies wrote:

Another question:

- A Role (sounds like we're moving away from that so I'll call it
Resource Category) can have multiple Node Profiles defined (assuming I'm
interpreting the + and the tabs in the Create a Role wireframe
correctly). But I don't see anywhere where a profile is selected when
scaling the Resource Category. Is the idea behind the profiles that you
can select how much power you want to provide in addition to how many
nodes?


Yes, that is correct, Jay. I mentioned that in walkthrough and in
wireframes with the note More views needed (for deploying, scaling,
managing roles).

I would say there might be two approaches - one is to specify which node
profile you want to scale in order to select how much power you want to
add.

The other approach is just to scale the number of nodes in a role and
let the system decide the best match (which node profile is chosen will be
decided by the best fit, probably).

I lean towards the first approach, where you specify what role and which
node profile you want to use for scaling. However, this is just an
introduction of the idea and I believe we can get answers by the time we
get to that step.

Any preferences for one of above mentioned approaches?


I lean towards the former as well. See the Domain Model Locations thread 
and Jay Pipes' response for an admin's use case that backs it up.


A few weeks ago, there was the giant thread that turned into manual v. 
automatic allocation[1]. The conversation used as an example a system 
that was heavily geared towards disk IO being specifically used for the 
storage-related roles.


Where I'm going with this is that I'm not sure it'll be enough to simply 
use some values for a node profile. I think we're going to need some way 
of identifying nodes as having a particular set of characteristics 
(totally running out of words here) and then saying that the new 
allocation should come from that type of node.


That's a long way of saying that I think an explicit step to say more 
about what we're adding is not only necessary, but potentially 
invalidates some of the wireframes as they exist today. I think over 
time, that is going to be much more complex than incrementing some numbers.


Don't get me wrong. I fully appreciate that we're still very early on 
and scoped to Icehouse for now. Need to start somewhere :)



[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022163.html



-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jiří Stránský

On 13.1.2014 11:43, Jaromir Coufal wrote:

On 2014/10/01 21:17, Jay Dobies wrote:

Another question:

- A Role (sounds like we're moving away from that so I'll call it
Resource Category) can have multiple Node Profiles defined (assuming I'm
interpreting the + and the tabs in the Create a Role wireframe
correctly). But I don't see anywhere where a profile is selected when
scaling the Resource Category. Is the idea behind the profiles that you
can select how much power you want to provide in addition to how many
nodes?


Yes, that is correct, Jay. I mentioned that in walkthrough and in
wireframes with the note More views needed (for deploying, scaling,
managing roles).

I would say there might be two approaches - one is to specify which node
profile you want to scale in order to select how much power you want to add.

The other approach is just to scale the number of nodes in a role and
let the system decide the best match (which node profile is chosen will be
decided by the best fit, probably).


Hmm, I'm not sure I understand - what do you mean by best fit here? 
E.g. I have a 32 GB RAM profile and a 256 GB RAM profile in the compute role 
(and I have unused machines available for both profiles), and I increase 
the compute node count by 2. What do I best-fit against?


(Alternatively, if we want to support scaling a role using just one 
spinner, even though the role has more profiles, maybe we could pick the 
largest profile with unused nodes available?)




I lean towards the first approach, where you specify what role and which
node profile you want to use for scaling. However, this is just an
introduction of the idea and I believe we can get answers by the time we
get to that step.


+1. I think we'll want the first approach to be at least possible (maybe 
not the default). As a cloud operator, when I want to deploy 2 more compute 
nodes, I imagine there are situations when I do care whether I'll get an 
additional 64 GB or an additional 512 GB of capacity.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-13 Thread Dan Smith
 ObjectListBase has a field called objects that is typed
 fields.ListOfObjectsField('NovaObject'). I can see methods for count
 and index, and I guess you are talking about adding a method for are
 any of your contents changed here. I don't see other list operations
 (like append, insert, remove, pop) that modify the list. If these
 were included they would have to mark the list as changed so it is
 picked up when looking for changes.
 
 Do you see these belonging here or would you expect those to go in a
 sub-class if they were wanted?

Well, I've been trying to avoid implying the notion that a list of
things represents the content of the database. Meaning, I don't think it
makes sense for someone to get a list of Foo objects, add another Foo to
the list and then call save() on the list. I think that ends up with the
assumption that the list matches the contents of the database, and if I
add or remove things from the list, I can save() the contents to the
database atomically. That definitely isn't something we can or would
want to support.

That said, if we make the parent object consider the child to be dirty
if any of its contents are dirty or the list itself is dirty (i.e. the
list of objects has changed) that should give us the desired behavior
for change tracking, right?

--Dan
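A rough sketch of the dirty-child behaviour described above -- the list reports
itself changed if the list itself changed or any child has pending changes
(illustrative only, not the real NovaObject/ObjectListBase code):

    # illustrative sketch only -- not the real NovaObject/ObjectListBase code
    class FakeObject(object):
        def __init__(self):
            self._changed_fields = set()

        def obj_what_changed(self):
            return set(self._changed_fields)

    class FakeObjectList(FakeObject):
        def __init__(self, objects):
            super(FakeObjectList, self).__init__()
            self.objects = list(objects)

        def obj_what_changed(self):
            changes = super(FakeObjectList, self).obj_what_changed()
            # the parent is dirty if any child has pending changes
            if any(o.obj_what_changed() for o in self.objects):
                changes.add('objects')
            return changes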

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate][cafe] Climate and Café teams meeting

2014-01-13 Thread Dina Belova
Guys from the Cafe team are from New Zealand - UTC+13 :)


On Mon, Jan 13, 2014 at 6:58 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Are there any guys from PST timezone?


 On Mon, Jan 13, 2014 at 5:32 PM, Sylvain Bauza sylvain.ba...@gmail.comwrote:

 I can propose any time at 0600 UTC; I can make the effort to wake up
 earlier :-)
 I can also understand that other EU people couldn't attend the call, so I
 will aggregate and feed back all the topics to my French peers.

 -Sylvain


 2014/1/13 Nikolay Starodubtsev nstarodubt...@mirantis.com

 Hi, all!
  Guys, both our teams (Climate and Cafe) want to hold a cross-team
 meeting to discuss our future plans. If you want to participate, please
 let us know what time works well for you.

 Useful links:
 * Climate https://launchpad.net/climate
 * Cafe https://wiki.openstack.org/wiki/Cafe




 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Aggregation discussion

2014-01-13 Thread Jay Pipes
On Mon, 2014-01-13 at 13:39 +0400, Nadya Privalova wrote:
 Jay,
 
 
 Thanks for comments!
 
 The question you raised was discussed several times within Ceilometer
 team but as I understand there is no official resolution yet. 
 
 I agree with you that statistics' collection is the main Ceilometer's
 goal. But at the same time there should be a way to visualize the
 result of Ceilometer's work.
 
 My opinion is that the current set of data queries (get samples, get
 statistics) is the minimum set and it's ok to keep it as a part of
 Ceilometer's functionality. We need it at least for UI.

Yes, I've seen the (excellent) wireframes that Jarda and others have
been working on, and I can certainly see where the feature request comes
from.

 So, my proposal is to make these existing queries faster. Looks like
 our visions are the same :)

Certainly :)

 Pre-calculate when? :) During processing of samples, or during
 some
 periodic job?
 
 
 It should be a periodic job, right.

I'm actually not so sure. If you spread the work of updating the
aggregate into the (multi-processed) collectors, then updating the
aggregates becomes less resource-intensive.

It's the same idea behind putting a SQL trigger on a table in an RDBMS
that updates a rolling aggregate. Doing the update work one little piece
at a time can be more efficient than doing a large update on a periodic
job -- particularly when, as is the case here, the long-running periodic
update job would need to take locks on the base fact tables.
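A toy sketch of the update-a-little-at-a-time idea, where each collector call
folds a sample into a rolling aggregate instead of a periodic batch job
recomputing it (illustrative only, not Ceilometer code):

    # illustrative sketch only -- incremental (rolling) aggregate update
    class RollingStats(object):
        def __init__(self):
            self.count = 0
            self.total = 0.0
            self.min = None
            self.max = None

        def add_sample(self, value):
            # called by the collector for each incoming sample, so no
            # long-running periodic job or table-wide lock is needed
            self.count += 1
            self.total += value
            self.min = value if self.min is None else min(self.min, value)
            self.max = value if self.max is None else max(self.max, value)

        @property
        def avg(self):
            return self.total / self.count if self.count else 0.0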

 
 The term aggregate really just means a generic grouping or
 summarization. If you are looking for a term that represents
 the
 rules/heuristics for maintaining rolling calculations, perhaps
 the term
 report is better?
 
 
 Hmm, I think that 'aggregate' is ok. In my terminology an aggregate
  is a ready set of statistics for a concrete meter, period and query.
 Anyway, will think about it.

Sure, not a big deal. Was just throwing out ideas, really. Aggregate
is fine.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Nominate Chmouel Boudjnah for core team

2014-01-13 Thread Dean Troyer
Welcome to the DevStack core team Chmouel!

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Sergey Lukjanov
Just to give some context for this discussion, here are the two files that
we're speaking about:

https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/status.html
https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/status.js


On Mon, Jan 13, 2014 at 7:55 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Currently, we already have a simple status page in the zuul repo and a status
  page in infra/config; we should probably think about moving them to a
  separate repo and merging their functionality, and in that case it'll be easy
  to use any actual JS tools. Otherwise it won't be really straightforward
  to have an internal Node.js project in the mostly-Puppet infra/config or the
  Python zuul.


 On Mon, Jan 13, 2014 at 7:21 PM, Monty Taylor mord...@inaugust.comwrote:

 On 01/13/2014 05:05 AM, Sean Dague wrote:

 On 01/12/2014 09:56 PM, Michael Krotscheck wrote:

 If all you're looking for is a javascript-based in-browser templating
 system, then handlebars is a fine choice. I'm not certain on how complex
 status.html/status.js is, however if you expect it to grow to something
 more like an application then perhaps looking at angular as a full
 application framework might help you avoid both this growing pain and
 future ones (alternatives: Ember, backbone, etc).


 Honestly, I've not done enough large scale js projects to know whether
 we'd consider status.js to be big or not. I just know it's definitely
 getting too big for += all the html together and doing document.writes.

 I guess the real question I had is is there an incremental path towards
 any of the other frameworks? I can see how to incrementally bring in
 templates, but again my personal lack of experience on these others
 means I don't know.

  Quick warning though, a lot of the javascript community out there uses
 tooling that is built on top of Node.js, for which current official
 packages for Centos/Ubuntu don't exist, and therefore infra won't
 support it for openstack. Storyboard is able to get around this because
 it's not actually part of openstack proper, but you might be forced to
 manage your code manually. That's not a deal breaker in my opinion -
 it's just more tedious (though I think it might be less tedious than
 what you're doing right now).


 I'd ideally like to be able to function without node, mostly because
 it's another development environment to have to manage. But I realize
 that's pushing against the current at this point. So I agree, not a deal
 breaker.


 Yeah - as a quick note though, just for clarity - this is only talking
 about node as a dev/build time depend - not a runtime depend.

 I think, given that we seem to be doing more and more with javascript,
  that we probably should just bite the bullet and learn the toolchain - I'm
  starting to feel that doing all the js stuff without it is like the crazy
 python people who refuse to touch pip for some reason.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Reminder - new meeting time & day - Tuesday 1400UTC

2014-01-13 Thread Collins, Sean
In #openstack-meeting

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] top gate bugs: a plea for help

2014-01-13 Thread Russell Bryant
On 01/11/2014 09:57 AM, Russell Bryant wrote:
 5) https://review.openstack.org/#/c/65989/
 
 This patch isn't a candidate for merging, but was written to test the
 theory that by updating nova-network to use conductor instead of direct
 database access, nova-network will be able to do work in parallel better
 than it does today, just as we have observed with nova-compute.
 
 Dan's initial test results from this are **very** promising.  Initial
 testing showed a 20% speedup in runtime and a 33% decrease in CPU
 consumption by nova-network.
 
 Doing this properly will not be quick, but I'm hopeful that we can
 complete it by the Icehouse release.  We will need to convert
 nova-network to use Nova's object model.  Much of this work is starting
 to catch nova-network up on work that we've been doing in the rest of
 the tree but have passed on doing for nova-network due to nova-network
 being in a freeze.

I have filed a blueprint to track the completion of this work throughout
the rest of Icehouse.

https://blueprints.launchpad.net/nova/+spec/nova-network-objects

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Robert Li (baoli)
As I have responded in the other email, if I understand PCI flavors 
correctly, then the issue we need to deal with is overlapping flavors. The 
simplest case of this overlap is that you can define a flavor F1 as 
[vendor_id='v', product_id='p'] and a flavor F2 as [vendor_id='v']. Let's 
assume that only the admin can define the flavors. It's not hard to see that a 
device can belong to two different flavors at the same time. This 
introduces an issue in the scheduler. Suppose the scheduler (counts or stats 
based) maintains counts based on flavors (or the keys corresponding to the 
flavors). To request a device with flavor F1, the count for F2 needs to be 
decremented as well. There may be several ways to achieve that. But 
regardless, it introduces tremendous overhead in terms of system processing and 
administrative costs.
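To make the overlap concrete, a small illustration (the matching helper and
device values are hypothetical):

    # hypothetical illustration of two overlapping flavor definitions
    F1 = {'vendor_id': 'v', 'product_id': 'p'}
    F2 = {'vendor_id': 'v'}
    device = {'vendor_id': 'v', 'product_id': 'p', 'address': '0000:01:00.1'}

    def in_flavor(flavor, dev):
        return all(dev.get(k) == val for k, val in flavor.items())

    # the same device is counted under both flavors, so allocating it
    # for F1 must also decrement the F2 pool
    in_flavor(F1, device), in_flavor(F2, device)   # -> (True, True)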

What are the use cases for that? How practical are those use cases?

thanks,
Robert

On 1/10/14 9:34 PM, Ian Wells 
ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk wrote:



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason that it's not easy to find a flexible/distinguishable 
 change to pci_whitelist is because it combines two things. So a stupid/naive 
 solution in my head is, change it to VERY generic name, 
 ‘pci_devices_information’,

 and change schema as an array of {‘devices_property’=regex exp, ‘group_name’ 
 = ‘g1’} dictionary, and the device_property expression can be ‘address ==xxx, 
 vendor_id == xxx’ (i.e. similar with current white list),  and we can squeeze 
 more into the “pci_devices_information” in future, like ‘network_information’ 
 = xxx or “Neutron specific information” you required in previous mail.


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ "match": { "class": "Acme inc. discombobulator" },
  "info": { "group": "we like teh groups", "volume": 11 } }


 All keys other than ‘device_property’ become extra information, i.e. 
 software-defined properties. This extra information will be carried with the 
 PCI devices. Some implementation details: A) we can limit the acceptable 
 keys, like we only support ‘group_name’ and ‘network_id’, or we can accept any 
 keys other than the reserved ones (vendor_id, device_id etc).


Not sure we have a good list of reserved keys at the moment, and with two dicts 
it isn't really necessary, I guess.  I would say that we have one match parser 
which looks something like this:

# does this PCI device match the expression given?
import re

def match(expression, pci_details, extra_specs):
    for (k, v) in expression.items():
        if k.startswith('e.'):
            # 'e.'-prefixed keys match against the extra (software-defined) info
            mv = extra_specs.get(k[2:])
        else:
            mv = pci_details.get(k)
        # treat the expression value as a matching expression (regex here)
        if mv is None or not re.match(v, str(mv)):
            return False
    return True

Usable in this matching (where 'e.' just won't work) and also for flavor 
assignment (where e. will indeed match the extra values).
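For illustration only, an example of how such an expression might be evaluated
with the function above (the keys and values are hypothetical):

    # hypothetical example of calling the matcher above
    expression = {'vendor_id': '8086', 'e.group': 'gpu'}
    pci_details = {'vendor_id': '8086', 'product_id': '10fb'}
    extra_specs = {'group': 'gpu'}
    match(expression, pci_details, extra_specs)   # -> True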

 B) if a device matches ‘device_property’ in several entries, raise an exception, 
 or use the first one.

Use the first one, I think.  It's easier, and potentially more useful.

 [yjiang5_1] Another thing that needs to be discussed is, as you pointed out, “we would 
 need to add a config param on the control host to decide which flags to group 
 on when doing the stats”.  I agree with the design, but some details need to be 
 decided.

This is a patch that can come at any point after we do the above stuff (which 
we need for Neutron), clearly.

 Where should it be defined? If we a) define it in both the control node and compute 
 node, then it should be statically defined (just change pool_keys in 
 /opt/stack/nova/nova/pci/pci_stats.py to a configuration parameter). Or b) 
 define it only in the control node, then I assume the control node should be the 
 scheduler node, and the scheduler manager needs to save such information, present 
 an API to fetch it, and the compute node needs to fetch it on every 
 update_available_resource() periodic task. I’d prefer to take option a) as a 
 first step. Your idea?

I think it has to be (a), which is a shame.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-13 Thread Jay Dobies
I'm pulling this particular discussion point out of the Wireframes 
thread so it doesn't get lost in the replies.


= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old 
version, are they automatically/immediately updated? If not, how do we 
reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?


Replies:

I would expect any Role change to be applied immediately. If there is 
some change where I want to keep older nodes as they are set up and 
apply new settings only to newly added nodes, I would create a new Role.


We will probably have to store image metadata in Tuskar that would map 
to Glance once the image is generated. I would say we need to store the 
list of the elements and probably the commit hashes (because elements 
can change). Also it should be versioned, as the images in Glance will 
also be versioned.
We probably can't store it in Glance, because we will first store the 
metadata and then generate the image. Right?


Then we could see whether image was created from the metadata and 
whether that image was used in the heat-template. With versions we could 
also see what has changed.


But there was also an idea that there will be some generic image 
containing all services, and we would just configure which services to 
start. In that case we would need to version this as well.



= New Comments =

My comments on this train of thought:

- I'm afraid of the idea of applying changes immediately for the same 
reasons I'm worried about a few other things. Very little of what we do 
will actually finish executing immediately and will instead be long 
running operations. If I edit a few roles in a row, we're looking at a 
lot of outstanding operations executing against other OpenStack pieces 
(namely Heat).


The idea of immediately also suffers from a sort of "oh shit, that's not 
what I meant" moment when hitting save. There's no way for the user to review 
the larger picture before deciding to make it so.


- Also falling into this category is the image creation. This is not 
something that finishes immediately, so there's a period between when 
the resource category is saved and the new image exists.


If the image is immediately created, what happens if the user tries to 
change the resource category counts while it's still being generated? 
That question applies both if we automatically update existing nodes as 
well as if we don't and the user is just quick moving around the UI.


What do we do with old images from previous configurations of the 
resource category? If we don't clean them up, they can grow out of hand. 
If we automatically delete them when the new one is generated, what 
happens if there is an existing deployment in process and the image is 
deleted while it runs?


We need some sort of task tracking that prevents overlapping operations 
from executing at the same time. Tuskar needs to know what's happening 
instead of simply having the UI fire off into other OpenStack components 
when the user presses a button.


To rehash an earlier argument, this is why I advocate for having the 
business logic in the API itself instead of at the UI. Even if it's just 
a queue to make sure they don't execute concurrently (that's not enough 
IMO, but for example), the server is where that sort of orchestration 
should take place and be able to understand the differences between the 
configured state in Tuskar and the actual deployed state.
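As a minimal sketch of the kind of serialization being argued for here -- the
API layer funnels long-running operations through a single lock so they cannot
overlap (illustrative only, not Tuskar code):

    # illustrative sketch only -- serialize long-running deployment operations
    import threading

    class DeploymentTasks(object):
        def __init__(self):
            self._lock = threading.Lock()

        def run(self, name, operation):
            # refuse (or queue) a new operation while another is in flight
            if not self._lock.acquire(False):
                raise RuntimeError('operation already in progress: refusing %s' % name)
            try:
                return operation()
            finally:
                self._lock.release()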


I'm off topic a bit though. Rather than talk about how we pull it off, 
I'd like to come to an agreement on what the actual policy should be. My 
concerns focus around the time to create the image and get it into 
Glance where it's available to actually be deployed. When do we bite 
that time off and how do we let the user know it is or isn't ready yet?


- Editing a node is going to run us into versioning complications. So 
far, all we've entertained are ways to map a node back to the resource 
category it was created under. If the configuration of that category 
changes, we have no way of indicating that the node is out of sync.


We could store versioned resource categories in the Tuskar DB and have 
the version information also find its way to the nodes (note: the idea 
is to use the metadata field on a Heat resource to store the res-cat 
information, so including version is possible). I'm less concerned with 
eventual reaping of old versions here since it's just DB data, though we 
still hit the question of when to delete old images.


- For the comment on a generic image with service configuration, the 
first thing that came to mind was the thread on creating images from 
packages [1]. It's not the exact same problem, but see Clint Byrum's 
comments in there about drift. My gut feeling is that having specific 
images for each res-cat will be easier to manage than trying to edit 
what services are running on 

Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Russell Bryant
On 01/12/2014 09:56 PM, Michael Krotscheck wrote:
 If all you're looking for is a javascript-based in-browser templating
 system, then handlebars is a fine choice. I'm not certain on how complex
 status.html/status.js is, however if you expect it to grow to something
 more like an application then perhaps looking at angular as a full
 application framework might help you avoid both this growing pain and
 future ones (alternatives: Ember, backbone, etc). 

For reference, you can find status.html / status.js here:

http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Bhuvan Arumugam
On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.orgwrote:

 On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


 I was supposed to fix oslo.processutils.ssh with this class, but it may
 be fixed without it, not big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


 Fabric is too much for just command execution on remote server. Spur
 seems like
 good choice for this.


 I'd go with Fabric. It supports several remote-server operations, file
  upload/download among them. We could just import the methods we are
  interested in. It in turn uses paramiko, supporting most ssh client options.
  If we begin using fabric for file upload/download, it'll open the door for more
  remote-server operations. Bringing in fabric as part of oslo would be cool.


 Where are we doing those sorts of operations?


Currently, we don't upload/download files to a remote server through ssh/scp.
We do execute commands, and pipe multiple commands, in a few tempest tests when
ssh is enabled. With oslo/fabric, we may develop a common ground for dealing with
remote servers, be it executing commands or dealing with files.
 --
Regards,
Bhuvan Arumugam
www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Detect changes in object model

2014-01-13 Thread Murray, Paul (HP Cloud Services)
Yes, I agree. 

Actually, I am trying to infer what the programming model for this is as we 
go along. 

Personally I would have been happy with only marking the fields when they are 
set. Then, if you want to change a list somehow you would get it and then set 
it again, e.g.: 
  mylist = object.alist 
  # ... do something to mylist ...
  object.alist = mylist
  object.save()

Having said that, it can be convenient to use the data structures in place. In 
which case we need all these means to track the changes and they should go in 
the base classes so they are used consistently.

So in short, I am happy with your dirty children :)

Paul.

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: 13 January 2014 15:26
To: Murray, Paul (HP Cloud Services); Wang, Shane; OpenStack Development 
Mailing List (not for usage questions)
Cc: Lee, Alexis; Tan, Lin
Subject: Re: [Nova] Detect changes in object model

 ObjectListBase has a field called objects that is typed 
 fields.ListOfObjectsField('NovaObject'). I can see methods for count 
 and index, and I guess you are talking about adding a method for are 
 any of your contents changed here. I don't see other list operations 
 (like append, insert, remove, pop) that modify the list. If these were 
 included they would have to mark the list as changed so it is picked 
 up when looking for changes.
 
 Do you see these belonging here or would you expect those to go in a 
 sub-class if they were wanted?

Well, I've been trying to avoid implying the notion that a list of things 
represents the content of the database. Meaning, I don't think it makes sense 
for someone to get a list of Foo objects, add another Foo to the list and then 
call save() on the list. I think that ends up with the assumption that the list 
matches the contents of the database, and if I add or remove things from the 
list, I can save() the contents to the database atomically. That definitely 
isn't something we can or would want to support.

That said, if we make the parent object consider the child to be dirty if any 
of its contents are dirty or the list itself is dirty (i.e. the list of objects 
has changed) that should give us the desired behavior for change tracking, 
right?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-01-13 Thread Sahid Ferdjaoui
Hello all,

It looks like 100% of the pep8 gate jobs for nova are failing because of a 
reported bug; we probably need to mark it as Critical.

   https://bugs.launchpad.net/nova/+bug/1268614

Ivan Melnikov has pushed a patchset waiting for review:
   https://review.openstack.org/#/c/66346/

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 01/13/2014

2014-01-13 Thread Renat Akhmerov
Hi,

Thanks for joining us today in IRC at #openstack-meeting. Here are the links to 
minutes and log of the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-01-13-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-01-13-16.00.log.html

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Some updates to nova-network code

2014-01-13 Thread Russell Bryant
Greetings,

There has been much discussion about the future of nova-network.  While
a freeze is still in effect, I think we need to relax some of the
restrictions we have put in place as a part of this freeze.

Previously, we have been ignoring nova-network as we made architectural
enhancements to the rest of Nova.  This is starting to cause a bit too
much pain.  nova-network performance is the biggest Nova performance
problem related to recent gate instability.  More details here:

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024052.html

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024167.html

One of the architectural improvements we've held off on for nova-network
was updating it to use nova-objects and to use the nova-conductor
service.  For the performance reasons discussed in the linked thread
above, I think we need to move forward with this in nova-network.  I
have filed the following blueprint for this work:

https://blueprints.launchpad.net/nova/+spec/nova-network-objects

This is not an unfreeze of nova-network.  This is just a recognition
that we need to catch it up with the rest of Nova.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack] nova compute fails to load the virt driver

2014-01-13 Thread Evgeny Fedoruk
Hi All,

I just cloned master devstack  and stacked.
Nova-compute fails while trying to load libvirt driver

Log:

Loading compute driver 'libvirt.LibvirtDriver'
2014-01-13 08:20:01.228 ERROR nova.virt.driver [-] Unable to load the virtualization driver
2014-01-13 08:20:01.228 TRACE nova.virt.driver Traceback (most recent call last):
2014-01-13 08:20:01.228 TRACE nova.virt.driver   File /opt/stack/nova/nova/virt/driver.py, line 1179, in load_compute_driver
2014-01-13 08:20:01.228 TRACE nova.virt.driver     virtapi)
2014-01-13 08:20:01.228 TRACE nova.virt.driver   File /opt/stack/nova/nova/openstack/common/importutils.py, line 52, in import_object_ns
2014-01-13 08:20:01.228 TRACE nova.virt.driver     return import_class(import_str)(*args, **kwargs)
2014-01-13 08:20:01.228 TRACE nova.virt.driver   File /opt/stack/nova/nova/openstack/common/importutils.py, line 33, in import_class
2014-01-13 08:20:01.228 TRACE nova.virt.driver     traceback.format_exception(*sys.exc_info(
2014-01-13 08:20:01.228 TRACE nova.virt.driver ImportError: Class LibvirtDriver cannot be found (['Traceback (most recent call last):\n', '  File /opt/stack/nova/nova/openstack/common/importutils.py, line 29, in import_class\n    return getattr(sys.modules[mod_str], class_str)\n', AttributeError: 'module' object has no attribute 'LibvirtDriver'\n])
2014-01-13 08:20:01.228 TRACE nova.virt.driver
n-cpu failed to start

Does anybody know how to solve the issue?

Thanks!
Evgeny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-01-13 Thread Doug Hellmann
[resurrecting an old thread]


On Wed, Nov 27, 2013 at 6:26 AM, Flavio Percoco fla...@redhat.com wrote:

 On 27/11/13 10:59 +, Mark McLoughlin wrote:

 On Wed, 2013-11-27 at 11:50 +0100, Flavio Percoco wrote:

 On 26/11/13 22:54 +, Mark McLoughlin wrote:
 On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
  On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com
 wrote:
  1) Store the commit sha from which the module was copied from.
  Every project using oslo, currently keeps the list of modules it
  is using in `openstack-modules.conf` in a `module` parameter. We
  could store, along with the module name, the sha of the commit it
  was last synced from:
  
  module=log,commit
  
  or
  
  module=log
  log=commit
  
 
  The second form will be easier to manage. Humans edit the module
 field and
  the script will edit the others.
 
 How about adding it as a comment at the end of the python files
 themselves and leaving openstack-common.conf for human editing?

 I think having the commit sha will give us a starting point from which
 we could start updating that module.


 Sure, my only point was about where the commit sha comes from - i.e.
 whether it's from a comment at the end of the python module itself or in
 openstack-common.conf


 And, indeed you said 'at the end of the python files'. Don't ask me
 how the heck I misread that.

 The benefit I see from having them in the openstack-common.conf is
 that we can register a `StrOpt` for each object dynamically and get
 the sha using oslo.config. If we put it as a comment at the end of the
 python file, we'll have to read it and 'parse' it, I guess.



  It will mostly help with
 getting a diff for that module and the short commit messages where it
 was modified.

 Here's a pseudo-buggy-algorithm for the update process:

 (1) Get current sha for $module
 (2) Get list of new commits for $module
 (3) for each commit of $module:
 (3.1) for each modified_module in $commit
 (3.1.1) Update those modules up to $commit
 (1)(modified_module)
 (3.2) Copy the new file
 (3.3) Update openstack-common with the latest sha

 This trusts the granularity and isolation of the patches proposed in
 oslo-incubator. However, in cases like 'remove vim mode lines' it'll
 fail assuming that updating every module is necessary - which is true
 from a git standpoint.
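Step (2) of the pseudo-algorithm above can be done with plain git; a hedged
sketch (the repo layout and module paths are assumptions):

    # illustrative sketch: list commits touching one incubator module since a sha
    import subprocess

    def commits_for_module(repo_dir, last_sha, module_path):
        out = subprocess.check_output(
            ['git', 'log', '--format=%H', '%s..HEAD' % last_sha, '--', module_path],
            cwd=repo_dir)
        return out.decode('utf-8').split()

    # e.g. commits_for_module('oslo-incubator', 'abc123', 'openstack/common/log.py')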


 This is another variant of the kind of inter-module dependency smarts
 that update.py already has ... I'd be inclined to just omit those smarts
 and just require the caller to explicitly list the modules they want to
 include.

 Maybe update.py could include some reporting to help with that choice
 like module foo depends on modules bar and blaa, maybe you want to
 include them too and commit XXX modified module foo, but also module
 bar and blaa, maybe you want to include them too.


 But, if we get to the point of suggesting that the user update module
 foo because it was modified in commit XXX, we'd have everything needed
 to make it recursive and update those modules as well.

 I agree with you on making it explicit, though. What about making it
 interactive then? update.py could ask users if they want to update
 module foo because it was modified in commit XXX and do it right away,
 which is not very different from updating module foo, print a report
 and let the user choose afterwards.

 (/me feels like Gollum now)

 I prefer the interactive way though, at least it doesn't require the
 user to run update several times for each module. We could also add a
 `--no-stop` flag that does exactly what you suggested.


I spent some time trying to think through how we could improve the update
script for [1], and I'm stumped on how to figure out *accurately* what
state the project repositories are in today.

We can't just compute the hash of the modules in the project receiving
copies, and then look for them in the oslo-incubator repo, because we
modify the files as we copy them out (to update the import statements and
replace oslo with the receiving project name in some places like config
option defaults).

We could undo those changes before computing the hash, but the problem is
further complicated because syncs are not being done of all modules
together. The common code in a project doesn't move forward in step with
the oslo-incubator repository as a whole. For example, sometimes only the
openstack/common/log.py module is copied and not all of openstack/common.
So log.py might be newer than a lot of the rest of the oslo code. The
problem is even worse for something like rpc, where it's possible that
modules within the rpc package might not all be updated together.
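A rough sketch of the undo-the-rewrites-then-hash idea; the normalisation rule
below is only a guess at the kind of rewrite that would need reversing:

    # illustrative sketch: normalize the import rewrites before hashing a copy
    import hashlib
    import re

    def normalized_hash(path, project):
        with open(path) as f:
            text = f.read()
        # undo the per-project rewrite, e.g. 'nova.openstack.common' back to
        # 'openstack.common' (an assumption about the rewrite pattern)
        text = re.sub(r'\b%s\.openstack\.common\b' % project, 'openstack.common', text)
        return hashlib.sha1(text.encode('utf-8')).hexdigest()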

We could probably spend a lot of effort building a tool to tell us exactly
what the state of each common file is in each project, to figure out
what needs to be synced. I would much rather spend that effort on turning
the 

Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Doug Hellmann
On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam bhu...@apache.org wrote:


 On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.orgwrote:

 On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


 I was supposed to fix oslo.processutils.ssh with this class, but it may
 be fixed without it, not big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


 Fabric is too much for just command execution on remote server. Spur
 seems like
 good choice for this.


 I'd go with Fabric. It support several remote server operations, file
 upload/download among them. We could just import the methods we are
 interested. It in turn use paramiko supporting most of ssh client options.
 If we begin using fabric for file upload/download, it'll open door for more
 remote server operations. Bringing in fabric as part of oslo will be cool.


 Where are we doing those sorts of operations?


 Currently, we don't upload/download files to remote server through
 ssh/scp. We do execute commands, pipe multiple commands in few tempest when
 ssh is enabled. With oslo/fabric, we may develop a common ground to deal
 with remote servers, be it executing commands or dealing with files.


Are we using ssh to run commands anywhere else in OpenStack? Maybe in one
of the orchestration layers like heat or trove?

Doug




  --
 Regards,
 Bhuvan Arumugam
 www.livecipher.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Greg Hill
Trove doesn't use ssh afaik.  It has an agent that runs in the guest that is 
communicated with via our normal RPC messaging options.

Greg

On Jan 13, 2014, at 11:10 AM, Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com wrote:




On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam 
bhu...@apache.orgmailto:bhu...@apache.org wrote:

On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann 
doug.hellm...@dreamhost.commailto:doug.hellm...@dreamhost.com wrote:



On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam 
bhu...@apache.orgmailto:bhu...@apache.org wrote:
On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick 
sskripn...@mirantis.commailto:sskripn...@mirantis.com wrote:

I appreciate that we want to fix the ssh client. I'm not certain that writing 
our own is the best answer.

I was supposed to fix oslo.processutils.ssh with this class, but it may
be fixed without it, not big deal.




In his comments on your pull request, the paramiko author recommended looking 
at Fabric. I know that Fabric has a long history in production. Does it 
provide the required features?


Fabric is too much for just command execution on remote server. Spur seems like
good choice for this.

I'd go with Fabric. It support several remote server operations, file 
upload/download among them. We could just import the methods we are interested. 
It in turn use paramiko supporting most of ssh client options. If we begin 
using fabric for file upload/download, it'll open door for more remote server 
operations. Bringing in fabric as part of oslo will be cool.

Where are we doing those sorts of operations?

Currently, we don't upload/download files to remote server through ssh/scp. We 
do execute commands, pipe multiple commands in few tempest when ssh is enabled. 
With oslo/fabric, we may develop a common ground to deal with remote servers, 
be it executing commands or dealing with files.

Are we using ssh to run commands anywhere else in OpenStack? Maybe in one of 
the orchestration layers like heat or trove?

Doug



 --
Regards,
Bhuvan Arumugam
www.livecipher.comhttp://www.livecipher.com/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-13 Thread Luke Gorrie
Howdy Ian!

Thanks for the background on the Passthrough work.

I reckon the best choice for us now is to use the traditional Neutron
APIs instead of Passthrough. I think they cover all of our use cases
as it stands now (many thanks to you for your earlier help with
working this out :)). The idea is to put the SR-IOV hardware to work
behind-the-scenes of a normal software switch.

We will definitely check out the Passthrough when it's ready and see
if we should also support that somehow.


On 11 January 2014 01:04, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Hey Luke,

 If you look at the passthrough proposals, the overview is that part of the
 passthrough work is to ensure there's a PCI function available to allocate
 to the VM, and part is to pass that function on to the Neutron plugin via
 conventional means.  There's nothing that actually mandates that you connect
 the SRIOV port using the passthrough mechanism, and we've been working on
 the assumption that we would be supporting the 'macvtap' method of
 attachment that Mellanox came up with some time ago.

 I think what we'll probably have is a set of standard attachments (including
 passthrough) added to the Nova drivers - you'll see in the virtualisation
 drivers that Neutron already gets to tell Nova how to attach the port and
 can pass auxiliary information - and we will pass the PCI path and,
 optionally, other parameters to Neutron in the port-update that precedes VIF
 plugging.  That would leave you with the option of passing the path back and
 requesting an actual passthrough or coming up with some other mechanism of
 your own choosing (which may not involve changing Nova at all, if you're
 using your standard virtual plugging mechanism).

 --
 Ian.


 On 10 January 2014 19:26, Luke Gorrie l...@snabb.co wrote:

 Hi Mike,

 On 10 January 2014 17:35, Michael Bright mjbrigh...@gmail.com wrote:

  Very pleased to see this initiative in the OpenStack/NFV space.

 Glad to hear it!

  A dumb question - how do you see this related to the ongoing
   [openstack-dev] [nova] [neutron] PCI pass-through network support
 
  discussion on this list?
 
  Do you see that work as one component within your proposed architecture
  for
  example or an alternative implementation?

 Good question. I'd like to answer separately about the underlying
 technology on the one hand and the OpenStack API on the other.

 The underlying technology of SR-IOV and IOMMU hardware capabilities
 are the same in PCI pass-through and Snabb NFV. The difference is that
 we introduce a very thin layer of software over the top that preserves
 the basic zero-copy operation while adding a Virtio-net abstraction
 towards the VM, packet filtering, tunneling, and policing (to start
 off with). The design goal is to add quite a bit of functionality with
 only a modest processing cost.

 The OpenStack API question is more open. How should we best map our
 functionality onto Neutron APIs? This is something we need to thrash
 out together with the community. Our current best guess - which surely
 needs much revision, and is not based on the PCI pass-through
 blueprint - is here:

 https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv#neutron-configuration

 Cheers,
 -Luke

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Subnet mode - API extension or change to core API?

2014-01-13 Thread Collins, Sean
Hi,

I posted a message to the mailing list[1] when I first began work on the
subnet mode keyword, asking if anyone had a suggestion about whether it
should be an API extension or could be a change to the core API.

 I don't know if adding the dhcp_mode attribute to Subnets should be
 considered an API extension (and the code should be converted to an API
 extension) or if we're simply specifying behavior that was originally 
 undefined.

In the months since, we have iterated on the commit, and have continued
working on IPv6 functionality in Neutron.

Nachi recently -1'd the review[2], saying that it needs to be an API
extension.

I disagree that it should be an API extension, since I have added
behavior that sets the subnet_mode keyword to a default when the attribute
is not specified, for backwards compatibility. Any plugin that inherits
from the NeutronDbPluginV2 class will have backwards compatibility.
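As a concrete illustration of the backwards-compatible default described above
(a sketch with made-up names, not the actual patch):

    # illustrative sketch only -- default the new attribute when it is absent
    DEFAULT_SUBNET_MODE = 'default'

    def create_subnet(subnet_body):
        # older clients that never send subnet_mode keep today's behaviour
        subnet_body.setdefault('subnet_mode', DEFAULT_SUBNET_MODE)
        return subnet_body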

Suggestions?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2013-October/017087.html
[2]: https://review.openstack.org/#/c/52983/
-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Doug Hellmann
On Mon, Jan 13, 2014 at 12:21 PM, Greg Hill greg.h...@rackspace.com wrote:

  Trove doesn't use ssh afaik.  It has an agent that runs in the guest that
 is communicated with via our normal RPC messaging options.


Good.

Doug




  Greg

  On Jan 13, 2014, at 11:10 AM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam bhu...@apache.orgwrote:


  On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




  On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.orgwrote:

  On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


  I appreciate that we want to fix the ssh client. I'm not certain that
 writing our own is the best answer.


  I was supposed to fix oslo.processutils.ssh with this class, but it
 may
 be fixed without it, not big deal.




 In his comments on your pull request, the paramiko author recommended
 looking at Fabric. I know that Fabric has a long history in production.
 Does it provide the required features?


  Fabric is too much for just command execution on a remote server. Spur
 seems like a good choice for this.


  I'd go with Fabric. It supports several remote server operations, file
 upload/download among them. We could just import the methods we are
 interested in. It in turn uses paramiko, supporting most ssh client options.
 If we begin using Fabric for file upload/download, it'll open the door for
 more remote server operations. Bringing in Fabric as part of oslo will be cool.


  Where are we doing those sorts of operations?


  Currently, we don't upload/download files to a remote server through
 ssh/scp. We do execute commands, and pipe multiple commands, in a few tempest
 tests when ssh is enabled. With oslo/fabric, we may develop a common ground to
 deal with remote servers, be it executing commands or dealing with files.


  Are we using ssh to run commands anywhere else in OpenStack? Maybe in
 one of the orchestration layers like heat or trove?

  Doug




  --
 Regards,
 Bhuvan Arumugam
 www.livecipher.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Building a new open source NFV system for Neutron

2014-01-13 Thread Ian Wells
Understood.  You should be able to make that work but the issue is
allocating your VM to some machine that has spare hardware - which is
really what the patches are about: Nova manages allocations and Neutron
manages using the hardware when appropriate.  From past experience with the
patch that was around back in the Essex timeframe, you can get this to work
temporarily by rejecting the schedule in nova-compute when the machine is
short of hardware and using a high schedule retry count, which will get you
somewhere but is obviously a bit sucky in the long run.
-- 
Ian.


On 13 January 2014 18:44, Luke Gorrie l...@snabb.co wrote:

 Howdy Ian!

 Thanks for the background on the Passthrough work.

 I reckon the best choice for us now is to use the traditional Neutron
 APIs instead of Passthrough. I think they cover all of our use cases
 as it stands now (many thanks to you for your earlier help with
 working this out :)). The idea is to put the SR-IOV hardware to work
 behind-the-scenes of a normal software switch.

 We will definitely check out the Passthrough when it's ready and see
 if we should also support that somehow.


 On 11 January 2014 01:04, Ian Wells ijw.ubu...@cack.org.uk wrote:
  Hey Luke,
 
  If you look at the passthrough proposals, the overview is that part of
 the
  passthrough work is to ensure there's an PCI function available to
 allocate
  to the VM, and part is to pass that function on to the Neutron plugin via
  conventional means.  There's nothing that actually mandates that you
 connect
  the SRIOV port using the passthrough mechanism, and we've been working on
  the assumption that we would be supporting the 'macvtap' method of
  attachment that Mellanox came up with some time ago.
 
  I think what we'll probably have is a set of standard attachments
 (including
  passthrough) added to the Nova drivers - you'll see in the virtualisation
  drivers that Neutron already gets to tell Nova how to attach the port and
  can pass auxiliary information - and we will pass the PCI path and,
  optionally, other parameters to Neutron in the port-update that precedes
 VIF
  plugging.  That would leave you with the option of passing the path back
 and
  requesting an actual passthrough or coming up with some other mechanism
 of
  your own choosing (which may not involve changing Nova at all, if you're
  using your standard virtual plugging mechanism).
 
  --
  Ian.
 
 
  On 10 January 2014 19:26, Luke Gorrie l...@snabb.co wrote:
 
  Hi Mike,
 
  On 10 January 2014 17:35, Michael Bright mjbrigh...@gmail.com wrote:
 
   Very pleased to see this initiative in the OpenStack/NFV space.
 
  Glad to hear it!
 
   A dumb question - how do you see this related to the ongoing
[openstack-dev] [nova] [neutron] PCI pass-through network
 support
  
   discussion on this list?
  
   Do you see that work as one component within your proposed
 architecture
   for
   example or an alternative implementation?
 
  Good question. I'd like to answer separately about the underlying
  technology on the one hand and the OpenStack API on the other.
 
  The underlying technology of SR-IOV and IOMMU hardware capabilities
  are the same in PCI pass-through and Snabb NFV. The difference is that
  we introduce a very thin layer of software over the top that preserves
  the basic zero-copy operation while adding a Virtio-net abstraction
  towards the VM, packet filtering, tunneling, and policing (to start
  off with). The design goal is to add quite a bit of functionality with
  only a modest processing cost.
 
  The OpenStack API question is more open. How should we best map our
  functionality onto Neutron APIs? This is something we need to thrash
  out together with the community. Our current best guess - which surely
  needs much revision, and is not based on the PCI pass-through
  blueprint - is here:
 
 
 https://github.com/SnabbCo/snabbswitch/tree/snabbnfv-readme/src/designs/nfv#neutron-configuration
 
  Cheers,
  -Luke
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Ben Nemec
 

On 2014-01-13 11:21, Greg Hill wrote: 

 Trove doesn't use ssh afaik. It has an agent that runs in the guest that is 
 communicated with via our normal RPC messaging options. 
 
 Greg

My understanding is that Heat is similar. It uses cloud-init to do its
guest configuration. I'm pretty sure they explicitly decided against
using ssh at some point, so that's unlikely to change either. 

-Ben 

 On Jan 13, 2014, at 11:10 AM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote: 
 
 On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam bhu...@apache.org wrote:
 
 On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote:
 
 On Mon, Jan 13, 2014 at 7:32 AM, Bhuvan Arumugam bhu...@apache.org wrote:
 
 On Fri, Jan 10, 2014 at 11:24 PM, Sergey Skripnick sskripn...@mirantis.com 
 wrote: 
 
 I appreciate that we want to fix the ssh client. I'm not certain that writing 
 our own is the best answer. I was supposed to fix oslo.processutils.ssh with 
 this class, but it may
 be fixed without it, not big deal. 
 
 In his comments on your pull request, the paramiko author recommended looking 
 at Fabric. I know that Fabric has a long history in production. Does it 
 provide the required features?
 
 Fabric is too much for just command execution on remote server. Spur seems 
 like
 good choice for this.

I'd go with Fabric. It support several remote server operations, file
upload/download among them. We could just import the methods we are
interested. It in turn use paramiko supporting most of ssh client
options. If we begin using fabric for file upload/download, it'll open
door for more remote server operations. Bringing in fabric as part of
oslo will be cool. 

Where are we doing those sorts of operations? 

Currently, we don't upload/download files to remote server through
ssh/scp. We do execute commands, pipe multiple commands in few tempest
when ssh is enabled. With oslo/fabric, we may develop a common ground to
deal with remote servers, be it executing commands or dealing with
files. 

Are we using ssh to run commands anywhere else in OpenStack? Maybe in
one of the orchestration layers like heat or trove? 

Doug 

 -- 
 
 Regards,
 Bhuvan Arumugam 
 www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Jiang, Yunhong
Hi, Robert, the scheduler keeps count based on pci_stats instead of the pci flavor.

As stated by Ian at 
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13455.html 
already, the flavor will only use the tags used by pci_stats.

Thanks
--jyh

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 13, 2014 8:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

As I have responded in the other email, if I understand PCI flavor 
correctly, then the issue that we need to deal with is the overlapping issue. The 
simplest case of this overlapping is that you can define a flavor F1 as 
[vendor_id='v', product_id='p'], and a flavor F2 as [vendor_id='v'].  Let's 
assume that only the admin can define the flavors. It's not hard to see that a 
device can belong to two different flavors at the same time. This 
introduces an issue in the scheduler. Suppose the scheduler (counts or stats 
based) maintains counts based on flavors (or the keys corresponding to the 
flavors). To request a device with the flavor F1, counts in F2 need to be 
subtracted by one as well. There may be several ways to achieve that. But 
regardless, it introduces tremendous overhead in terms of system processing and 
administrative costs.

What are the use cases for that? How practical are those use cases?

thanks,
Robert

On 1/10/14 9:34 PM, Ian Wells 
ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk wrote:



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason that it's not easy to find a flexible/distinguishable 
 change to pci_whitelist is because it combines two things. So a stupid/naive 
 solution in my head is: change it to a VERY generic name, 
 'pci_devices_information',

 and change the schema to an array of {'devices_property'=regex exp, 'group_name' 
 = 'g1'} dictionaries, where the device_property expression can be 'address ==xxx, 
 vendor_id == xxx' (i.e. similar to the current white list),  and we can squeeze 
 more into the pci_devices_information in future, like 'network_information' 
 = xxx or the Neutron-specific information you required in the previous mail.


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ match = { class = Acme inc. discombobulator }, info = { group = we like 
teh groups, volume = 11 } }
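
Concretely, a configured list of such entries might look something like the
sketch below (the option name, device IDs and info values are made up for
illustration; nothing here is settled):

pci_devices_information = [
    # matching expression, as in today's whitelist, plus operator-supplied
    # extra information carried with every matched device
    {'match': {'vendor_id': '8086', 'product_id': '10fb'},
     'info': {'group_name': 'g1', 'network_id': 'physnet1'}},
    {'match': {'address': '0000:06:00.*'},
     'info': {'group_name': 'g2'}},
]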


 All keys other than 'device_property' become extra information, i.e. 
 software-defined properties. This extra information will be carried with the 
 PCI devices. Some implementation details: A) we can limit the acceptable 
 keys, like we only support 'group_name', 'network_id', or we can accept any 
 keys other than the reserved (vendor_id, device_id etc) ones.


Not sure we have a good list of reserved keys at the moment, and with two dicts 
it isn't really necessary, I guess.  I would say that we have one match parser 
which looks something like this:

# does this PCI device match the expression given?
def match(expression, pci_details, extra_specs):
    for (k, v) in expression.items():
        if k.startswith('e.'):
            # 'e.'-prefixed keys refer to the software-defined extra info
            mv = extra_specs.get(k[2:])
        else:
            mv = pci_details.get(k)
        # simple equality here; a regex match would also fit the proposal
        if mv is None or mv != v:
            return False
    return True

Usable in this matching (where 'e.' just won't work) and also for flavor 
assignment (where e. will indeed match the extra values).
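
For concreteness, a call for each of the two cases might look like this
(device properties and extra info values are made up for the example):

pci_details = {'vendor_id': '8086', 'product_id': '10fb',
               'address': '0000:06:00.1'}
extra_specs = {'group': 'we like teh groups', 'volume': '11'}

# whitelist-style match: plain PCI properties only
match({'vendor_id': '8086'}, pci_details, extra_specs)            # True

# flavor-style match: may also reference extra info via the 'e.' prefix
match({'vendor_id': '8086', 'e.group': 'we like teh groups'},
      pci_details, extra_specs)                                   # True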

 B) if a device matches 'device_property' in several entries, raise an exception, 
 or use the first one.

Use the first one, I think.  It's easier, and potentially more useful.

 [yjiang5_1] Another thing that needs to be discussed is, as you pointed out, we would 
 need to add a config param on the control host to decide which flags to group 
 on when doing the stats.  I agree with the design, but some details need to be 
 decided.

This is a patch that can come at any point after we do the above stuff (which 
we need for Neutron), clearly.
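
A rough sketch of what option (a) below could look like, purely as
illustration (the option name and default keys are assumptions, not an agreed
design):

from oslo.config import cfg

pci_stats_opts = [
    cfg.ListOpt('pci_stats_pool_keys',
                default=['vendor_id', 'product_id', 'extra_info'],
                help='PCI device properties to group on when building '
                     'pci_stats pools'),
]

cfg.CONF.register_opts(pci_stats_opts)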

 Where should it be defined? If we a) define it on both the control node and compute 
 node, then it should be statically defined (just change pool_keys in 
 /opt/stack/nova/nova/pci/pci_stats.py to a configuration parameter). Or b) 
 define it only on the control node, then I assume the control node should be the 
 scheduler node, and the scheduler manager needs to save such information, present 
 an API to fetch such information and 

Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-13 Thread Nachi Ueno
Hi Clint

2014/1/10 Clint Byrum cl...@fewbar.com:
 Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
 Hi Flavio, Clint

 I agree with you guys.
 sorry, may be, I wasn't clear. My opinion is to remove every
 configuration in the node,
 and every configuration should be done by API from central resource
 manager. (nova-api or neturon server etc).

 This is how to add new hosts, in cloudstack, vcenter, and openstack.

 Cloudstack: Go to web UI, add Host/ID/PW.
 http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html

 vCenter: Go to vsphere client, Host/ID/PW.
 https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html

 Openstack,
 - Manual
- setup mysql connection config, rabbitmq/qpid connection config,
 keystone config,, neturon config, 
 http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html

 We have some deployment system including chef / puppet / packstack, TripleO
 - Chef/Puppet
Setup chef node
Add node/ apply role
 - Packstack
-  Generate answer file
   
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
-  packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
 - TripleO
- UnderCloud
nova baremetal node add
- OverCloud
modify heat template

  For residents of this mailing list, Chef/Puppet or a third party tool is
  easy to use.
  However, I believe they are magical tools for many operators.
  Furthermore, these deployment systems tend to take time to support the
  newest release,
  so for most users an OpenStack release doesn't mean it is immediately usable for them.

 IMO, current way to manage configuration is the cause of this issue.
 If we manage everything via API, we can manage cluster by horizon.
 Then user can do go to horizon, just add host.

 It may take time to migrate config to API, so one easy step is to convert
 existing config for API resources. This is the purpose of this proposal.


 Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
 do not lag the release. We run CD and will be in the gate real soon
 now so that TripleO should be able to fully deploy Icehouse on Icehouse
 release day.

Yeah, I'm a big fan of TripleO and Tuskar.
However, it may be difficult to keep TripleO/Tuskar up to date
with the newest releases.

So let's say Nova and Neutron add a new function in the 3rd milestone (I3
for Icehouse);
there is no way to support it in TripleO/Tuskar.
This is natural, because TripleO/Tuskar is a 3rd party tool for Nova or
Neutron (same as Chef/Puppet).
IMO, the Tuskar API and existing projects (Nova, Neutron) should be
integrated at the design level.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Subnet mode - API extension or change to core API?

2014-01-13 Thread Ian Wells
I would say that since v4 dhcp_mode is core, the DHCPv6/RA setting should
similarly be core.

To fill others in, we've had discussions on the rest of the patch and
Shixiong is working on it now, the current plan is:

New subnet attribute ipv6_address_auto_config (not catchy, but because of
the way that ipv6 works it's not simply DHCPv6) with the four values:

off - no packets sent to assign addresses for this subnet, do it yourself
slaac - RA packet sent telling the machine to choose its own address from
within the subnet; it will choose an address based on its own MAC; because
we're talking servers here, this will explicitly *not* work with ipv6
privacy extensions, because - as with the ipv4 implementation - we need
one, fixed, *known* address that's planned in advance to make firewalling
etc. work
dhcpv6-stateless - RA packet allocates the address as before, plus DHCPv6 running
to provide additional information if requested
dhcpv6-stateful - DHCPv6 will assign the address set on the port rather
than leaving the machine to work it out from the MAC, along with other
information as required.  (For the other settings, the address on the port
will be hard coded to the MAC-based address; for this one it may well be
hardcoded initially but will eventually be modifiable as for the v4
address.)

Port firewalling (i.e. security groups, antispoof) will consume the
information on the port and subnet as usual.

Obviously you can, as before, use static address config in your VM image or
config-drive setup, independent of the above options; this just determines
what network functions will be set up and running.
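
For illustration, creating a subnet with the proposed attribute via
python-neutronclient might look roughly like this (credentials and the network
UUID are placeholders, and the attribute name/values are only the current
proposal):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

body = {'subnet': {'network_id': 'NETWORK_UUID',
                   'ip_version': 6,
                   'cidr': '2001:db8::/64',
                   # one of 'off', 'slaac', 'dhcpv6-stateless',
                   # 'dhcpv6-stateful'
                   'ipv6_address_auto_config': 'slaac'}}
neutron.create_subnet(body)
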
-- 
Ian.


On 13 January 2014 18:24, Collins, Sean sean_colli...@cable.comcast.comwrote:

 Hi,

 I posted a message to the mailing list[1] when I first began work on the
 subnet mode keyword, asking if anyone had a suggestion about if it
 should be an API extension or can be a change to the core API.

  I don't know if adding the dhcp_mode attribute to Subnets should be
  considered an API extension (and the code should be converted to an API
  extension) or if we're simply specifying behavior that was originally
 undefined.

 In the months since, we have iterated on the commit, and have continued
 working on IPv6 functionality in Neutron.

 Nachi recently -1'd the review[2], saying that it needs to be an API
 extension.

 I disagree that it should be an API extension, since I have added
 behavior that sets the subnet_mode keyword to default with the attribute
 is not specified, for backwards compatibility. Any plugin that inherits
 from the NeutronDbPluginV2 class will have backwards compatibility.

 Suggestions?

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/017087.html
 [2]: https://review.openstack.org/#/c/52983/
 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-13 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2014-01-13 09:10:27 -0800:
 On Mon, Jan 13, 2014 at 11:34 AM, Bhuvan Arumugam bhu...@apache.org wrote:
 
 
  On Mon, Jan 13, 2014 at 7:02 AM, Doug Hellmann 
  doug.hellm...@dreamhost.com wrote:
  Where are we doing those sorts of operations?
 
 
  Currently, we don't upload/download files to remote server through
  ssh/scp. We do execute commands, pipe multiple commands in few tempest when
  ssh is enabled. With oslo/fabric, we may develop a common ground to deal
  with remote servers, be it executing commands or dealing with files.
 
 
 Are we using ssh to run commands anywhere else in OpenStack? Maybe in one
 of the orchestration layers like heat or trove?

Heat does not use SSH, though I believe it did in its early days.

SSH belongs to the admins. I don't think OpenStack should be using it.

I see the usefulness in the tempest case, which tests that the key given
is on the box, and may need to verify other things inside the instance.

However, things that won't work with a simple "ssh to that box and report
success/fail" could also be done just by having an image which calls
back to tempest on boot.
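
As a very rough sketch of the tempest-style check being described (host, user
and key path are placeholders; tempest's own ssh helper is built on paramiko
in much the same way):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# connect with the keypair that was supplied at boot
ssh.connect('10.0.0.5', username='cirros',
            key_filename='/tmp/test_keypair.pem')

# run something trivial inside the instance to prove the key works
stdin, stdout, stderr = ssh.exec_command('hostname')
print(stdout.read())
ssh.close()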

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-13 Thread Clint Byrum
Excerpts from Nachi Ueno's message of 2014-01-13 10:35:07 -0800:
 Hi Clint
 
 2014/1/10 Clint Byrum cl...@fewbar.com:
  Excerpts from Nachi Ueno's message of 2014-01-10 13:42:30 -0700:
  Hi Flavio, Clint
 
  I agree with you guys.
  sorry, may be, I wasn't clear. My opinion is to remove every
  configuration in the node,
  and every configuration should be done by API from central resource
  manager. (nova-api or neturon server etc).
 
  This is how to add new hosts, in cloudstack, vcenter, and openstack.
 
  Cloudstack: Go to web UI, add Host/ID/PW.
  http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/host-add.html
 
  vCenter: Go to vsphere client, Host/ID/PW.
  https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-A367585C-EB0E-4CEB-B147-817C1E5E8D1D.html
 
  Openstack,
  - Manual
 - setup mysql connection config, rabbitmq/qpid connection config,
  keystone config,, neturon config, 
  http://docs.openstack.org/havana/install-guide/install/apt/content/nova-compute.html
 
  We have some deployment system including chef / puppet / packstack, TripleO
  - Chef/Puppet
 Setup chef node
 Add node/ apply role
  - Packstack
 -  Generate answer file

  https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Running_PackStack_Non-interactively.html
 -  packstack --install-hosts=192.168.1.0,192.168.1.1,192.168.1.2
  - TripleO
 - UnderCloud
 nova baremetal node add
 - OverCloud
 modify heat template
 
  For residence in this mailing list, Chef/Puppet or third party tool is
  easy to use.
  However,  I believe they are magical tools to use for many operators.
  Furthermore, these development system tend to take time to support
  newest release.
  so most of users, OpenStack release didn't means it can be usable for them.
 
  IMO, current way to manage configuration is the cause of this issue.
  If we manage everything via API, we can manage cluster by horizon.
  Then user can do go to horizon, just add host.
 
  It may take time to migrate config to API, so one easy step is to convert
  existing config for API resources. This is the purpose of this proposal.
 
 
  Hi Nachi. What you've described is the vision for TripleO and Tuskar. We
  do not lag the release. We run CD and will be in the gate real soon
  now so that TripleO should be able to fully deploy Icehouse on Icehouse
  release day.
 
 yeah, I'm big fan of TripleO and Tuskar.
 However, may be, it is difficult to let TripleO/Tuskar up-to-dated
 with newest releases.
 
 so let's say Nova and neutron added new function in 3rd release (I3
 for icehouse),
 there is no way to support it in TripleO/Tuskar.
 This is natural, because TripleO/Tuskar is 3rd party tool for nova or
 neutron. (same as Chef/Puppet).
 IMO, Tuskar API and existing projects(nova, neturon) should be
 integrated in design level.
 

This is false. TripleO is the official OpenStack deployment program. It
is not a 3rd party tool. Of course sometimes TripleO may lag the same
way Heat may lag other integrated release components. But that is one
reason we have release meetings, blueprints, and summits, so that projects
like Heat and TripleO can be aware of what is landing in i3 and at least
attempt to have some support in place ASAP.

Trying to make this happen inside the individual projects instead of
in projects dedicated to working well in this space is a recipe for
frustration, and I don't believe it would lead to any less lag. People
would just land features with FIXME: support config api.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Ian Wells
It's worth noting that this makes the scheduling a computationally hard
problem. The answer to that in this scheme is to reduce the number of
inputs to trivialise the problem.  It's going to be O(f(number of flavor
types requested, number of pci_stats pools)) and if you group appropriately
there shouldn't be an excessive number of pci_stats pools.  I am not going
to stand up and say this makes it achievable - and if it doesn't then I'm
not sure that anything would make overlapping flavors achievable - but I
think it gives us some hope.
-- 
Ian.


On 13 January 2014 19:27, Jiang, Yunhong yunhong.ji...@intel.com wrote:

  Hi, Robert, scheduler keep count based on pci_stats instead of the pci
 flavor.



 As stated by Ian at
 https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13455.html
 already, the flavor will only use the tags used by pci_stats.



 Thanks

 --jyh



 *From:* Robert Li (baoli) [mailto:ba...@cisco.com]
 *Sent:* Monday, January 13, 2014 8:22 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support



 As I have responded in the other email, and If I understand PCI flavor
 correctly, then the issue that we need to deal with is the overlapping
 issue. A simplest case of this overlapping is that you can define a flavor
 F1 as [vendor_id='v', product_id='p'], and a flavor F2 as [vendor_id = 'v']
 .  Let's assume that only the admin can define the flavors. It's not hard
 to see that a device can belong to the two different flavors in the same
 time. This introduces an issue in the scheduler. Suppose the scheduler
 (counts or stats based) maintains counts based on flavors (or the keys
 corresponding to the flavors). To request a device with the flavor F1,
  counts in F2 needs to be subtracted by one as well. There may be several
 ways to achieve that. But regardless, it introduces tremendous overhead in
 terms of system processing and administrative costs.



 What are the use cases for that? How practical are those use cases?



 thanks,

 Robert



 On 1/10/14 9:34 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:




 
  OK - so if this is good then I think the question is how we could change
 the 'pci_whitelist' parameter we have - which, as you say, should either
 *only* do whitelisting or be renamed - to allow us to add information.
  Yongli has something along those lines but it's not flexible and it
 distinguishes poorly between which bits are extra information and which
 bits are matching expressions (and it's still called pci_whitelist) - but
 even with those criticisms it's very close to what we're talking about.
  When we have that I think a lot of the rest of the arguments should simply
 resolve themselves.
 
 
 
  [yjiang5_1] The reason that not easy to find a flexible/distinguishable
 change to pci_whitelist is because it combined two things. So a
 stupid/naive solution in my head is, change it to VERY generic name,
 ‘pci_devices_information’,
 
  and change schema as an array of {‘devices_property’=regex exp,
 ‘group_name’ = ‘g1’} dictionary, and the device_property expression can be
 ‘address ==xxx, vendor_id == xxx’ (i.e. similar with current white list),
  and we can squeeze more into the “pci_devices_information” in future, like
 ‘network_information’ = xxx or “Neutron specific information” you required
 in previous mail.


 We're getting to the stage that an expression parser would be useful,
 annoyingly, but if we are going to try and squeeze it into JSON can I
 suggest:

 { match = { class = Acme inc. discombobulator }, info = { group = we
 like teh groups, volume = 11 } }

 
  All keys other than ‘device_property’ becomes extra information, i.e.
 software defined property. These extra information will be carried with the
 PCI devices,. Some implementation details, A)we can limit the acceptable
 keys, like we only support ‘group_name’, ‘network_id’, or we can accept any
 keys other than reserved (vendor_id, device_id etc) one.


 Not sure we have a good list of reserved keys at the moment, and with two
 dicts it isn't really necessary, I guess.  I would say that we have one
 match parser which looks something like this:

 # does this PCI device match the expression given?
 def match(expression, pci_details, extra_specs):
     for (k, v) in expression.items():
         if k.startswith('e.'):
             mv = extra_specs.get(k[2:])
         else:
             mv = pci_details.get(k)
         if mv is None or mv != v:
             return False
     return True

 Usable in this matching (where 'e.' just won't work) and also for flavor
 assignment (where e. will indeed match the extra values).

  B) if a device match ‘device_property’ in several entries, raise
 exception, or use the first one.

 Use the first one, I think.  It's easier, and potentially more useful.

  [yjiang5_1] Another thing need discussed is, as you pointed out, “we
 would need to add a config param on the control 

[openstack-dev] [savanna] client release 0.4.1

2014-01-13 Thread Sergey Lukjanov
Hi folks,

I'm planning to release python-savannaclient Jan 14/15 due to the number of
important fixes and improvements including, for example, a basic implementation
of the CLI. These changes are needed for updating savanna-dashboard and
integration tests, and for adding support for scenario tests in tempest, etc.

There are several open CLI-related changes and a Java EDP action support
patch [1] that should be included in this release.

Are there any thoughts about the things that should be done in the 0.4.1 client?

Thanks.

[1] Allow passing extra args to
JobExecutionsManager.create()https://review.openstack.org/#/c/66398/

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2014-01-13 Thread Nachi Ueno
Hi Anita

Location: I am about to sign the contract for Salle du Parc at 3625 Parc
 avenue, a room in a residence of McGill University.

^^^ Let me confirm the room number?




2014/1/7 Anita Kuno ante...@anteaya.info:
 On 01/08/2014 03:10 AM, Nachi Ueno wrote:
 Hi Anita

 Let's me join this session also.

 Nachi Ueno
 NTT i3
 Wonderful Nachi. We have spoken on irc to ensure you have your questions
 answered.

 It will be great to have another neutron-core at the code sprint.

 Thank you,
 Anita.

 2014/1/5 Anita Kuno ante...@anteaya.info:
 On 01/05/2014 03:42 AM, Sukhdev Kapur wrote:
 Folks,

 I finally got over my fear of weather and booked my flight and hotel for
 this sprint.

 I am relatively new to OpenStack community with a strong desire to learn
 and contribute.
 Having a strong desire to participate effectively is great.

 The difficulty for us is that we have already indicated that we need
 experienced participants at the code sprint. [0]

 Having long periods of silence and then simply announcing you have
 booked your flights makes things difficult since we have been in
 conversation with others about this for some time. I'm not saying don't
 come, I am saying that this now puts myself and Mark in a position of
 having to explain ourselves to others regarding consistency.

 Mark and I will address this, though I will need to discuss this with
 Mark to hear his thoughts and I am unable right now since I am at a
 conference all week. [1]

 Going forward, having regular conversations about items of this nature
 (irc is a great tool for this) is something I would like to see happen
 more often.

 You may have seen that Arista Testing has come alive and is voting on the
 newly submitted neutron patches. I have been busy putting together the
 framework, and learning the Jenkins/Gerrit interface.
 Yes. Can you respond on the Remove voting until your testing structure
 works thread please? This enables people who wish to respond to you on
 this point a place to conduct the conversation. It also preserves a
 history of the topic so that those searching the archives have all the
 relevant information in one place.

 Now, I have shifted
 my focus on Neutron/networking tempest tests. I notice some of these tests
 are failing on my setup. I have started to dig into these with the intent
 to understand them.
 That is a great place to begin. Thank you for taking interest and
 pursuing test bugs.


 In terms of this upcoming sprint, if you folks can give some pointers that
 will help me get better prepared and productive, that will be appreciated.
 We need folks attending the sprint who are familiar with offering
 patches to tempest. Seeing your name in this list would be a great
 indicator that you are at least able to offer a patch. [3]

 If you are able to focus this week on getting up to speed on Tempest and
 the Neutron Tempest process, then your attendance at the conference may
 possibly be effective both for yourself and for the rest of the
 participants.

 This wiki page is probably a good place to begin. [4]

 The etherpad tracking the Neutron Tempest team's progress is here. [5]

 Familiarizing yourself with the status of the conversation during the
 meetings will help as well, though it isn't as important in terms of
 being useful at the sprint as offering a tempest patch. Neutron meeting
 logs can be found here. [6]

 Also being available in channel will go a long way to fostering the kind
 of interactions which are constructive now and in the future. I don't
 see you in channel much, it would be great to see you more.

 Looking forward to meeting and working with you.
 And I you. Let's consider this an opportunity for greater participation
 with Neutron and having more conversations in irc is a great way to begin.

 Though I am not available in -neutron this week others are, so please
 announce when you are ready to work and hopefully someone will be
 keeping an eye out for you and offer you a hand.

 regards..
 -Sukhdev

 Thanks Sukhdev,
 Anita.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022918.html
 [1]
 http://eavesdrop.openstack.org/meetings/networking/2013/networking.2013-12-16-21.02.log.html
 timestamp 21:57:46
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/023228.html
 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z
 [4] https://wiki.openstack.org/wiki/Neutron/TempestAPITests
 [5] https://etherpad.openstack.org/p/icehouse-summit-qa-neutron
 [6] http://eavesdrop.openstack.org/meetings/networking/2013/





 On Fri, Dec 27, 2013 at 9:00 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/18/2013 04:17 PM, Anita Kuno wrote:
 Okay time for a recap.

 What: Neutron Tempest code sprint
 Where: Montreal, QC, Canada
 When: January 15, 16, 17 2014
 Location: I am about to sign the contract for Salle du Parc at 3625 Parc
 avenue, a room in a residence of McGill University.
 Time: 9am - 5pm

 I am 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Jiang, Yunhong
I'm not a network engineer and am always lost in the 802.1Qbh/802.1BR specs :(  So I'd 
wait for the requirements from Neutron. A quick check suggests my discussion with Ian 
meets the requirement already?

Thanks
--jyh

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 13, 2014 12:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jiang, Yunhong; He, Yongli; Robert Li (baoli) (ba...@cisco.com); Sandhya 
Dasu (sadasu) (sad...@cisco.com); ijw.ubu...@cack.org.uk; j...@johngarbutt.com
Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi,
After having a lot of discussions both on IRC and the mailing list, I would like to 
suggest defining basic use cases for PCI pass-through network support with an 
agreed list of limitations and assumptions, and implementing them.  By doing this 
Proof of Concept we will be able to deliver basic PCI pass-through network 
support in the Icehouse timeframe and better understand how to provide a complete 
solution, starting from tenant/admin API enhancement, enhancing nova-neutron 
communication and eventually providing a neutron plugin supporting PCI 
pass-through networking.
We can try to split tasks between the currently involved participants and bring up 
the basic case. Then we can enhance the implementation.
Having more knowledge and experience with neutron parts, I would like  to start 
working on neutron mechanism driver support.  I have already started to arrange 
the following blueprint doc based on everyone's ideas:
https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit

For the basic PCI pass-through networking case we can assume the following:

1.   Single provider network (PN1)

2.   White list of available SRIOV PCI devices for allocation as NIC for 
neutron networks on provider network  (PN1) is defined on each compute node

3.   Support directly assigned SRIOV PCI pass-through device as vNIC. (This 
will limit the number of tests)

4.   More 


If my suggestion seems reasonable to you, let's try to reach an agreement and 
split the work during our Monday IRC meeting.

BR,
Irena

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Saturday, January 11, 2014 8:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Comments with prefix [yjiang5_2], including the double confirmation.

I think we (you and me) are mostly on the same page; would you please give a 
summary, and then we can have the community, including Irena/Robert, check it. 
We need Cores to sponsor it. We should check with John to see if this is 
different from his mental picture, and we may need a neutron core (I assume 
Cisco has a bunch of Neutron cores :) ) to sponsor it?

And will anyone from Cisco be able to help with the implementation? After this long 
discussion, we are in the second half of the I release and I'm not sure if Yongli and I 
alone can finish it in the I release.

Thanks
--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Friday, January 10, 2014 6:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason that not easy to find a flexible/distinguishable 
 change to pci_whitelist is because it combined two things. So a stupid/naive 
 solution in my head is, change it to VERY generic name, 
 'pci_devices_information',

 and change schema as an array of {'devices_property'=regex exp, 'group_name' 
 = 'g1'} dictionary, and the device_property expression can be 'address ==xxx, 
 vendor_id == xxx' (i.e. similar with current white list),  and we can squeeze 
 more into the pci_devices_information in future, like 'network_information' 
 = xxx or Neutron specific information you required in previous mail.


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ match = { class = Acme inc. discombobulator }, info = { group = we like 
teh groups, volume = 11 } }

[yjiang5_2] Double confirm that 'match' is whitelist, and info is 'extra info', 
right?  Can the key be more meaningful, for example, 

Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Rick Jones

On 01/13/2014 07:32 AM, Jay Pipes wrote:

On Mon, 2014-01-13 at 10:23 +, Stephen Gran wrote:

Hi,

I don't think that's what's being asked for. Just that there be more
than the current check for '(isowner of network) or (shared)'

If the data point could be 'enabled for network' for a given tenant,
that would be more flexible.


Agreed, but I believe Mathieu is thinking more in terms of how such a
check could be implemented. What makes this problematic (at least in my
simplistic understanding of Neutron wiring) is that there is no
guarantee that tenant A's subnet does not overlap with tenant B's
subnet. Because Neutron allows overlapping subnets (since Neutron uses
network namespaces for isolating traffic), code would need to be put in
place that says, basically, if this network is shared between tenants,
then do not allow overlapping subnets, since a single, shared network
namespace will be needed that routes traffic between the tenants.

Or at least, that's what I *think* is part of the problem...


Are such checks actually necessary?  That is to say, unless it will 
completely fubar something internally in a database or something (versus 
just having confused routing), I would think that it would be but a 
nicety for the Neutron runtime to warn the user(s) they were about to try to 
connect overlapping subnets to the same router.  Nice to report it 
perhaps as a warning, but not an absolutely required bit of 
functionality to go forward.


If Tenant A and Tenant B were separate, recently merged companies, they 
would have to work-out, in advance, issues of address overlap before 
they could join their two networks.  At one level at least, we could 
consider their trying to do the same sort of thing within the context of 
Neutron as being the same.



FWIW, here is an intra-tenant attempt to assign two overlapping subnets 
to the same router.  Of course I'm probably playing with older bits in 
this particular sandbox and they won't reflect the current top-of-trunk:


$ nova list
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
| ID                                   | Name               | Status | Task State | Power State | Networks                      |
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
| d97a46ed-19eb-4a87-8536-eb9ca4ba3895 | overlap-net_lg     | ACTIVE | None       | Running     | overlap-net=192.168.123.2     |
| ad8d6c9c-9a4c-442e-aebf-fd30475b7675 | overlap-net0001_lg | ACTIVE | None       | Running     | overlap-net0001=192.168.123.2 |
+--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
$ neutron subnet-list
+--------------------------------------+--------------------+------------------+----------------------------------------------+
| id                                   | name               | cidr             | allocation_pools                             |
+--------------------------------------+--------------------+------------------+----------------------------------------------+
| d6015301-e5bf-4f1a-b3b3-5bde71a52496 | overlap-subnet0001 | 192.168.123.0/24 | {start: 192.168.123.2, end: 192.168.123.254} |
| faddcc32-7bb6-4cb2-862e-7738e5c54f6d | overlap-subnet     | 192.168.123.0/24 | {start: 192.168.123.2, end: 192.168.123.254} |
+--------------------------------------+--------------------+------------------+----------------------------------------------+
$ neutron router-create overlap-router0001
Created a new router:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| external_gateway_info |  |
| id| 88339018-d286-45ec-b2d2-ccb78ae78837 |
| name  | overlap-router0001   |
| status| ACTIVE   |
| tenant_id | 57367642563150   |
+---+--+
$ neutron router-interface-add overlap-router0001 overlap-subnet
Added interface b637cb32-c33a-4565-a6f3-b7ea22a02be0 to router 
overlap-router0001.

$ neutron router-interface-add overlap-router0001 overlap-subnet0001
400-{u'QuantumError': u'Bad router request: Cidr 192.168.123.0/24 of 
subnet d6015301-e5bf-4f1a-b3b3-5bde71a52496 overlaps with cidr 
192.168.123.0/24 of subnet faddcc32-7bb6-4cb2-862e-7738e5c54f6d'}


rick jones

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Alternative time for the Heat/orchestration meeting

2014-01-13 Thread Steve Baker
On 19/12/13 14:51, Steve Baker wrote:
 The Heat meeting is currently weekly on Wednesdays at 2000 UTC. This
 is not very friendly for contributors in some timezones, specifically
 Asia.

 In today's Heat meeting we decided to try having every second meeting
 at an alternate time. I've set up this poll to gather opinions on some
 (similar) time options.
 http://doodle.com/rdrb7gpnb2wydbmg

 In this case Europe has drawn the short straw, since we have a large
 number of US contributors and a PTL in New Zealand (me).

 The meeting on January the 8th will be at the usual time, and the one
 after that can be at the alternate time which we end up agreeing to.


We're still looking for the least-worst alternate Heat meeting time. The
previous poll has been updated with some new options, so if you would
like to sometimes attend Heat meetings then please update your poll
answer here:

http://doodle.com/rdrb7gpnb2wydbmg

This week's Heat meeting will be at the same old time and the intention
is to have the first alternate-time meeting on the 23rd.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Jiang, Yunhong
Ian, I'm not sure I get your question. Why should the scheduler get the number of 
flavor types requested? The scheduler will only translate the PCI flavor to the 
pci property match requirement like it does now (either vendor_id, device_id, 
or an item in extra_info), then match the translated pci flavor, i.e. the pci 
requests, to the pci stats.
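
A toy illustration of that flow, with made-up pool contents and counts (the
real grouping lives in nova's pci_stats code):

# pools reported by a compute node, grouped on the pci_stats keys
pci_stats_pools = [
    {'vendor_id': '8086', 'product_id': '10fb', 'count': 4},
    {'vendor_id': '8086', 'product_id': '1520', 'count': 2},
]

# a flavor such as F1 translated into a property-match requirement
request = {'vendor_id': '8086', 'product_id': '10fb'}

def free_devices(pools, spec):
    # sum the devices in every pool that satisfies the request
    return sum(p['count'] for p in pools
               if all(p.get(k) == v for k, v in spec.items()))

print(free_devices(pci_stats_pools, request))  # 4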

Thanks
--jyh

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, January 13, 2014 10:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

It's worth noting that this makes the scheduling a computationally hard 
problem. The answer to that in this scheme is to reduce the number of inputs to 
trivialise the problem.  It's going to be O(f(number of flavor types requested, 
number of pci_stats pools)) and if you group appropriately there shouldn't be 
an excessive number of pci_stats pools.  I am not going to stand up and say 
this makes it achievable - and if it doesn't them I'm not sure that anything 
would make overlapping flavors achievable - but I think it gives us some hope.
--
Ian.

On 13 January 2014 19:27, Jiang, Yunhong 
yunhong.ji...@intel.commailto:yunhong.ji...@intel.com wrote:
Hi, Robert, scheduler keep count based on pci_stats instead of the pci flavor.

As stated by Ian at 
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13455.html 
already, the flavor will only use the tags used by pci_stats.

Thanks
--jyh

From: Robert Li (baoli) [mailto:ba...@cisco.commailto:ba...@cisco.com]
Sent: Monday, January 13, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

As I have responded in the other email, and If I understand PCI flavor 
correctly, then the issue that we need to deal with is the overlapping issue. A 
simplest case of this overlapping is that you can define a flavor F1 as 
[vendor_id='v', product_id='p'], and a flavor F2 as [vendor_id = 'v'] .  Let's 
assume that only the admin can define the flavors. It's not hard to see that a 
device can belong to the two different flavors in the same time. This 
introduces an issue in the scheduler. Suppose the scheduler (counts or stats 
based) maintains counts based on flavors (or the keys corresponding to the 
flavors). To request a device with the flavor F1,  counts in F2 needs to be 
subtracted by one as well. There may be several ways to achieve that. But 
regardless, it introduces tremendous overhead in terms of system processing and 
administrative costs.

What are the use cases for that? How practical are those use cases?

thanks,
Robert

On 1/10/14 9:34 PM, Ian Wells 
ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk wrote:



 OK - so if this is good then I think the question is how we could change the 
 'pci_whitelist' parameter we have - which, as you say, should either *only* 
 do whitelisting or be renamed - to allow us to add information.  Yongli has 
 something along those lines but it's not flexible and it distinguishes poorly 
 between which bits are extra information and which bits are matching 
 expressions (and it's still called pci_whitelist) - but even with those 
 criticisms it's very close to what we're talking about.  When we have that I 
 think a lot of the rest of the arguments should simply resolve themselves.



 [yjiang5_1] The reason that not easy to find a flexible/distinguishable 
 change to pci_whitelist is because it combined two things. So a stupid/naive 
 solution in my head is, change it to VERY generic name, 
 'pci_devices_information',

 and change schema as an array of {'devices_property'=regex exp, 'group_name' 
 = 'g1'} dictionary, and the device_property expression can be 'address ==xxx, 
 vendor_id == xxx' (i.e. similar with current white list),  and we can squeeze 
 more into the pci_devices_information in future, like 'network_information' 
 = xxx or Neutron specific information you required in previous mail.


We're getting to the stage that an expression parser would be useful, 
annoyingly, but if we are going to try and squeeze it into JSON can I suggest:

{ match = { class = Acme inc. discombobulator }, info = { group = we like 
teh groups, volume = 11 } }


 All keys other than 'device_property' becomes extra information, i.e. 
 software defined property. These extra information will be carried with the 
 PCI devices,. Some implementation details, A)we can limit the acceptable 
 keys, like we only support 'group_name', 'network_id', or we can accept any 
 keys other than reserved (vendor_id, device_id etc) one.


Not sure we have a good list of reserved keys at the moment, and with two dicts 
it isn't really necessary, I guess.  I would say that we have one match parser 
which looks something like this:

# does this PCI device match the expression given?
def match(expression, pci_details, extra_specs):
   for (k, v) in expression:
 

Re: [openstack-dev] [infra] javascript templating library choice for status pages

2014-01-13 Thread Michael Krotscheck

On 01/13/2014 05:05 AM, Sean Dague wrote:
Honestly, I've not done enough large scale js projects to know whether 
we'd consider status.js to be big or not. I just know it's definitely 
getting too big for += all the html together and doing document.writes.

Yes indeed.
I guess the real question I had is is there an incremental path 
towards any of the other frameworks? I can see how to incrementally 
bring in templates, but again my personal lack of experience on these 
others means I don't know.
Short answer: Yes, and the incremental path will be more/less difficult 
depending on which application framework you want to move to. [/captain 
obvious]


Long answer: Pretty much all the template frameworks out there use 
{{mustache-style}} syntax. Some application frameworks use handlebars 
directly (ember), for some it's pluggable (knockout), others have their 
own framework with similar markup (angular). From what I saw of 
status.js, it's really not complicated enough to preclude a refactor to 
any of the above.


Michael
Also, what Monty said.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-13 Thread Ian Wells
If there are N flavor types there are N match expressions so I think it's
pretty much equivalent in terms of complexity.  It looks like some sort of
packing problem to me, trying to fit N objects into M boxes, hence my
statement that it's not going to be easy, but that's just a gut feeling -
some of the matches can be vague, such as only the vendor ID or a vendor
and two device types, so it's not as simple as one flavor matching one
stats row.
-- 
Ian.


On 13 January 2014 21:00, Jiang, Yunhong yunhong.ji...@intel.com wrote:

  Ian, not sure if I get your question. Why should scheduler get the
 number of flavor types requested? The scheduler will only translate the PCI
 flavor to the pci property match requirement like it does now, (either
 vendor_id, device_id, or item in extra_info), then match the translated pci
 flavor, i.e. pci requests, to the pci stats.



 Thanks

 --jyh



 *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
 *Sent:* Monday, January 13, 2014 10:57 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support



 It's worth noting that this makes the scheduling a computationally hard
 problem. The answer to that in this scheme is to reduce the number of
 inputs to trivialise the problem.  It's going to be O(f(number of flavor
 types requested, number of pci_stats pools)) and if you group appropriately
 there shouldn't be an excessive number of pci_stats pools.  I am not going
 to stand up and say this makes it achievable - and if it doesn't them I'm
 not sure that anything would make overlapping flavors achievable - but I
 think it gives us some hope.
 --

 Ian.



 On 13 January 2014 19:27, Jiang, Yunhong yunhong.ji...@intel.com wrote:

 Hi, Robert, scheduler keep count based on pci_stats instead of the pci
 flavor.



 As stated by Ian at
 https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg13455.html
 already, the flavor will only use the tags used by pci_stats.



 Thanks

 --jyh



 *From:* Robert Li (baoli) [mailto:ba...@cisco.com]
 *Sent:* Monday, January 13, 2014 8:22 AM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support



 As I have responded in the other email, and If I understand PCI flavor
 correctly, then the issue that we need to deal with is the overlapping
 issue. A simplest case of this overlapping is that you can define a flavor
 F1 as [vendor_id='v', product_id='p'], and a flavor F2 as [vendor_id = 'v']
 .  Let's assume that only the admin can define the flavors. It's not hard
 to see that a device can belong to the two different flavors in the same
 time. This introduces an issue in the scheduler. Suppose the scheduler
 (counts or stats based) maintains counts based on flavors (or the keys
 corresponding to the flavors). To request a device with the flavor F1,
  counts in F2 needs to be subtracted by one as well. There may be several
 ways to achieve that. But regardless, it introduces tremendous overhead in
 terms of system processing and administrative costs.



 What are the use cases for that? How practical are those use cases?



 thanks,

 Robert



 On 1/10/14 9:34 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:




 
  OK - so if this is good then I think the question is how we could change
 the 'pci_whitelist' parameter we have - which, as you say, should either
 *only* do whitelisting or be renamed - to allow us to add information.
  Yongli has something along those lines but it's not flexible and it
 distinguishes poorly between which bits are extra information and which
 bits are matching expressions (and it's still called pci_whitelist) - but
 even with those criticisms it's very close to what we're talking about.
  When we have that I think a lot of the rest of the arguments should simply
 resolve themselves.
 
 
 
  [yjiang5_1] The reason it's not easy to find a flexible/distinguishable
 change to pci_whitelist is that it combines two things. So a stupid/naive
 solution in my head is: change it to a VERY generic name,
 ‘pci_devices_information’,
 
 and change the schema to an array of {‘devices_property’ = regex exp,
 ‘group_name’ = ‘g1’} dictionaries. The device_property expression can be
 ‘address == xxx, vendor_id == xxx’ (i.e. similar to the current whitelist),
 and we can squeeze more into “pci_devices_information” in future, like
 ‘network_information’ = xxx or the “Neutron specific information” you
 required in the previous mail.


 We're getting to the stage where an expression parser would be useful,
 annoyingly, but if we are going to try and squeeze it into JSON, can I
 suggest:

 { match = { class = Acme inc. discombobulator }, info = { group = we
 like teh groups, volume = 11 } }
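
For what it's worth, here's a purely illustrative sketch of how such entries
might look as data, expressed as a Python structure just for readability (the
key names and values are hypothetical, not an agreed schema), following the
match/info split above:

pci_devices_information = [
    # Match by vendor/product; everything under 'info' is software-defined.
    {'match': {'vendor_id': '8086', 'product_id': '10fb'},
     'info': {'group': 'fast-nics'}},
    # Match by address wildcard and attach Neutron-specific information.
    {'match': {'address': '0000:06:*'},
     'info': {'group': 'sriov', 'physical_network': 'physnet1'}},
]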

 
  All keys other than ‘device_property’ become extra information, i.e.
 software-defined properties. This extra information will be carried with the
 PCI 

[openstack-dev] [infra] hacking checks for infra projects

2014-01-13 Thread Sergey Lukjanov
Hi folks,

I think we can enable hacking checks for the Python infra projects to help
them write better code. We should probably enable only a specific subset of
the hacking checks instead of all of them.

TL;DR

I'd like to discuss whether hacking should be used for the Python infra
projects. Currently I have a couple of them in mind as examples - Zuul and
Nodepool. The question was raised by James E. Blair when I created some
patches for Zuul [1], so, thank you, James, for catching this.

There are some pros and cons to enabling hacking for them. As big pros I see
following the common OpenStack processes and the additional checks that will
help us write better code. On the other hand, these projects are not so big
as to need such strict checks.

My personal opinion is that it'd be better to enable hacking checks
(probably not all of them) for such projects, and I'm volunteering to work
on it and to help coordinate efforts if needed.

BTW I've already done some of this work for Zuul in [1]; it's at the head of
the CR chain.

So, please, share your thoughts.

Thanks.

[1] https://review.openstack.org/#/c/63921/

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-01-13 Thread Doug Hellmann
On Mon, Jan 13, 2014 at 1:16 PM, Robert Myers myer0...@gmail.com wrote:

 We could always use relative imports in oslo :) Then you could put it
 wherever you wanted without needing to rewrite the import statements.


That may be a good idea, but doesn't really solve the problem at hand.

Doug





 On Mon, Jan 13, 2014 at 11:07 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:

 [resurrecting an old thread]


 On Wed, Nov 27, 2013 at 6:26 AM, Flavio Percoco fla...@redhat.comwrote:

 On 27/11/13 10:59 +, Mark McLoughlin wrote:

 On Wed, 2013-11-27 at 11:50 +0100, Flavio Percoco wrote:

 On 26/11/13 22:54 +, Mark McLoughlin wrote:
 On Fri, 2013-11-22 at 12:39 -0500, Doug Hellmann wrote:
  On Fri, Nov 22, 2013 at 4:11 AM, Flavio Percoco fla...@redhat.com
 wrote:
  1) Store the commit sha from which the module was copied from.
  Every project using oslo, currently keeps the list of modules
 it
  is using in `openstack-modules.conf` in a `module` parameter.
 We
  could store, along with the module name, the sha of the commit
 it
  was last synced from:
  
  module=log,commit
  
  or
  
  module=log
  log=commit
  
 
  The second form will be easier to manage. Humans edit the module
 field and
  the script will edit the others.
 
 How about adding it as a comment at the end of the python files
 themselves and leaving openstack-common.conf for human editing?

 I think having the commit sha will give us a starting point from which
 we could start updating that module.


 Sure, my only point was about where the commit sha comes from - i.e.
 whether it's from a comment at the end of the python module itself or in
 openstack-common.conf


 And, indeed you said 'at the end of the python files'. Don't ask me
 how the heck I misread that.

 The benefit I see from having them in the openstack-common.conf is
 that we can register a `StrOpt` for each object dynamically and get
 the sha using oslo.config. If we put it as a comment at the end of the
 python file, we'll have to read it and 'parse' it, I guess.
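
A rough sketch of what that could look like (purely illustrative; it assumes
the proposed 'module=log' plus 'log=<sha>' layout in openstack-common.conf):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.MultiStrOpt('module', default=[])])
CONF(args=[], default_config_files=['openstack-common.conf'])

# Dynamically register one StrOpt per synced module to hold its sha.
for name in CONF.module:
    CONF.register_opts([cfg.StrOpt(name)])

shas = dict((name, getattr(CONF, name)) for name in CONF.module)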



  It will mostly help with
 getting a diff for that module and the short commit messages where it
 was modified.

 Here's a pseudo-buggy-algorithm for the update process:

 (1) Get current sha for $module
 (2) Get list of new commits for $module
 (3) for each commit of $module:
 (3.1) for each modified_module in $commit
 (3.1.1) Update those modules up to $commit
 (1)(modified_module)
 (3.2) Copy the new file
 (3.3) Update openstack-common with the latest sha

 This trusts the granularity and isolation of the patches proposed in
  oslo-incubator. However, in cases like 'remove vim mode lines' it'll
  fail by assuming that updating every module is necessary - which is true
  from a git standpoint.
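
For step (2), a rough sketch of how the list of new commits could be
gathered (pure illustration, not the real update.py; it assumes the recorded
sha is already known):

import subprocess

def new_commits(incubator_dir, module_path, since_sha):
    # Oldest-first list of oslo-incubator commits touching this module
    # since the sha recorded in openstack-common.conf.
    out = subprocess.check_output(
        ['git', 'log', '--reverse', '--pretty=format:%H',
         '%s..HEAD' % since_sha, '--', module_path],
        cwd=incubator_dir)
    return [c for c in out.splitlines() if c]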


 This is another variant of the kind of inter-module dependency smarts
 that update.py already has ... I'd be inclined to just omit those smarts
 and just require the caller to explicitly list the modules they want to
 include.

  Maybe update.py could include some reporting to help with that choice,
  like "module foo depends on modules bar and blaa, maybe you want to
  include them too" and "commit XXX modified module foo, but also modules
  bar and blaa, maybe you want to include them too".


  But, if we get to the point of suggesting that the user update module
  foo because it was modified in commit XXX, we'd have everything needed
  to make it recursive and update those modules as well.

  I agree with you on making it explicit, though. What about making it
  interactive then? update.py could ask users if they want to update
  module foo because it was modified in commit XXX and do it right away,
  which is not very different from updating module foo, printing a report
  and letting the user choose afterwards.

 (/me feels like Gollum now)

  I prefer the interactive way, though; at least it doesn't require the
  user to run update several times, once for each module. We could also add
  a `--no-stop` flag that does exactly what you suggested.


 I spent some time trying to think through how we could improve the update
 script for [1], and I'm stumped on how to figure out *accurately* what
 state the project repositories are in today.

 We can't just compute the hash of the modules in the project receiving
 copies, and then look for them in the oslo-incubator repo, because we
 modify the files as we copy them out (to update the import statements and
 replace oslo with the receiving project name in some places like config
 option defaults).

 We could undo those changes before computing the hash, but the problem is
 further complicated because syncs are not done for all of the modules
 together. The common code in a project doesn't move forward in step with
 the oslo-incubator repository as a whole. For example, sometimes only the
 openstack/common/log.py module is copied and not all of openstack/common.
 So log.py might be newer than a 

[openstack-dev] Meeting time congestion

2014-01-13 Thread Lyle, David
With all the warranted meeting time shuffling that has been happening recently, 
and the addition of so many projects and sub-teams, the meeting calendar for 
#openstack-meeting and #openstack-meeting-alt [1] is relatively full.  So 
recently, when trying to move the Horizon meeting time, the poll-determined 
slot was unavailable.

Should we run logged meetings from individual team rooms like 
#openstack-horizon?  The advantage is that it's available. The downside is that 
fewer people from other teams linger in #openstack-horizon than in 
#openstack-meeting.

Should another meeting room be added?  This solution only scales so far.

Thoughts?

David

[1] 
https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Jay Pipes
On Mon, 2014-01-13 at 11:47 -0800, Rick Jones wrote:
 On 01/13/2014 07:32 AM, Jay Pipes wrote:
  On Mon, 2014-01-13 at 10:23 +, Stephen Gran wrote:
  Hi,
 
  I don't think that's what's being asked for. Just that there be more
  than the current check for '(isowner of network) or (shared)'
 
  If the data point could be 'enabled for network' for a given tenant,
  that would be more flexible.
 
  Agreed, but I believe Mathieu is thinking more in terms of how such a
  check could be implemented. What makes this problematic (at least in my
  simplistic understanding of Neutron wiring) is that there is no
  guarantee that tenant A's subnet does not overlap with tenant B's
  subnet. Because Neutron allows overlapping subnets (since Neutron uses
  network namespaces for isolating traffic), code would need to be put in
  place that says, basically, if this network is shared between tenants,
  then do not allow overlapping subnets, since a single, shared network
  namespace will be needed that routes traffic between the tenants.
 
  Or at least, that's what I *think* is part of the problem...
 
  Are such checks actually necessary?  That is to say, unless it will 
  completely fubar something internally in a database or something (versus 
  just having confused routing), I would think it would be but a 
  nicety for the Neutron runtime to warn the user(s) that they were about to 
  connect overlapping subnets to the same router.  Nice to report it 
  perhaps as a warning, but not an absolutely required bit of 
  functionality to go forward.

Sure, good points.

 If Tenant A and Tenant B were separate, recently merged companies, they 
  would have to work out, in advance, issues of address overlap before 
 they could join their two networks.  At one level at least, we could 
 consider their trying to do the same sort of thing within the context of 
 Neutron as being the same.
 
 FWIW, here is an intra-tenant attempt to assign two overlapping subnets 
 to the same router.  Of course I'm probably playing with older bits in 
 this particular sandbox and they won't reflect the current top-of-trunk:
 
 $ nova list
 +--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
 | ID                                   | Name               | Status | Task State | Power State | Networks                      |
 +--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
 | d97a46ed-19eb-4a87-8536-eb9ca4ba3895 | overlap-net_lg     | ACTIVE | None       | Running     | overlap-net=192.168.123.2     |
 | ad8d6c9c-9a4c-442e-aebf-fd30475b7675 | overlap-net0001_lg | ACTIVE | None       | Running     | overlap-net0001=192.168.123.2 |
 +--------------------------------------+--------------------+--------+------------+-------------+-------------------------------+
 $ neutron subnet-list
 +--------------------------------------+--------------------+------------------+------------------------------------------------------+
 | id                                   | name               | cidr             | allocation_pools                                     |
 +--------------------------------------+--------------------+------------------+------------------------------------------------------+
 | d6015301-e5bf-4f1a-b3b3-5bde71a52496 | overlap-subnet0001 | 192.168.123.0/24 | {"start": "192.168.123.2", "end": "192.168.123.254"} |
 | faddcc32-7bb6-4cb2-862e-7738e5c54f6d | overlap-subnet     | 192.168.123.0/24 | {"start": "192.168.123.2", "end": "192.168.123.254"} |
 +--------------------------------------+--------------------+------------------+------------------------------------------------------+
 $ neutron router-create overlap-router0001
 Created a new router:
 +-----------------------+--------------------------------------+
 | Field                 | Value                                |
 +-----------------------+--------------------------------------+
 | admin_state_up        | True                                 |
 | external_gateway_info |                                      |
 | id                    | 88339018-d286-45ec-b2d2-ccb78ae78837 |
 | name                  | overlap-router0001                   |
 | status                | ACTIVE                               |
 | tenant_id             | 57367642563150                       |
 +-----------------------+--------------------------------------+
 $ neutron router-interface-add overlap-router0001 overlap-subnet
 Added interface b637cb32-c33a-4565-a6f3-b7ea22a02be0 to router 
 overlap-router0001.
 $ neutron router-interface-add overlap-router0001 overlap-subnet0001
 400-{u'QuantumError': u'Bad router request: Cidr 192.168.123.0/24 of 
 subnet d6015301-e5bf-4f1a-b3b3-5bde71a52496 overlaps with cidr 
 192.168.123.0/24 of subnet faddcc32-7bb6-4cb2-862e-7738e5c54f6d'}

OK, so it looks like the plumbing is already in place, and 

Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-01-13 Thread Kevin L. Mitchell
On Mon, 2014-01-13 at 12:07 -0500, Doug Hellmann wrote:
 We can't just compute the hash of the modules in the project receiving
 copies, and then look for them in the oslo-incubator repo, because we
 modify the files as we copy them out (to update the import statements
 and replace oslo with the receiving project name in some places like
 config option defaults). 

Why not embed the hash into the files as we copy them out, maybe as a
specially formatted comment at the end of the file?
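
Something like this minimal sketch, say (the marker string is made up,
purely illustrative):

SHA_MARKER = '# synced-from-oslo-incubator: '

def embed_sha(path, sha):
    # Append the sha as a specially formatted trailing comment.
    with open(path, 'a') as f:
        f.write('\n%s%s\n' % (SHA_MARKER, sha))

def read_sha(path):
    # Return the recorded sha, or None if the file was never stamped.
    with open(path) as f:
        for line in f:
            if line.startswith(SHA_MARKER):
                return line[len(SHA_MARKER):].strip()
    return None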

That said, I've always thought that oslo-incubator should be a library
we use directly, rather than a repository we copy out of.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

