[openstack-dev] Can instances and networks be created via the RESTful API?

2013-06-21 Thread SDN Project
Dear Developers

I have one question.
Is it possible to create an instance and a network via the RESTful API?
I would like to know how to create an instance and a network using the RESTful API.

Figure.
[External Linux-based Server] -- OpenStack RESTful API --> [OpenStack Controller Server]
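
For reference, the flow over the API looks roughly like this. This is only a
sketch using the Python requests library; host names, ports, credentials and
the image/flavor IDs are placeholders, not real values:

    import json
    import requests

    KEYSTONE = 'http://controller:5000/v2.0'
    QUANTUM = 'http://controller:9696/v2.0'
    NOVA = 'http://controller:8774/v2'
    HEADERS = {'Content-Type': 'application/json'}

    # 1. Authenticate against Keystone to obtain a token.
    body = {'auth': {'tenantName': 'demo',
                     'passwordCredentials': {'username': 'demo',
                                             'password': 'secret'}}}
    access = requests.post(KEYSTONE + '/tokens', data=json.dumps(body),
                           headers=HEADERS).json()['access']
    HEADERS['X-Auth-Token'] = access['token']['id']
    tenant_id = access['token']['tenant']['id']

    # 2. Create a network through the Quantum API.
    net = requests.post(QUANTUM + '/networks',
                        data=json.dumps({'network': {'name': 'net1'}}),
                        headers=HEADERS).json()['network']

    # 3. Boot an instance through the Nova API ('<image-uuid>' is a
    #    placeholder for a real Glance image ID).
    server = {'server': {'name': 'vm1',
                         'imageRef': '<image-uuid>',
                         'flavorRef': '1'}}
    requests.post('%s/%s/servers' % (NOVA, tenant_id),
                  data=json.dumps(server), headers=HEADERS)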

If you have ideas, please reply to me.

Thank you very much.

Best regards,
TaeHwan Koo


This E-mail may contain confidential information and/or copyright material. 
This email is intended for the use of the addressee only. If you receive this 
email by mistake, please either delete it without reproducing, distributing or 
retaining copies thereof or notify the sender immediately.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Daniel P. Berrange
On Thu, Jun 20, 2013 at 12:48:16PM -0400, Russell Bryant wrote:
 On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
  Hello, I created a blueprint for the implementation of:
  
  A tool for pinning automatically each running virtual CPU to a physical
  one in the most efficient way, balancing load across sockets/cores and
  maximizing cache sharing/minimizing cache misses. Ideally able to be run
  on-demand, as a periodic job, or be triggered by events on the host (vm
  spawn/destroy).
  
  Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
  
  Any input appreciated!
 
 I'm actually surprised to see a new tool for this kind of thing.
 
 Have you seen numad?

The approach used by the 'pinhead' tool described in the blueprint seems
to be pretty much equivalent to what 'numad' already provides
for libvirt KVM and LXC guests.

NB, numad is actually a standalone program for optimizing NUMA
placement of any processes on a server. Libvirt talks to it when
starting a guest to request info on where best to place the guest.
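
For anyone curious what such pinning looks like at the libvirt level, here is
a minimal sketch via the Python bindings; the domain name and CPU numbers are
illustrative, and a tool like numad or pinhead would compute the placement
rather than hardcoding it:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # illustrative domain name

    # getInfo() returns [model, memory, cpus, mhz, nodes, sockets, cores,
    # threads]; index 2 is the number of host CPUs.
    ncpus = conn.getInfo()[2]

    # Pin vCPU 0 to physical CPU 2: the cpumap is one boolean per host CPU.
    cpumap = tuple(i == 2 for i in range(ncpus))
    dom.pinVcpu(0, cpumap)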

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Removing OS_AUTH_SYSTEM

2013-06-21 Thread Chmouel Boudjnah
Hello,

Some time ago we discussed removing OS_AUTH_SYSTEM from
novaclient, since it was implemented for RAX, and these days RAX has
moved to pyrax.

Since I last looked into this, it seems there have been some
updates to it:

https://github.com/openstack/python-novaclient/blob/master/novaclient/auth_plugin.py

This made me wonder whether it is needed by other people, and why.

This is some preliminary work toward moving novaclient to use
keystoneclient instead of implementing its own[1] client for keystone.
If the OS_AUTH_SYSTEM feature is really needed[2], we should then
move it to keystoneclient.
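
For context, auth_plugin.py discovers external auth systems through
setuptools entry points, roughly as sketched below; the entry point group
name is from my reading of that file, so double-check it before relying
on it:

    import pkg_resources

    # Roughly what novaclient does when OS_AUTH_SYSTEM is set: look up a
    # plugin registered under the 'openstack.client.auth_plugin' group.
    def load_plugin(auth_system):
        group = 'openstack.client.auth_plugin'
        for ep in pkg_resources.iter_entry_points(group):
            if ep.name == auth_system:
                return ep.load()()
        raise RuntimeError('unknown auth system: %s' % auth_system)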

Thoughts?

Chmouel.

[1] a weird one with a bunch of obsolete stuff, I should add.
[2] and IMO this goes against the idea of one true open cloud.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hello Sean, all,

Currently there are ~30 test classes in the DB API tests, containing ~370 test
cases. setUpClass()/tearDownClass() would definitely be an improvement, but
applying all DB schema migrations for MySQL 30 times is going to take a
long time...

Thanks,
Roman


On Fri, Jun 21, 2013 at 3:02 PM, Sean Dague s...@dague.net wrote:

 On 06/21/2013 07:40 AM, Roman Podolyaka wrote:

 Hi, all!

 In Nova we've got a DB access layer known as DB API and tests for it.
 Currently, those tests are run only for SQLite in-memory DB, which is
 great for speed, but doesn't allow us to spot backend-specific errors.

 There is a blueprint
 (https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
 by Boris Pavlovic, whose goal is to run the DB API tests on all DB
 backends (e.g. SQLite, MySQL and PostgreSQL). Recently, I've been
 working on an implementation of this BP
 (https://review.openstack.org/#/c/33236/).

 The chosen approach for implementing this is best explained by going
 through a list of problems which must be solved:

 1. Tests should be executed concurrently by testr.

 testr creates a few worker processes, each running a portion of the test
 cases. When the SQLite in-memory DB is used for testing, each of those
 processes has its own DB in its address space, so no race conditions
 are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
 would fail due to various race conditions. Thus, we must create a
 separate DB for each of the test running processes and drop them when all
 tests end.

 The question is where we should create/drop those DBs. There are a few
 possible places in our code:
 1) setUp()/tearDown() methods of test cases. These are executed for
 each test case (there are ~370 tests in test_db_api), so it would be a
 bad idea to create/apply migrations/drop the DB 370 times if MySQL or
 PostgreSQL is used instead of the SQLite in-memory DB
 2) testr supports creation of isolated test environments
 (https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
 Long story short: we can specify commands to execute before tests are
 run, after tests have ended, and how to run tests
 3) module/package level setUp()/tearDown(), but these are probably
 supported only by nose


 How many classes are we talking about? We're actually going after a
 similar problem in Tempest, where setUp isn't cheap, so Matt Treinish has an
 experimental patch to testr which allows class level partitioning instead.
 Then you can use setupClass / teardownClass for expensive resource setup.


  So:
 1) before tests are run, a few test DBs are created (the number of
 created DBs is equal to the concurrency level used)
 2) for each test running process an env variable, containing the
 connection string of its DB, is set;
 3) after all test running processes have ended, the created DBs are
 dropped (a rough sketch follows below).
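
A rough sketch of steps 1-2; the connection URL, database naming scheme and
environment variable are illustrative, not the actual implementation:

    import os
    import sqlalchemy

    CONCURRENCY = 4   # must match the testr concurrency level
    admin = sqlalchemy.create_engine(
        'mysql://openstack_citest:secret@localhost/mysql')

    # Step 1: one database per worker, created before the run starts.
    for worker in range(CONCURRENCY):
        admin.execute('CREATE DATABASE nova_worker_%d' % worker)

    # Step 2: each test-running process picks up its own connection
    # string, e.g. via an environment variable set per worker.
    os.environ['NOVA_TEST_DB_URL'] = (
        'mysql://openstack_citest:secret@localhost/nova_worker_0')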

 2. Tests cleanup should be fast.

 For the SQLite in-memory DB we use the create DB/apply migrations/run test/drop
 DB pattern, but that would be too slow for running tests on MySQL or
 PostgreSQL.

 Another option would be to create the DB only once for each of the test running
 processes, apply DB migrations, and then run each test case within a DB
 transaction which is rolled back after the test ends. Combined with
 something like PostgreSQL's fsync = off option, this approach works
 really fast (on my PC it takes ~5 s to run the DB API tests on SQLite and
 ~10 s on PostgreSQL).
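
A minimal sketch of that pattern with SQLAlchemy and testtools; the
connection URL is illustrative, and the schema is assumed to have been
migrated once per process beforehand:

    import sqlalchemy
    from sqlalchemy import orm
    import testtools

    # Created once per test process; URL is a placeholder.
    ENGINE = sqlalchemy.create_engine(
        'postgresql://openstack_citest:secret@localhost/nova_test')

    class DBAPITestCase(testtools.TestCase):
        def setUp(self):
            super(DBAPITestCase, self).setUp()
            self.conn = ENGINE.connect()
            self.trans = self.conn.begin()        # wrap the whole test
            self.session = orm.Session(bind=self.conn)

        def tearDown(self):
            self.session.close()
            self.trans.rollback()   # discard everything the test wrote
            self.conn.close()
            super(DBAPITestCase, self).tearDown()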


 I like the idea of creating a transaction in setup and triggering
 rollback in teardown; that's pretty clever.


  3. Tests should be easy to run for developers as well as for Jenkins.

 DB API tests are the only tests which should be run on different
 backends. All other test cases can be run on SQLite. The convenient way
 to do this is to create a separate tox env, running only DB API tests.
 Developers specify the DB connection string which effectively defines
 the backend that should be used for running tests.

 I'd rather not run those tests 'opportunistically' in py26 and py27 as
 we do for migrations, because they are going to be broken for some time
 (most problems are described here:
 https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
 So it would be really nice to have a separate non-voting gate test.
 So it would be really nice to have a separate non-voting gate test.


 Separate tox env is the right approach IMHO, that would let it run
 isolated and non-voting until we get to the bottom of the issues. For
 simplicity I'd still use the opportunistic db user / pass, as that will
 mean it could run upstream today.

 -Sean

 --
 Sean Dague
 

[openstack-dev] Consolidate CLI Authentication

2013-06-21 Thread Ciocari, Juliano (Brazil RD-ECL)
Hi,

I have some questions regarding the Consolidate CLI Authentication 
(https://etherpad.openstack.org/keystoneclient-cli-auth and 
https://review.openstack.org/#/c/21942/).

It looks like the code for the keystone client is almost ready to merge. What 
are the plans for the other clients (nova, glance, etc.) to use this code (if 
any)? Is there any related change expected in horizon?

Thanks,
- Juliano



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding 'rm' to compute filter

2013-06-21 Thread Karajgi, Rohit
Hi,

Referring to the Jenkins failure logs on 
https://review.openstack.org/#/c/32549/3,
Log at 
http://logs.openstack.org/32549/3/check/gate-nova-python27/25158/console.html

The command that the test tried to execute using nova's rootwrap was:
COMMAND=/home/jenkins/workspace/gate-nova-python27/.tox/py27/bin/nova-rootwrap 
/etc/nova/rootwrap.conf rm 
/tmp/tmp.WVIZziaxuv/tmp_2n7x0/tmpbuRC0e/instance-fake.log

I am not sure the CI infrastructure will allow this, as it attempts to 
perform the 'rm' operation as the root user, which is unsafe. In any case, 
the test above fails.

Also, some thoughts that hit me on re-reading the patch:

log_file_path = '%s/%s.log' % (CONF.libvirt_log_path, instance_name)

Assuming libvirt_log_path = /var/log/libvirt, and since /var/log is owned 
by the 'root' user, run_as_root=True in utils.execute is acceptable.

If libvirt_log_path is configured as something else, say /opt/data/logs/xyz, 
which does not require root access to perform 'rm', then we don't need 
run_as_root=True.

As mentioned above, adding '/bin/rm' with root privilege to the compute filter 
is unsafe: if some bad tests are added to Jenkins, they might end up doing 
'rm' on another directory as the root user.
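
One way to narrow the risk in the filter definition itself is rootwrap's
PathFilter, which constrains path arguments to a directory tree. A sketch of
the difference in the filters file follows; the filter names are illustrative,
and PathFilter's exact syntax should be checked against the rootwrap docs:

    # Unconstrained: allows 'rm' as root on any path.
    rm: CommandFilter, /bin/rm, root

    # Constrained: only accepts paths under the libvirt log directory.
    rm_libvirt_log: PathFilter, /bin/rm, root, /var/log/libvirt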

Thoughts on how this issue should be addressed, in CI or in code?


Best Regards,
Rohit Karajgi | Technical Analyst | NTT Data Global Technology Services Private 
Ltd | w. +91.20.6604.1500 x 627 |  m. +91 992.242.9639 | 
rohit.kara...@nttdata.com

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs] Proposed simplification around blueprint tracking

2013-06-21 Thread Thierry Carrez
Thierry Carrez wrote:
 A script will automatically and regularly align series goal with
 target milestone, so that the series and milestone views are
 consistent (if someone sets target milestone to havana-3 then the
 series goal will be set to havana).

Now if the Launchpad API exported the series goal property, this
would be easier... investigating workarounds.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] about the ovs plugin ovs setup for the tunnel network type

2013-06-21 Thread Dan Wendlandt
For the list, I'll post the same response I gave you when you pinged me
off-list about this:

It was a long time ago when I wrote that, and the original goal was to use a
single bridge, but there was something about how OVS worked at the time
that required the two bridges.  I think it was related to how the OVS
bridge needed to learn to associate MAC addresses + VLANs with particular
tunnel ports, but I don't remember the details.  This stuff was pretty
simple, so I'm guessing if you mess around with it for a little bit, you'll
either find that it now works (due to some change in OVS) or that it still
doesn't (in which case, it will be obvious why).


On Thu, Jun 20, 2013 at 9:00 AM, Armando Migliaccio
amigliac...@vmware.com wrote:

 Something similar was discussed a while back on this channel:

 http://lists.openstack.org/pipermail/openstack-dev/2013-May/008752.html

 See if it helps.

 Cheers,
 Armando

 On Wed, Jun 5, 2013 at 1:11 PM, Jani, Nrupal nrupal.j...@intel.com wrote:

  Hi there,

 I am a little new to the openstack networking project, previously known as
 quantum :)

 Anyway, I have a few simple questions regarding the way ovs gets configured
 the way it is in the current form on kvm.

 Here it goes:

 - As I understand it, OVS sets up two datapath instances, br-int and
 br-tun, and uses a patch port to connect them. Additionally it uses
 local vlans in br-int for the vm-to-vm traffic.

   - I understand the reason behind the current setup, but I am not
   sure why it needs to be like this.

     - Can't the same features be supported with a single instance
     like br-int, if flows are set up correctly to get things right,
     including quantum security groups?

 I know there must be some technical reasons behind all these, but I just
 want to get some history and also want to know whether anyone is planning
 to enhance it in the future.

 Thx,

 Nrupal.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Bryan D. Payne
This is a quick note to announce that the OpenStack gerrit system supports
a SecurityImpact tag.  If you are familiar with the DocImpact tag, this
works in a similar fashion.

Please use this in the commit message for any commits that you feel would
benefit from a security review.  Commits with this tag in the commit
message will automatically trigger an email message to the OpenStack
Security Group, allowing you to quickly tap into some of the security
expertise in our community.
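
For example, a commit message carrying the tag might look like this; the
subject and body are made up, and the tag just needs to appear on its own
line:

    Tighten input validation on token requests

    Reject oversized request bodies before they reach the token pipeline.

    SecurityImpact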

PTLs -- Please help spread the word and encourage use of this within your
projects.

Cheers,
-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SecurityImpact tagging in gerrit

2013-06-21 Thread Daniel P. Berrange
On Fri, Jun 21, 2013 at 12:08:43PM -0400, Yun Mao wrote:
 Interesting. Does it automatically make the commit in stealth mode so
 that it's not seen in public? Thanks,

This tag is about asking for design input / code review from people with
security expertise for new work. As such, the code is all public.

Fixes for security flaws in existing code which need to be kept private
should not be sent via Gerrit. They should be reported privately as per
the guidelines here:

  http://www.openstack.org/projects/openstack-security/

 On Fri, Jun 21, 2013 at 11:26 AM, Bryan D. Payne bdpa...@acm.org wrote:
 
  This is a quick note to announce that the OpenStack gerrit system supports
  a SecurityImpact tag.  If you are familiar with the DocImpact tag, this
  works in a similar fashion.
 
  Please use this in the commit message for any commits that you feel would
  benefit from a security review.  Commits with this tag in the commit
  message will automatically trigger an email message to the OpenStack
  Security Group, allowing you to quickly tap into some of the security
  expertise in our community.
 
  PTLs -- Please help spread the word and encourage use of this within your
  projects.
 
  Cheers,
  -bryan


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Kevin L. Mitchell
On Fri, 2013-06-21 at 09:16 -0700, Armando Migliaccio wrote:
 In my view a cell should only know about the queue it's connected to,
 and let the 'global' message queue do its job of dispatching the
 messages to the right recipient: that would solve the problem
 altogether.

There is no global message queue in the context of cells.

 Were federated queues and topic routing not considered fit for the
 purpose? I guess the drawback with this is that it is tied to Rabbit.

Again, there's no single message queue in the context of cells.  I'm
assuming that was to avoid a bottleneck, but Chris Behrens would be able
to say better exactly why this design choice was made.  All I'm doing in
this discussion is trying to address one element of the current design;
I'm not trying to redesign cell communication.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Chris Behrens

On Jun 21, 2013, at 9:16 AM, Armando Migliaccio amigliac...@vmware.com wrote:

 In my view a cell should only know about the queue it's connected to, and let 
 the 'global' message queue do its job of dispatching the messages to the 
 right recipient: that would solve the problem altogether.
 
 Were federated queues and topic routing not considered fit for the purpose? I 
 guess the drawback with this is that it is tied to Rabbit.

If you're referring to the rabbit federation plugin, no, it was not considered. 
I'm not even sure that rabbit queues are the right way to talk cell to 
cell.  But I really do not want to get into a full blown cells communication 
design discussion here.  We can do that in another thread, if we need to do so. 
:)

It is what it is today and this thread is just about how to express the 
configuration for it.

Regarding Mark's config suggestion:

 On Mon, Jun 17, 2013 at 2:14 AM, Mark McLoughlin mar...@redhat.com wrote:
 I don't know whether I like it yet or not, but here's how it might look:
 
  [cells]
  parents = parent1
  children = child1, child2
 
  [cell:parent1]
  transport_url = qpid://host1/nova
 
  [cell:child1]
  transport_url = qpid://host2/child1_nova
 
  [cell:child2]
  transport_url = qpid://host2/child2_nova
[…]

Yeah, that's what I was picturing if going that route.  I guess the code for it 
is not bad at all.  But with oslo.config, can I reload (re-parse) the config 
file later, or does the service need to be restarted?

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Basic configuration with VMs with a local private network

2013-06-21 Thread Salvatore Orlando
I reckon the admin guide [1] contains sufficiently up-to-date information
for the grizzly release.
Please let me know if you find it lacks important information. We'll be
more than happy to make the necessary amendments.

Your scenario appears to be fairly simple. On the compute node you will
need nova-compute and the openvswitch plugin's L2 agent.
Quantum server, and all the other Openstack services, should run on the
controller node.
Then you should just create your network and subnet using the Quantum API.
Quantum ports will be created when VMs are booted.
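
For example, with python-quantumclient; the credentials, names and CIDR
below are placeholders:

    from quantumclient.v2_0 import client

    quantum = client.Client(username='admin',
                            password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Create the network, then a subnet on it.
    net = quantum.create_network({'network': {'name': 'private-net'}})
    quantum.create_subnet({'subnet': {'network_id': net['network']['id'],
                                      'ip_version': 4,
                                      'cidr': '10.0.0.0/24'}})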

Salvatore


[1]
http://docs.openstack.org/trunk/openstack-network/admin/content/index.html


On 21 June 2013 09:53, Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net wrote:

 Hi.

 We are trying to get a basic scenario with two VMs with private IP
 addresses configured in a Compute node controlled by a Controller node. We
 want to achieve a basic private network with some VMs. We tried using Open
 vSwitch Quantum plugin to configure the network, but we have not achieved
 our objective yet.

 Is there any guide or basic scenario tutorial like this? We have found
 the documentation about basic networking in OpenStack using existing
 Quantum plugins to be poor, and the Open vSwitch documentation about it is
 too old (~2 years).

 Thank you in advance.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] re: discussion about passing metadata into provider stacks as parameters

2013-06-21 Thread Zane Bitter

On 21/06/13 07:49, Angus Salkeld wrote:

On 20/06/13 22:19 -0400, cbjc...@linux.vnet.ibm.com wrote:


So anyway, let's get back to the topic this thread was discussing
about - passing meta data into provider stacks.

It seems that we have all reached an agreement that deletepolicy and
updatepolicy will be passed as params, and metadata will be exposed to
provider templates through a function

In terms of implementation,

MetaData:

- add a resolve method to template.py to handle
{'Fn::ProvidedResource': 'Metadata'}


I think the name needs a little thought, how about:

{'Fn::ResourceFacade': 'Metadata'}


It was my thought that we would handle DeletePolicy and UpdatePolicy in 
the same way as Metadata:


{'Fn::ResourceFacade': 'DeletePolicy'}
{'Fn::ResourceFacade': 'UpdatePolicy'}

And, in fact, none of this should be hardcoded, so it should just work 
like Fn::Select on the resource facade's template snippet.


Which actually suggests another possible syntax:

{'Fn::Select': ['DeletePolicy', {'OS::Heat::ResourceFacade'}]}

but I'm persuaded that accessing these will be common enough that it's 
worth sticking with the simpler Fn::ResourceFacade syntax.
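
So a provider template would pull in the facade's data along these lines;
the resource type and layout here are illustrative:

    "Resources": {
        "inner_server": {
            "Type": "AWS::EC2::Instance",
            "Metadata": {"Fn::ResourceFacade": "Metadata"},
            ...
        }
    }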


cheers,
Zane.



-Angus


DeletePolicy/UpdatePolicy:

- add stack_resource.StackResource.compose_policy_params() - Json
encoded delete and update policies

- have create_with_template update params with delete/update policies
composed by compose_policy_params
(json-parameters implementation is already in review, hope it will be
available soon)


I will start the implementation if there is no objection.


Liang



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] autoscaling question

2013-06-21 Thread Patrick Petit

Dear All,

I'd like to have some confirmation about the mechanism that is going to 
be used to inform Heat's clients about instance creation and destruction in an 
auto-scaling group. I am referring to the wiki page at 
https://wiki.openstack.org/wiki/Heat/AutoScaling.


I assume, but I may be wrong, that the same eventing mechanism as the 
one being used for stack creation will be used...


An instance create in an auto-scaling group will generate an IN_PROGRESS 
event for the instance being created followed by CREATE_COMPLETE or 
CREATE_FAILED based on the value returned by cfn-signal. Similarly, an 
instance destroy will generate a DELETE_IN_PROGRESS event for the 
instance being destroyed followed by a DELETE_COMPLETE or DELETE_FAILED 
in case the instance can't be destroyed in the group.


Adding a group id in the event details would be helpful to figure out 
which group the instance belongs to.


Thanks in advance for the clarification.
Patrick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells design issue

2013-06-21 Thread Mark McLoughlin
On Fri, 2013-06-21 at 09:30 -0700, Chris Behrens wrote:
  On Mon, Jun 17, 2013 at 2:14 AM, Mark McLoughlin mar...@redhat.com
 wrote:
  I don't know whether I like it yet or not, but here's how it might
 look:
  
   [cells]
   parents = parent1
   children = child1, child2
  
   [cell:parent1]
   transport_url = qpid://host1/nova
  
   [cell:child1]
   transport_url = qpid://host2/child1_nova
  
   [cell:child2]
   transport_url = qpid://host2/child2_nova
 […]
 
 Yeah, that's what I was picturing if going that route.  I guess the
 code for it is not bad at all.  But with oslo.config, can I reload
 (re-parse) the config file later, or does the service need to be
 restarted?

Support for reloading should get merged soon:

https://review.openstack.org/32231

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuring Quantum REST Proxy Plugin

2013-06-21 Thread Sumit Naiksatam
Thanks Salvatore. That's right, the configuration for server and port
resides in:
etc/quantum/plugins/bigswitch/restproxy.ini
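
The relevant section looks roughly like this; the host and port are
placeholders, and the option names should be verified against the sample
file shipped with the plugin:

    [restproxy]
    # comma-separated list of host:port pairs to proxy requests to
    servers = opennaas-host:8080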

Let us know if you need further help.

~Sumit.

On Fri, Jun 21, 2013 at 8:37 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Hi Julio,

 If I get your message correctly, you have a proxy which is pretty much a
 shim layer between the big switch plugin (QuantumRestProxyV2) and the
 OpenNaaS server.
 In this case all you need to do is to configure the [restproxy] section of
 etc/quantum/plugins/bigswitch/restproxy.ini with the endpoint of your
 OpenNaaS server.

 Regards,
 Salvatore


 On 18 June 2013 14:13, Julio Carlos Barrera Juez
 juliocarlos.barr...@i2cat.net wrote:

 Hi.

 We're trying to configure Quantum REST Proxy Plugin to use an external
 Network service developed by ourselves in the context of OpenNaaS Project
 [1]. We have developed a REST server to listen Proxy requests. We want to
 modify Plugin configuration as described in OpenStack official documentation
 [2] and OpenStack Wiki [3].

 Is it possible to configure the path of the URL in the plugin configuration,
 like the server host and port?

 Thank you!


 [1] OpenNaaS Project - http://www.opennaas.org/
 [2] OpenStack official documentation -
 http://docs.openstack.org/trunk/openstack-network/admin/content/bigswitch_floodlight_plugin.html
 [3] OpenStack Wiki -
 https://wiki.openstack.org/wiki/Quantum/RestProxyPlugin#Quantum_Rest_Proxy_Plugin

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficiently pin running VMs to physical CPUs automatically

2013-06-21 Thread Alessandro Pilotti
 It seems that numad is libvirt specific - is that the case?

Hi, Hyper-V 2012 supports NUMA as well. It'd be great to plan a
hypervisor-independent solution from the start.


On 21.06.2013, at 11:13, Bob Ball bob.b...@citrix.com wrote:

 It seems that numad is libvirt specific - is that the case?
 
 I'm not sure if there is a daemon for other hypervisors, but would it make 
 sense to have this functionality in OpenStack so we can extend it to work for 
 each hypervisor, allowing it to control the affinity in its own way?  I 
 guess this would need the Pinhead tool to either support multiple hypervisors 
 or provide the pinning strategy to Nova, which could then invoke the 
 individual drivers.
 
 Outside NUMA optimisations I think there are good reasons for Nova to support 
 modifying the affinity / pinning rules - for example I can imagine that some 
 flavours might be permitted dedicated or isolated vCPUs.  Integrating this 
 tool would allow us to provide it further hints/rules defined by the flavour 
 or administrator.
 
 Bob
 
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com] 
 Sent: 20 June 2013 17:48
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Efficiently pin running VMs to physical CPUs 
 automatically
 
 On 06/20/2013 10:36 AM, Giorgio Franceschi wrote:
 Hello, I created a blueprint for the implementation of:
 
 A tool for pinning automatically each running virtual CPU to a physical
 one in the most efficient way, balancing load across sockets/cores and
 maximizing cache sharing/minimizing cache misses. Ideally able to be run
 on-demand, as a periodic job, or be triggered by events on the host (vm
 spawn/destroy).
 
 Find it at https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning
 
  Any input appreciated!
 
 I'm actually surprised to see a new tool for this kind of thing.
 
 Have you seen numad?
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Celery

2013-06-21 Thread Jessica Lucci
Hello all,

Included here is a link to a Celery wiki, explaining what the Celery project is 
and how it works. Currently, celery is being used in a distributed pattern for 
the WIP task flow project. As such, links to both the distributed project and 
its parent task flow project have been included for your viewing pleasure. 
Please feel free to ask any questions/address any concerns regarding either 
celery or the task flow project as a whole. (:

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
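
For those unfamiliar with it, a Celery worker is driven by task functions
registered against a message broker; a minimal, self-contained example
follows, with the broker URL as a placeholder:

    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def add(x, y):
        return x + y

    # From any process, add.delay(2, 2) queues the call; a worker started
    # with 'celery -A tasks worker' consumes and executes it.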

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The future of run_tests.sh

2013-06-21 Thread Joe Gordon
It sounds like the consensus in this thread is:

In the long run, we want to kill run_tests.sh in favor of explaining how to
use the underlying tools in a TESTING file.

But in the short term, we should start moving toward using a TESTING file
(such as https://review.openstack.org/#/c/33456/) but keep run_tests.sh for
the time being, as there are things it does that we don't have simple ways
of doing yet.  Since run_tests.sh will be around for a while it does make
sense to move it into oslo.


best,
Joe


On Tue, Jun 18, 2013 at 11:44 AM, Monty Taylor mord...@inaugust.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1



 On 06/18/2013 08:44 AM, Julien Danjou wrote:
  FWIW, I think we never really had a run_tests.sh in Ceilometer
  like other projects might have, and we don't have one anymore for
  weeks, and that never looked like a problem.
 
  We just rely on tox and on a good working listing in
  requirements.txt and test-requirements.txt, so you can build a venv
  yourself if you'd like.

 A couple of followups to things in this thread so far:

 - - Running tests consistently both in and out of virtualenv.

 Super important. Part of the problem is that setuptools test command
 is a broken pile of garbage. So we have a patch coming to pbr that
 will sort that out - and at least as a next step, tox and run_tests.sh
 can both run python setup.py test and it will work both in and out of
 a venv, regardless of whether the repo uses nose or testr.

 - - Individual tests

 nose and tox and testr and run_tests.sh all support running individual
 tests just fine. The invocation is slightly different for each. For me
 testr is the friendliest because it defaults to regexes - so testr
 run test_foo will happily run
 nova.tests.integration.deep_directory.foo.TestFoo.test_foo. But - all
 four mechanisms work here fine.
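
Roughly, the four invocations of a single test look like this (the module
paths are illustrative):

    ./run_tests.sh nova.tests.test_foo.TestFoo.test_foo
    nosetests nova/tests/test_foo.py:TestFoo.test_foo
    tox -epy27 -- nova.tests.test_foo.TestFoo.test_foo
    testr run nova.tests.test_foo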

 - - pdb

 Dropping in to a debugger while running via testr is currently
 problematic, but is currently on the table to be sorted. In the
 meantime, the workaround is to run testtools.run directly, which
 run_tests.sh does for you if you specify a single test. I think this
 is probably the single greatest current reason to keep run_tests.sh at
 the moment - because as much as you can learn the cantrips around
 doing it, it's not a good UI.

 - - nova vs. testr

 In general, things are moving towards testr being the default. I don't
 think there will be anybody cutting off people's hands for using nose,
 but I strongly recommend taking a second to learn testr a bit. It's
 got some great features and is built on top of a completely machine
 parsable test result streaming protocol, which means we can do some
 pretty cool stuff with it.
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iEYEARECAAYFAlHAqn4ACgkQ2Jv7/VK1RgGZ9gCdHe8AhG8uQi7nkBz1UbZHUjvJ
 KskAoKddVUPBZnXAtzNpBiwazRid0gu7
 =eGE3
 -END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Celery

2013-06-21 Thread Joshua Harlow
Sweet, thanks Jessica for the awesome docs and work.

From: Jessica Lucci jessica.lu...@rackspace.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Friday, June 21, 2013 10:33 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Celery

Hello all,

Included here is a link to a Celery wiki, explaining what the Celery project is 
and how it works. Currently, celery is being used in a distributed pattern for 
the WIP task flow project. As such, links to both the distributed project and 
its parent task flow project have been included for your viewing pleasure. 
Please feel free to ask any questions/address any concerns regarding either 
celery or the task flow project as a whole. (:

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The future of run_tests.sh

2013-06-21 Thread Monty Taylor
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/21/2013 01:44 PM, Joe Gordon wrote:
 It sounds like the consensus in this thread is:
 
 In the long run, we want to kill run_tests.sh in favor of
 explaining how to use the underlying tools in a TESTING file.

I agree. I'd like to add that 'long run' here is potentially a couple
of cycles away. I think we definitely don't want to get rid of a thing
that a project is currently using without an answer for all of its use
cases.

 But in the short term, we should start moving toward using a
 TESTING file (such as https://review.openstack.org/#/c/33456/) but
 keep run_tests.sh for the time being as there are things it does
 that we don't have simple ways of doing yet.  Since run_tests.sh
 will be around for a while it does make sense to move it into
 oslo.
 
 
 best, Joe
 
 
 On Tue, Jun 18, 2013 at 11:44 AM, Monty Taylor
 mord...@inaugust.com mailto:mord...@inaugust.com wrote:
 
 
 
 On 06/18/2013 08:44 AM, Julien Danjou wrote:
 FWIW, I think we never really had a run_tests.sh in Ceilometer 
 like other projects might have, and we don't have one anymore
 for weeks, and that never looked like a problem.
 
 We just rely on tox and on a good working listing in 
 requirements.txt and test-requirements.txt, so you can build a
 venv yourself if you'd like.
 
 A couple of followups to things in this thread so far:
 
 - Running tests consistently both in and out of virtualenv.
 
 Super important. Part of the problem is that setuptools test
 command is a broken pile of garbage. So we have a patch coming to
 pbr that will sort that out - and at least as a next step, tox and
 run_tests.sh can both run python setup.py test and it will work
 both in and out of a venv, regardless of whether the repo uses nose
 or testr.
 
 - Individual tests
 
 nose and tox and testr and run_tests.sh all support running
 individual tests just fine. The invocation is slightly different
 for each. For me testr is the friendliest because it defaults to
 regexes - so testr run test_foo will happily run 
 nova.tests.integration.deep_directory.foo.TestFoo.test_foo. But -
 all four mechanisms work here fine.
 
  - - pdb
 
 Dropping in to a debugger while running via testr is currently 
 problematic, but is currently on the table to be sorted. In the 
 meantime, the workaround is to run testtools.run directly, which 
 run_tests.sh does for you if you specify a single test. I think
 this is probably the single greatest current reason to keep
 run_tests.sh at the moment - because as much as you can learn the
 cantrips around doing it, it's not a good UI.
 
 - nova vs. testr
 
 In general, things are moving towards testr being the default. I
 don't think there will be anybody cutting off people's hands for
 using nose, but I strongly recommend taking a second to learn testr
 a bit. It's got some great features and is built on top of a
 completely machine parsable test result streaming protocol, which
 means we can do some pretty cool stuff with it.
 
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iEYEARECAAYFAlHElUUACgkQ2Jv7/VK1RgGMggCfYIuErSqwiCUKhgCnZKSyjVlw
2gYAoNDkQR6VP8mP2w6rGY6WwRTpOwxy
=svBU
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

2013-06-21 Thread Kant, Arun


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Thursday, June 20, 2013 6:30 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

On 06/20/2013 04:50 PM, Ali, Haneef wrote:

1)  I'm really not sure how that will solve the original issue (Token table 
size increase).  Of course we can have a job to remove the expired tokens.
It is not expiry that is the issue, but revocation.  Expiry is handled by the 
fact that the token is a signed document with a timestamp in it.  We don't 
really need to store expired tokens at all.
[Arun] One of the issues is the unlimited number of active tokens possible through 
keystone for the same credentials, which can possibly be turned into a DoS attack on 
cloud services. So we can possibly look to keystone for a solution, as token 
generation is one of its key responsibilities. Removal of expired tokens is a 
separate aspect which will be needed at some point regardless of whether tokens 
are re-used or not.



2)  We really have to think about how the other services are using keystone.  
Keystone createToken volume is going to increase. Fixing one issue is going to 
create another one.
Yes it will.  But in the past, the load was on Keystone token validate, and PKI 
has removed that load.  Right now, the greater load on Keystone is coming from 
token create, but that is because token caching is not in place.  With proper 
caching, Keystone would be hit only once for most workloads.  It is currently 
hit for every remote call.  It is not the token generation that is the issue, 
but the issuing of the tokens that needs to be throttled back.
[Arun] We cannot just have a solution for the happy path. Being available 
in the cloud, there are going to be varying types of clients, and we cannot just 
expect that each of them will have caching or will always be able to work with 
the PKI token format (like third party services/applications running *on the 
cloud*). Throttling of token issuance requests will require complex 
rate-limiting logic because of the various input combinations and business rules 
associated with it. There can be another solution where keystone re-uses an active 
token based on some selection logic and is still able to serve auth requests 
without rate-limiting errors.



1.   If I understood correctly, swift is using memcache to increase 
validateToken performance.  What will happen to it?  Obviously the load on 
validateToken will also increase.
Validate token happens in process with PKI tokens, not via remote call. 
Memcache just prevents swift from having to make that check more than once per 
token.  Revocation still needs to be checked every time.
[Arun] There are issues with the PKI token approach as well (lifespan of the token, 
data size limit, role and status changes after token generation). If a shorter 
timespan is used, then essentially we will be increasing createToken requests.




2.  In a few cases I have seen VM creation taking more than 5 min 
(download image from glance and create VM).  A short-lived token (5 min) will 
be real fun in this case.
That is what trusts are for.  Nova should not be using a bearer token to 
perform operations on behalf of the user.  Nova should be getting a delegated 
token via a trust to perform those operations.  If a vm takes 5 minutes, it 
should not matter if the tokens time out, as Nova will get a token when it 
needs it. Bearer tokens are  a poor design approach, and we have work going on 
that will remedy that.
[Arun] Not sure how a delegated token or the current v3 trust/role model is going to 
work here, as the token needs to have the user's roles (or at least delegated 
permissions with *all* the user's privileges) to work on the user's behalf. Are we 
talking about Nova impersonating the user in some way?
With short-lived (non-PKI) tokens, we are just diverting request load from 
token validation to token creation, which is a relatively expensive operation.

We need some smarter mechanism to limit the proliferation of tokens, as they are 
essentially the user's credentials for a limited time.



Thanks
Haneef



From: Ravi Chunduru [mailto:ravi...@gmail.com]
Sent: Thursday, June 20, 2013 11:49 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] FW: [Keystone][Folsom] Token re-use

+1
On Thu, Jun 20, 2013 at 11:37 AM, Dolph Mathews dolph.math...@gmail.com wrote:

On Wed, Jun 19, 2013 at 2:20 PM, Adam Young ayo...@redhat.com wrote:
I really want to go the other way on this:  I want tokens to be very short 
lived, ideally something like 1 minute, but probably 5 minutes to account for 
clock skew.  I want to get rid of token revocation list checking.  I'd like to 
get away from revocation altogether:  tokens are not stored in the backend.  If 
they are ephemeral, we can just check that the token has a valid signature and 
that the time has not expired.

+10







On 06/19/2013 12:59 PM, Ravi Chunduru wrote:
That's still an open item in this 

Re: [openstack-dev] Quantum's new name is….

2013-06-21 Thread Stefano Maffulli
On 06/19/2013 09:14 AM, Mark McClain wrote:
 The OpenStack Networking team is happy to announce that the Quantum
 project will be changing its name to Neutron. You'll soon see Neutron
 in lots of places as we work to implement the name change within
 OpenStack.

Congratulations on the cool name. I just changed the name of the
mailman topic :)

Tag your subject lines with [Neutron] if you intend to discuss OpenStack
Networking

Cheers,
stef




-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Thu, Jun 20, 2013 at 12:44 PM, Russell Bryant rbry...@redhat.com wrote:

 On 06/20/2013 12:00 PM, Thierry Carrez wrote:
  Christopher Yeoh wrote:
  Just wondering what people thought about how necessary it is to keep XML
  support for the Nova v3 API, given that if we want to drop it doing so
  during the v2-v3 transition is pretty much the ideal time to do so.
 
  Although I hate XML as much as anyone else, I think it would be
  interesting to raise that question on the general user mailing-list.
 
  We have been discussing that in the past, and while there was mostly
  consensus against XML (in OpenStack API) on the development list, when
  the issue was raised with users, in the end they made up a
  sufficiently-good rationale for us to keep it in past versions of the
 API :)
 

 Yes, and I suspect we'd arrive at the same result again.

 I'd rather hear ideas for things that would make it easier to support
 both.  The window is open for changes to make that easier.


Supporting both was one of the benefits I identified in WSME. Think of it
as a declarative layer for the API, just like SQLAlchemy has declarative
table definitions. As a developer, you never have to think about the format
of the data on the wire because by the time you get it in the API endpoint,
it's an object.

Doug



 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:


 On Jun 20, 2013, at 10:22 AM, Brant Knudson b...@acm.org wrote:

 How about a mapping of JSON concepts to XML like:

 collections:
 <object><pair name="pair-name">the-value</pair> ... </object>
 <array><element>the-value</element> ... </array>

 values:
 <string>text</string>
 <true/>
 <false/>
 <null/>
 <number>number</number>

 This type of mapping would remove any ambiguities. Ambiguities and
 complexity are problems I've seen with the XML-JSON mapping in Keystone.
 Plus the fact that it's so not-XML would convince users to switch to JSON.
 With a simple mapping, I don't think it would be necessary to test all the
 interfaces for both XML and JSON, just test the mapping code.
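
A mapping this regular is mechanical to produce; here is a sketch of the
JSON-side serializer, with element names following the proposal above (this
is illustrative code, not anything from Keystone or Nova):

    from xml.sax.saxutils import escape, quoteattr

    def to_xml(value):
        # Booleans first: in Python, bool is a subclass of int.
        if value is True:
            return '<true/>'
        if value is False:
            return '<false/>'
        if value is None:
            return '<null/>'
        if isinstance(value, (int, long, float)):
            return '<number>%s</number>' % value
        if isinstance(value, basestring):
            return '<string>%s</string>' % escape(value)
        if isinstance(value, list):
            return '<array>%s</array>' % ''.join(
                '<element>%s</element>' % to_xml(v) for v in value)
        if isinstance(value, dict):
            return '<object>%s</object>' % ''.join(
                '<pair name=%s>%s</pair>' % (quoteattr(k), to_xml(v))
                for k, v in value.items())
        raise TypeError('cannot serialize %r' % (value,))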


 +1 for something like this. JSON primary + autogenerated XML. I think the
 ideal version would be autogeneration of xml from jsonschema and some
 method for prettifying the xml representation via jsonschema tags. The
 jsonschema + tags approach is probably a bit further off (maybe for v4?),
 so having an auto conversion which is ugly but functional seems better than
 no XML support at all.

 Vish


Let's please not invent something new for this. We're building a high level
platform. We shouldn't have to screw around with making so many low level
frameworks to do things for which tools already exist. WSME will handle
serialization, cleanly, in both XML and JSON already. Let's just use that.

Doug






 On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams 
 jorge.willi...@rackspace.com wrote:


 On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:

  On 06/20/2013 11:20 AM, Brian Elliott wrote:
  On Jun 19, 2013, at 7:34 PM, Christopher Yeoh cbky...@gmail.com
 wrote:
 
  Hi,
 
  Just wondering what people thought about how necessary it is to keep
 XML support for the Nova v3 API, given that if we want to drop it doing so
 during the v2-v3 transition is pretty much the ideal time to do so.
 
  The current plan is to keep it and is what we have been doing so far
 when porting extensions, but there are pretty obvious long term development
 and test savings if we only have one API format to support.
 
  Regards,
 
  Chris
 
 
  Can we support CORBA?
 
  No really, it'd be great to drop support for it while we can.
 
  I agree personally ... but this has come up before, and when polling the
  larger audience (and not just the dev list), there is still a large
  amount of demand for XML support (or at least that was my
  interpretation).  So, I think it should stay.
 
  I'm all for anything that makes supporting both easier.  It doesn't have
  to be the ideal XML representation.  If we wanted to adopt different
  formatting to make supporting it easier (automatic conversion from json
  in the code I guess), I'd be fine with that.
 


 I agree, we can change the XML representation to make it easy to convert
 between XML and JSON.  If I could go back in time, that would definitely be
 something I would do differently.  3.0 gives us an opportunity to start over
 in that regard.  Extensions may still be tricky because you still want
 to use namespaces, but having a simpler mapping may simplify the process of
 supporting both.

 -jOrGe W.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking] Allocation of IPs

2013-06-21 Thread Mark McClain
There will be a deployment option where you can configure the default IP 
allocator.  Additionally, the allocator will be configurable at subnet creation 
time.
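
Purely to illustrate the idea (this is not the actual interface, which is
still under development), a 'no-op' allocator for externally managed DHCP
might look like:

    # Illustrative only -- not the real Quantum plugin interface.
    class NoopIPAllocator(object):
        """Hands out no fixed IPs; an external DHCP server owns addressing."""

        def allocate_ip(self, context, subnet, port):
            return None   # leave the port without a fixed IP

        def release_ip(self, context, subnet, ip_address):
            pass          # nothing was allocated, so nothing to release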

mark


On Jun 20, 2013, at 4:51 PM, Edgar Magana emag...@plumgrid.com wrote:

 Would it be possible to add a flag to disable IP allocation?
 If the no-allocation flag is enabled, all ports will have an empty value 
 for IPs.
  It would increase the config parameters in quantum; should we try it?
 
 Edgar
 
 From: Mark McClain mark.mccl...@dreamhost.com
 Reply-To: OpenStack List openstack-dev@lists.openstack.org
 Date: Thursday, June 20, 2013 1:13 PM
 To: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Networking] Allocation of IPs
 
 There's work under way to make IP allocation pluggable. One of the options 
 will include not having an allocator for a subnet.
 
 mark
 
 On Jun 20, 2013, at 2:36 PM, Edgar Magana emag...@plumgrid.com wrote:
 
 Developers,
 
 So far in Networking (formerly Quantum) IPs are pre-allocated when a new 
 port is created by the following def:
 _allocate_ips_for_port(self, context, network, port):
 
 If we are using a real DHCP server (not the dnsmasq process) that does not 
 accept static IP allocation because it only allocates IPs based on its own 
 algorithm, how can we tell Networking not to allocate an IP at all?
 I don't think that is possible based on the code, but I would like to know if 
 somebody has gone through the same problem and has a workaround solution.
 
 Cheers,
 
 Edgar
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___ OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [quantum] Deadlock on quantum port-create

2013-06-21 Thread Jay Buffington
I'm moving a thread we had with some vmware guys to this list to make it
public.

We had a problem with quantum deadlocking when it got several requests in
quick succession.  Aaron suggested we set sql_dbpool_enable = True.  We did and it
seemed to resolve our issue.

What are the downsides of turning on sql_dbpool_enable?  Should it be on by
default?

Thanks,
Jay


 We are currently experience the following problem in our environment:
 issuing 5 'quantum port-create' commands in parallel effectively
deadlocks quantum:

 $ for n in $(seq 5); do echo 'quantum --insecure port-create
stage-net1'; done | parallel
 An unknown exception occurred.
 Request Failed: internal server error while processing your request.
 An unexpected error occurred in the NVP Plugin:Unable to get logical
switches

On Jun 21, 2013, at 9:36 AM, Aaron Rosen aro...@vmware.com wrote:
 We've encountered this issue as well. I'd try enabling:
 # Enable the use of eventlet's db_pool for MySQL. The flags
sql_min_pool_size,
 # sql_max_pool_size and sql_idle_timeout are relevant only if this is
enabled.

 sql_dbpool_enable = True

 in nvp.ini to see if that helps at all. In our internal cloud we removed
the
 creations of the lports in nvp from the transaction. Salvatore is working
on
 an async back-end to the plugin that will solve this and improve the
plugin
 performance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Vishvananda Ishaya

On Jun 21, 2013, at 12:38 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 
 
 
 On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 
 On Jun 20, 2013, at 10:22 AM, Brant Knudson b...@acm.org wrote:
 
  How about a mapping of JSON concepts to XML like:
  
  collections:
  <object><pair name="pair-name">the-value</pair> ... </object>
  <array><element>the-value</element> ... </array>
  
  values:
  <string>text</string>
  <true/>
  <false/>
  <null/>
  <number>number</number>
 
 This type of mapping would remove any ambiguities. Ambiguities and 
 complexity are problems I've seen with the XML-JSON mapping in Keystone. 
 Plus the fact that it's so not-XML would convince users to switch to JSON. 
 With a simple mapping, I don't think it would be necessary to test all the 
 interfaces for both XML and JSON, just test the mapping code.
 
  +1 for something like this. JSON primary + autogenerated XML. I think the 
 ideal version would be autogeneration of xml from jsonschema and some method 
 for prettifying the xml representation via jsonschema tags. The jsonschema + 
 tags approach is probably a bit further off (maybe for v4?), so having an 
 auto conversion which is ugly but functional seems better than no XML support 
 at all.
 
 Vish
 
 Let's please not invent something new for this. We're building a high level 
 platform. We shouldn't have to screw around with making so many low level 
 frameworks to do things for which tools already exist. WSME will handle 
 serialization, cleanly, in both XML and JSON already. Let's just use that.
 
 Doug

Doug,

Switching to WSME for v3 is out of scope at this point I think. Definitely 
worth considering for v4 though.

Vish

  
 
 
 
 
 On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams 
 jorge.willi...@rackspace.com wrote:
 
 On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:
 
  On 06/20/2013 11:20 AM, Brian Elliott wrote:
  On Jun 19, 2013, at 7:34 PM, Christopher Yeoh cbky...@gmail.com wrote:
 
  Hi,
 
  Just wondering what people thought about how necessary it is to keep XML 
  support for the Nova v3 API, given that if we want to drop it doing so 
  during the v2-v3 transition is pretty much the ideal time to do so.
 
  The current plan is to keep it and is what we have been doing so far 
  when porting extensions, but there are pretty obvious long term 
  development and test savings if we only have one API format to support.
 
  Regards,
 
  Chris
 
 
  Can we support CORBA?
 
  No really, it'd be great to drop support for it while we can.
 
  I agree personally ... but this has come up before, and when polling the
  larger audience (and not just the dev list), there is still a large
  amount of demand for XML support (or at least that was my
  interpretation).  So, I think it should stay.
 
  I'm all for anything that makes supporting both easier.  It doesn't have
  to be the ideal XML representation.  If we wanted to adopt different
  formatting to make supporting it easier (automatic conversion from json
  in the code I guess), I'd be fine with that.
 
 
 
 I agree, we can change the XML representation to make it easy to convert 
 between XML and JSON.  If I could go back in time, that would definitely be 
  something I would do differently. 3.0 gives us an opportunity to start over 
  in that regard. Extensions may still be tricky because you still want 
 to use namespaces, but having a simpler mapping may simplify the process of 
 supporting both.
 
 -jOrGe W.
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] XML Support for Nova v3 API

2013-06-21 Thread Doug Hellmann
On Fri, Jun 21, 2013 at 3:46 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:


 On Jun 21, 2013, at 12:38 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Thu, Jun 20, 2013 at 2:00 PM, Vishvananda Ishaya vishvana...@gmail.com
  wrote:


 On Jun 20, 2013, at 10:22 AM, Brant Knudson b...@acm.org wrote:

 How about a mapping of JSON concepts to XML like:

 collections:
 <object> <pair name="pair-name"> the-value </pair> ... </object>
 <array> <element> the-value </element> ... </array>

 values:
 <string>text</string>
 <true/>
 <false/>
 <null/>
 <number>number</number>

 This type of mapping would remove any ambiguities. Ambiguities and
 complexity are problems I've seen with the XML-JSON mapping in Keystone.
 Plus the fact that it's so not-XML would convince users to switch to JSON.
 With a simple mapping, I don't think it would be necessary to test all the
 interfaces for both XML and JSON, just test the mapping code.


 +1 for something like this. JSON primary + autogenerated XML. I think the
 ideal version would be autogeneration of xml from jsonschema and some
 method for prettifying the xml representation via jsonschema tags. The
 jsonschema + tags approach is probably a bit further off (maybe for v4?),
 so having an auto conversion which is ugly but functional seems better than
 no XML support at all.

 Vish


 Let's please not invent something new for this. We're building a high
 level platform. We shouldn't have to screw around with making so many low
 level frameworks to do things for which tools already exist. WSME will
 handle serialization, cleanly, in both XML and JSON already. Let's just use
 that.

 Doug


 Doug,

 Switching to WSME for v3 is out of scope at this point I think. Definitely
 worth considering for v4 though.

 Vish


Absolutely - we agreed about that weeks ago. I assumed, however, that
decision meant we would just continue to use the existing serialization
code. I thought this discussion was moving toward writing something new,
and I wanted to head that off.

Doug

 On Thu, Jun 20, 2013 at 11:11 AM, Jorge Williams 
 jorge.willi...@rackspace.com wrote:


 On Jun 20, 2013, at 10:51 AM, Russell Bryant wrote:

  On 06/20/2013 11:20 AM, Brian Elliott wrote:
  On Jun 19, 2013, at 7:34 PM, Christopher Yeoh cbky...@gmail.com
 wrote:
 
  Hi,
 
  Just wondering what people thought about how necessary it is to keep
  XML support for the Nova v3 API, given that, if we want to drop it, doing so
  during the v2->v3 transition is pretty much the ideal time to do so.
 
  The current plan is to keep it and is what we have been doing so far
 when porting extensions, but there are pretty obvious long term development
 and test savings if we only have one API format to support.
 
  Regards,
 
  Chris
 
 
  Can we support CORBA?
 
   No, really, it'd be great to drop support for it while we can.
 
  I agree personally ... but this has come up before, and when polling
 the
  larger audience (and not just the dev list), there is still a large
  amount of demand for XML support (or at least that was my
  interpretation).  So, I think it should stay.
 
  I'm all for anything that makes supporting both easier.  It doesn't
 have
  to be the ideal XML representation.  If we wanted to adopt different
  formatting to make supporting it easier (automatic conversion from json
  in the code I guess), I'd be fine with that.
 


 I agree, we can change the XML representation to make it easy to convert
 between XML and JSON.  If I could go back in time, that would definitely be
  something I would do differently. 3.0 gives us an opportunity to start over
  in that regard. Extensions may still be tricky because you still want
 to use namespaces, but having a simpler mapping may simplify the process of
 supporting both.

 -jOrGe W.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Celery

2013-06-21 Thread Jessica Lucci
Hello all,

Included here is a link to the Celery wiki page (Celery is a distributed task 
queue) explaining what the Celery project is and how it works. Currently, 
Celery is being used in a distributed pattern for the WIP TaskFlow project. As 
such, links to both the distributed project and its parent TaskFlow project 
are included below. Please feel free to ask any questions or raise any 
concerns regarding either Celery or the TaskFlow project as a whole.

Celery: https://wiki.openstack.org/wiki/Celery
Distributed: https://wiki.openstack.org/wiki/DistributedTaskManagement
TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
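
For readers new to Celery, here is a minimal hedged sketch of defining and
invoking a task; the broker URL and the task body are placeholders, unrelated
to the TaskFlow code linked above:

    # Minimal Celery sketch: a worker-side task and a caller.
    # The broker URL and the task itself are placeholders.
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def add(x, y):
        return x + y

    # Caller side: .delay() enqueues the task; a worker started with
    # `celery -A tasks worker` picks it up and runs it asynchronously.
    result = add.delay(4, 4)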

Thanks!
Jessica Lucci
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disabling a tenant still allow user token

2013-06-21 Thread Dolph Mathews
On Fri, Jun 21, 2013 at 5:25 AM, Chmouel Boudjnah chmo...@enovance.com wrote:

 Hello,

 [moving on the public mailing list since this bug is anyway public]

 On 3 Jun 2013, at 17:25, Dolph Mathews dolph.math...@gmail.com wrote:

 Apologies for the delayed response on this. We have several related open
 bugs and I wanted to investigate them all at once, and perhaps fix them all
 in one pass.
 Disabling a tenant/project should result in existing tokens scoped to that
 tenant/project being immediately invalidated, so I think Chmouel's analysis
 is absolutely valid.
 Regarding list_users_in_project -- as Guang suggested, the semantics of
 that call are inherently complicated,



  looking into this, it seems that we already have such a function:


 https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql.py#L608

 Should it get fixed?

 so ideally we can just ask the token driver to revoke tokens with some
 context (a user OR a tenant OR a user+tenant combination). We've been going
 down that direction, but have been incredibly inconsistent in how it's
 utilized. I'd like to have a framework to consistently apply the
 consequences of disabling/deleting any entity in the system.


 agreed, I think this should be doable if we can modify :


 https://github.com/openstack/keystone/blob/master/keystone/token/core.py#L169

 changing the default user_id to None

  as for getting the tokens for a specific project/tenant: if we are not
  using list_users_in_project, would that mean we need to parse all the
  tokens to get the metadata/extras tenant_id, or is there a more
  efficient way?


Currently the memcache token backend and SQL token backend each have their
own advantages, and I'd like to get the best of both worlds and use each as
intended. So, store tokens with these fields indexed appropriately in SQL
and cache them in memcache (and if memcache/etc isn't available, in-memory
in python).
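
A sketch of the driver-level call being discussed might look like the
following; every name here is an assumption for illustration, not Keystone's
actual interface:

    # Hypothetical sketch of revoking tokens by user, by tenant, or by
    # the combination; names and the token-store shape are illustrative,
    # not Keystone's actual driver API.
    def delete_tokens(tokens, user_id=None, tenant_id=None):
        """Return the ids of tokens matching the given filters."""
        revoked = []
        for token_id, data in tokens.items():
            if user_id is not None and data.get('user_id') != user_id:
                continue
            if tenant_id is not None and data.get('tenant_id') != tenant_id:
                continue
            revoked.append(token_id)
        return revoked

    tokens = {'t1': {'user_id': 'u1', 'tenant_id': 'p1'},
              't2': {'user_id': 'u2', 'tenant_id': 'p1'}}
    print(delete_tokens(tokens, tenant_id='p1'))  # both tokens in project p1

With tenant_id indexed in the SQL backend, the linear scan above becomes a
single indexed DELETE, which is what makes the revoke-by-project case cheap.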



 Chmouel.


 -Dolph


 On Wed, May 29, 2013 at 9:59 AM, Yee, Guang guang@hp.com wrote:

  Users do not really belong to a project. They have access to, or are
  associated with, a project via role grant(s). Therefore, when disabling a
 project, we should only invalidate the tokens scoped to that project. But
 yes, you should be able to use the same code to invalidate the tokens when
 disabling a project.

  https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164

  We have to be careful with list_users_in_project, as a user can be
  associated with a project either via a direct role grant or indirectly via
  group membership and a group grant. This is going to get complicated with
  the addition of inherited role grants.
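
A toy illustration of why that listing is tricky; role assignments can arrive
directly or through groups, and all data and names here are made up, not
Keystone's schema:

    # Toy illustration of direct vs. group-derived assignments; the data
    # and names are made up, not Keystone's schema.
    direct_grants = {('u1', 'p1'), ('u2', 'p1')}   # (user, project)
    group_grants = {('g1', 'p1')}                  # (group, project)
    group_members = {'g1': {'u3'}}

    def users_in_project(project_id):
        users = {u for (u, p) in direct_grants if p == project_id}
        for (g, p) in group_grants:
            if p == project_id:
                users |= group_members.get(g, set())
        return users

    print(users_in_project('p1'))  # {'u1', 'u2', 'u3'}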

  Guang

 From: Chmouel Boudjnah [mailto:chmo...@enovance.com]
 Sent: Wednesday, May 29, 2013 2:23 AM
 To: Adam Young; Dolph Mathews; Henry Nash; Joseph Heck; Yee, Guang;
 d...@enovance.com
 Subject: disabling a tenant still allow user token

 Hi,

  Apologies for the direct email, but I wanted to make sure this is not
  security sensitive before moving it to openstack-dev@.

  I'd like to bring this bug to your attention:

  https://bugs.launchpad.net/keystone/+bug/1179955

  TL;DR: disabling a tenant does not disable the tokens of the users
  attached to it.

  We could probably handle that here:


 https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L164

  when updating a tenant, but I need to find a way to list the users attached
  to a tenant (without having to list all the users).

  Is not being able to list_users_in_project() something intended by
  Keystone?

  Do you see a workaround for how to delete the tokens of all users belonging
  to a tenant?

 Let me know what do you think.

 Cheers,
 Chmouel.

-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev