Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Renat Akhmerov
“In process” is fine with me.

Winson, please register a blueprint for this change and put the link in here so 
that everyone can see what it all means exactly. My feeling is that we can 
approve and get it done pretty soon.

Renat Akhmerov
@ Mirantis Inc.



On 25 Feb 2014, at 12:40, Dmitri Zimine  wrote:

> I agree with Winson's points. Inline.
> 
> On Feb 24, 2014, at 8:31 PM, Renat Akhmerov  wrote:
> 
>> 
>> On 25 Feb 2014, at 07:12, W Chan  wrote:
>> 
>>> As I understand, the local engine runs the task immediately whereas the 
>>> scalable engine sends it over the message queue to one or more executors.  
>> 
>> Correct.
> 
> Note that "local" is confusing here; "in process" would reflect what it is 
> doing better. 
> 
>> 
>>> In what circumstances would we see a Mistral user using a local engine 
>>> (other than testing) instead of the scalable engine?
>> 
>> Yes, mostly testing, but it could also be used for demonstration purposes or 
>> in environments where installing RabbitMQ is not desirable.
>> 
>>> If we are keeping the local engine, can we move the abstraction to the 
>>> executor instead, having drivers for a local executor and remote executor?  
>>> The message flow from the engine to the executor would be consistent, it's 
>>> just where the request will be processed.  
>> 
>> I think I get the idea and it sounds good to me. We could really have an 
>> executor in both cases, but the transport from engine to executor can be 
>> different. Is that what you’re suggesting? And what do you call a driver here?
> 
> +1 to "abstraction to the executor"; indeed, the local and remote engines 
> today differ only by how they invoke the executor, i.e. the transport/driver.
> 
>> 
>>> And since we are porting to oslo.messaging, there's already a fake driver 
>>> that allows for an in process Queue for local execution.  The local 
>>> executor can be a derivative of that fake driver for non-testing purposes.  
>>> And if we don't want to use an in process queue here to avoid the 
>>> complexity, we can have the client side module of the executor determine 
>>> whether to dispatch to a local executor vs. RPC call to a remote executor.
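
For illustration, a minimal sketch of that dispatch idea, assuming
oslo.messaging's fake in-process driver (the topic name below is invented,
not an actual Mistral identifier):

    from oslo.config import cfg
    from oslo import messaging

    def make_executor_client(transport_url=None):
        # url=None uses the configured transport (e.g. rabbit);
        # 'fake://' keeps messages on an in-process queue instead.
        transport = messaging.get_transport(cfg.CONF, url=transport_url)
        return messaging.RPCClient(
            transport, messaging.Target(topic='mistral.executor'))

    local_client = make_executor_client('fake://')  # in-process executor
    remote_client = make_executor_client()          # remote executor via MQ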
>> 
>> Yes, that sounds interesting. Could you please write up some etherpad with 
>> details explaining your idea?
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 17:37:04 -0800
Dan Smith  wrote:

> > onSharedStorage = True
> > on_shared_storage = False
> 
> This is a good example. I'm not sure it's worth breaking users _or_
> introducing a new microversion for something like this. This is
> definitely what I would call a "purity" concern as opposed to
> "usability".

If it was just one case it wouldn't matter but when we're inconsistent
across the whole API it is a usability issue because it makes it so
much harder for a user of the API to "learn" it. They may for example
remember that they need to pass a server id, but they also have to
remember for a particular call whether it should be server_id,
instance_uuid, or id. So referring to the documentation (assuming it is
present and correct) becomes required even after using the API for an
extended period of time. It also makes it much more error prone -
simple typos are much less likely to be picked up by reviewers.

Imagine we had to use a python library where sometimes the method and
parameter names were in snake_case, other times CamelCase. Sometimes a mix
of the two in the same call. Sometimes it would refer to a widget as
widget and other times you had to refer to it as thingy or the call
failed. And if you passed the wrong parameters in it would sometimes
just quietly ignore the bad ones and proceed as if everything was ok.

Oh, and other times it returned saying it had done the work you asked it
to, when really it meant "I'll look at it, but I might not be able to"
(more on this below). I think most developers and reviewers would be
banging their heads on their desks after a while.

> Things like the twenty different datetime formats we expose _do_ seem
> worth the change to me as it requires the client to parse a bunch of
> different formats depending on the situation. However, we could solve
> that with very little code by just exposing all the datetimes again in
> proper format:
> 
>  {
>   "updated_at": "%(random_weirdo)s",
>   "updated_at_iso": "%(isotime)s",
>  }
> 
> Doing the above is backwards compatible and doesn't create code
> organizations based on any sort of pasta metaphor. If we introduce a
> discoverable version tag so the client knows if they will be
> available, I think we're good.
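
A rough sketch of the dual-field idea quoted above (the legacy format string
below is illustrative only; the actual formats vary per field):

    from datetime import datetime

    def add_iso_variant(resource, field='updated_at',
                        legacy_fmt='%Y-%m-%dT%H:%M:%S.%f'):
        parsed = datetime.strptime(resource[field], legacy_fmt)
        # expose the same value again under a stable ISO 8601 key
        resource[field + '_iso'] = parsed.isoformat()
        return resource

    print(add_iso_variant({'updated_at': '2014-02-24T17:37:04.000000'}))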

Except we now also need to handle the case where both are passed in and
end up disagreeing. And what about the user confusion where they see that
in most cases updated_at means one thing, so they start assuming it
always means that, and then get it wrong in the odd case out. Again, it's
harder to code against, harder to review, and the unfortunate side effect
of being too lax in what we accept.

> URL inconsistencies seem "not worth the trouble" and I tend to think
> that the "server" vs. "instance" distinction probably isn't either,
> but I guess I'm willing to consider it.

So again I think it comes down to: consistency increases usability - e.g.
knowing that if you want to operate on a "foo" you always access it
through /foo, rather than most of the time except for those cases where
someone (almost certainly accidentally) ended up writing an interface
where you modify a foo through /bar. The latter makes it much harder to
understand an API.

> Personally, I would rather do what we can/need in order to provide
> features in a compatible way, fix real functional issues (like the
> datetimes), and not ask users to port to a new API to clear up a bunch
> of CamelCase inconsistencies. Just MHO.

So to pick another example of something we can't change in a backwards
compatible way - success return codes.

In the V2 API we have often returned 200 (OK) or 201 (Created) when we
actually mean 202 (Accepted). The first two mean we've done what you
wanted; the last means we've got your request, but hey, it might still
fail. This is often the case where we have an async call underneath
somewhere. We can't change the return code now, because existing apps
that test for 200 or 201 will break if we start returning 202. 

The more experienced users (e.g. those who have been bitten by the bug)
know that the 200 doesn't really mean the requested operation has
succeeded, but the new naive user doesn't. And so in testing everything
works fine (lighter load, not hitting quotas, fewer races etc). But then
occasionally in production things fail, because they're not testing that
the operation has succeeded - just proceeding as if it has, because our
API told them it has. That's not a very user friendly API.
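
To make the failure mode concrete, here is a generic client-side sketch
(the URL and payload are invented; this is not novaclient code):

    import json
    import requests

    resp = requests.post('http://nova.example/servers/some-uuid/action',
                         data=json.dumps({'reboot': {}}),
                         headers={'Content-Type': 'application/json'})
    if resp.status_code in (200, 201):
        print('resource reports the operation completed')
    elif resp.status_code == 202:
        # accepted, not done: poll the resource until it reaches a
        # final state before assuming success
        print('request queued; poll for completion')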

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-24 Thread Rabi Mishra
Hi All,

The 'subnet_id' attribute of the LBaaS Pool resource is documented as "The 
network that pool members belong to".

However, the 'HAProxy' driver allows adding members belonging to different 
subnets/networks to an LBaaS Pool.

It also allows creating a VIP on a different subnet than the pool's. I can see 
there is a validation in Horizon that restricts the VIP to the subnet of the 
pool.

My understanding is that a Pool with a specific subnet would allow members 
from the same subnet, and the VIP would also be on the same subnet.

Can someone please help clarify the design consideration?



[stack@devstack-rabi devstack]$ neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id 547f99da-7dd5
[stack@devstack-rabi devstack]$ neutron lb-pool-list
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| id                                   | name      | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+
| 8235339a-4158-468b-9377-5ece0826e7a6 | http-pool | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+-----------+----------+-------------+----------+----------------+--------+

[stack@devstack-rabi devstack]$ neutron lb-member-create --address 10.0.0.2 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 10.0.0.2                             |
| admin_state_up     | True                                 |
| id                 | e9515a09-1a95-4875-b45f-3b2bab559eb8 |
| pool_id            | 8235339a-4158-468b-9377-5ece0826e7a6 |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | c46ae2b06ee54d06828c346f77fb5628     |
| weight             | 1                                    |
+--------------------+--------------------------------------+
[stack@devstack-rabi devstack]$ neutron lb-member-create --address 10.10.0.2 --protocol-port 80 http-pool
Created a new member:
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| address            | 10.10.0.2                            |
| admin_state_up     | True                                 |
| id                 | 4f4ecd2d-b734-4a98-95ac-06d9d09ddb62 |
| pool_id            | 8235339a-4158-468b-9377-5ece0826e7a6 |
| protocol_port      | 80                                   |
| status             | PENDING_CREATE                       |
| status_description |                                      |
| tenant_id          | c46ae2b06ee54d06828c346f77fb5628     |
| weight             | 1                                    |
+--------------------+--------------------------------------+
[stack@devstack-rabi devstack]$ neutron lb-member-list --sort-key address --sort-dir asc
+--------------------------------------+-----------+---------------+--------+----------------+--------+
| id                                   | address   | protocol_port | weight | admin_state_up | status |
+--------------------------------------+-----------+---------------+--------+----------------+--------+
| 4f4ecd2d-b734-4a98-95ac-06d9d09ddb62 | 10.10.0.2 |            80 |      1 | True           | ACTIVE |
| e9515a09-1a95-4875-b45f-3b2bab559eb8 | 10.0.0.2  |            80 |      1 | True           | ACTIVE |
+--------------------------------------+-----------+---------------+--------+----------------+--------+


[stack@devstack-rabi devstack]$ neutron lb-vip-create --name http-vip --protocol-port 80 --protocol HTTP --subnet-id b1557101-c8f1-415a-846d-6d165a8e8fc2 8235339a-4158-468b-9377-5ece0826e7a6

Created a new vip:
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| address          | 10.10.0.4                            |
| admin_state_up   | True                                 |
| connection_limit | -1                                   |
| description      |                                      |
| id               | 409e72e6-5a3c-4a7b-be0b-6a8784193dfc |
| name             | http-vip                             |
| pool_id          | 8235339a-4158-468b-9377-5ece0826e7a6 |
| port_id          | 9dfc3a6f-4641-4f1d-835b-bda3aea9c6ce |
| protocol         | HTTP                                 |
| protocol_port    | 80                                   |
| 

Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-24 Thread Dmitri Zimine
Winson, 

While you're looking into this and working on the design, maybe also think 
through the other executor/engine communications.

We talked about the executor communicating with the engine over 3 channels (DB, 
REST, RabbitMQ), which I wasn't happy about ;) and put it off for some time. Maybe 
it can be rationalized as part of your design. 

DZ. 

On Feb 24, 2014, at 11:21 AM, W Chan  wrote:

> Renat,
> 
> Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
> don't think the port to oslo.messaging is just a swap from pika to 
> oslo.messaging.  OpenStack services, as I understand, are usually implemented 
> as an RPC client/server over a messaging transport.  Sync vs async calls are 
> done via the RPC client's call and cast respectively.  The messaging transport 
> is abstracted and the concrete implementation is done via drivers/plugins.  So 
> the architecture of the executor if ported to oslo.messaging needs to include 
> a client, a server, and a transport.  The consumer (in this case the mistral 
> engine) instantiates an instance of the client for the executor, makes the 
> method call to handle task, the client then sends the request over the 
> transport to the server.  The server picks up the request from the exchange 
> and processes the request.  If cast (async), the client side returns 
> immediately.  If call (sync), the client side waits for a response from the 
> server over a reply_q (a unique queue for the session in the transport).  
> Also, oslo.messaging allows versioning in the message. Major version change 
> indicates API contract changes.  Minor version indicates backend changes but 
> with API compatibility.  
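
For reference, a hedged sketch of that client/server split (the topic,
server, and method names here are placeholders, not Mistral's actual ones):

    from oslo.config import cfg
    from oslo import messaging

    TOPIC = 'executor'

    class ExecutorEndpoint(object):
        # major version bump = API contract change; minor = compatible
        target = messaging.Target(version='1.0')

        def handle_task(self, ctxt, task):
            return 'processed %s' % task

    def run_executor_service():
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic=TOPIC, server='exec-1')
        server = messaging.get_rpc_server(transport, target,
                                          [ExecutorEndpoint()])
        server.start()   # consume requests from the exchange
        server.wait()

    def engine_side_calls():
        transport = messaging.get_transport(cfg.CONF)
        client = messaging.RPCClient(transport,
                                     messaging.Target(topic=TOPIC))
        client.cast({}, 'handle_task', task='t1')         # async: returns at once
        return client.call({}, 'handle_task', task='t2')  # sync: waits on reply_q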
> 
> So, where I'm headed with this change...  I'm implementing the basic 
> structure/scaffolding for the new executor service using oslo.messaging 
> (default transport with rabbit).  Since the whole change will take a few 
> rounds, I don't want to disrupt any changes that the team is making at the 
> moment and so I'm building the structure separately.  I'm also adding 
> versioning (v1) in the module structure to anticipate any versioning changes 
> in the future.   I expect the change request will lead to some discussion as 
> we are doing here.  I will migrate the core operations of the executor 
> (handle_task, handle_task_error, do_task_action) to the server component when 
> we agree on the architecture and switch the consumer (engine) to use the new 
> RPC client for the executor instead of sending the message to the queue over 
> pika.  Also, the launcher for ./mistral/cmd/task_executor.py will change as 
> well in a subsequent round.  An example launcher is here 
> https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine. 
>  The interceptor project here is what I use to research how oslo.messaging 
> works.  I hope this is clear. The blueprint only changes how the request and 
> response are being transported.  It shouldn't change how the executor 
> currently works.
> 
> Finally, can you clarify the difference between local vs scalable engine?  I 
> personally would prefer not to explicitly name the engine "scalable" because 
> scalability should be in the engine by default and we do not need to 
> explicitly state/separate that.  But if this is a roadblock for the change, I 
> can put the scalable structure back in the change to move this forward.
> 
> Thanks.
> Winson
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Dmitri Zimine
I agree with Winson's points. Inline.

On Feb 24, 2014, at 8:31 PM, Renat Akhmerov  wrote:

> 
> On 25 Feb 2014, at 07:12, W Chan  wrote:
> 
>> As I understand, the local engine runs the task immediately whereas the 
>> scalable engine sends it over the message queue to one or more executors.  
> 
> Correct.

Note that "local" is confusing here; "in process" would reflect what it is 
doing better. 

> 
>> In what circumstances would we see a Mistral user using a local engine 
>> (other than testing) instead of the scalable engine?
> 
> Yes, mostly testing, but it could also be used for demonstration purposes or in 
> environments where installing RabbitMQ is not desirable.
> 
>> If we are keeping the local engine, can we move the abstraction to the 
>> executor instead, having drivers for a local executor and remote executor?  
>> The message flow from the engine to the executor would be consistent, it's 
>> just where the request will be processed.  
> 
> I think I get the idea and it sounds good to me. We could really have an 
> executor in both cases, but the transport from engine to executor can be 
> different. Is that what you’re suggesting? And what do you call a driver here?

+1 to "abstraction to the executor"; indeed, the local and remote engines today 
differ only by how they invoke the executor, i.e. the transport/driver.

> 
>> And since we are porting to oslo.messaging, there's already a fake driver 
>> that allows for an in process Queue for local execution.  The local executor 
>> can be a derivative of that fake driver for non-testing purposes.  And if we 
>> don't want to use an in process queue here to avoid the complexity, we can 
>> have the client side module of the executor determine whether to dispatch to 
>> a local executor vs. RPC call to a remote executor.
> 
> Yes, that sounds interesting. Could you please write up some etherpad with 
> details explaining your idea?
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-24 Thread Lingxian Kong
2014-02-25 11:25 GMT+08:00 Dong Liu :

> Thanks Jay, now I understand that neutron will probably not handle tenant
> creation/deletion notifications coming from keystone.
>
> There is another question, such as creating subnet request body:
> {
>   "subnet": {
>     "name": "test_subnet",
>     "enable_dhcp": true,
>     "network_id": "57596b26-080d-4802-8cce-4318b7e543d5",
>     "ip_version": 4,
>     "cidr": "10.0.0.0/24",
>     "tenant_id": "4209c294d1bb4c36acdfaa885075e0f1"
>   }
> }

So this is exactly the 'tenant_id' I mean here that should be
validated.
I maintain this could be done via some middleware or similar.
> As we know, the tenant_id can only be specified by admin tenant.
>
> In my test, the tenant_id I filled in the body can be any string (e.g., a
> name, a uuid, etc.). But I think the tenant's existence (I mean whether the
> tenant exists in keystone) should be verified; if not, the subnet I created
> will be a useless resource.
>
> Regards,
> Dong Liu
>
>
> On 2014-02-25 0:22, Jay Pipes wrote:
>
>> On Mon, 2014-02-24 at 16:23 +0800, Lingxian Kong wrote:
>>
>>> I think 'tenant_id' should always be validated when creating neutron
>>> resources, whether or not Neutron can handle the notifications from
>>> Keystone when tenant is deleted.
>>>
>>
>> -1
>>
>> Personally, I think this cross-service request is likely too expensive
>> to do on every single request to Neutron. It's already expensive enough
>> to use Keystone when not using PKI tokens, and adding another round trip
>> to Keystone for this kind of thing is not appealing to me. The tenant is
>> already "validated" when it is used to get the authentication token used
>> in requests to Neutron, so other than the scenarios where a tenant is
>> deleted in Keystone (which, with notifications in Keystone, there is now
>> a solution for), I don't see much value in the extra expense this would
>> cause.
>>
>> Best,
>> -jay
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Nader Lahouti
Hi Lance,

And I'm doing the same. With the resource ID from the notification, I'm
using the keystoneclient to get the project name.
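
As a sketch of that pattern (the handler wiring is hypothetical;
'resource_info' is the payload key as documented for keystone event
notifications):

    from keystoneclient.v2_0 import client as ks_client

    def on_project_created(payload, auth_url, admin_token):
        project_id = payload['resource_info']  # uuid from the notification
        keystone = ks_client.Client(token=admin_token, endpoint=auth_url)
        project = keystone.tenants.get(project_id)  # follow-up GET
        return project.name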


Regards,
Nader.



On Mon, Feb 24, 2014 at 10:50 AM, Lance D Bragstad wrote:

> Response below.
>
>
> Best Regards,
>
> Lance Bragstad
> ldbra...@us.ibm.com
>
> Nader Lahouti  wrote on 02/24/2014 11:31:10 AM:
>
> > From: Nader Lahouti 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > ,
> > Date: 02/24/2014 11:37 AM
> > Subject: Re: [openstack-dev] [keystone] Notification When Creating/
> > Deleting a Tenant in openstack
>
> >
> > Hi Swann,
> >
> > I was able to listen to keystone notifications by configuring
> > notifications in the keystone.conf file. I only needed the (CRUD)
> > notifications for projects, and I handle them in my plugin code, so I
> > don't need ceilometer to handle them.
> > The other issue is that the notification is limited to the
> > resource_id and doesn't have other information such as the project name.
>
> The idea behind this when we originally implemented notifications in
> Keystone was to provide the resource being changed, such as 'user',
> 'project', 'trust', and the uuid of that resource. From there your plugin
> could request more information from Keystone by doing a GET on that
> resource. This way we could keep the payload of the notification sent
> minimal, in case all the information on the resource wasn't required.
>
> >
> > Thanks,
> > Nader.
> >
> >
>
> > On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset 
> wrote:
> >
> > Hi Nader,
> >
> > These notifications should be handled by Ceilometer like the others [1].
> > It is surprising that it does not already have identity meters, indeed...
> > probably nobody needed them before you.
> > I guess it remains to open a BP and code them like I recently did for
> Heat [2]
> >
> >
> > http://docs.openstack.org/developer/ceilometer/measurements.html
> >
> https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
> >
>
> > 2014-02-20 19:10 GMT+01:00 Nader Lahouti :
> >
> > Thanks Dolph for the link. The document shows the format of the message
> > but doesn't give any info on how to listen for the notifications.
> > Is there any other document showing the details of how to listen for or
> > get these notifications?
> >
> > Regards,
> > Nader.
> >
> > On Feb 20, 2014, at 9:06 AM, Dolph Mathews 
> wrote:
>
> > Yes, see:
> >
> >   http://docs.openstack.org/developer/keystone/event_notifications.html
> >
> > On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti wrote:
> > Hi All,
> >
> > I have a question regarding creating/deleting a tenant in openstack
> > (using horizon or CLI). Is there any notification mechanism in place
> > so that an application gets informed of such an event?
> >
> > If not, can it be done using plugin to send create/delete
> > notification to an application?
> >
> > Appreciate your suggestion and help.
> >
> > Regards,
> > Nader.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Plugin architecture for custom actions?

2014-02-24 Thread Renat Akhmerov
Yes, it’s one of the important things that we would like to implement. It’s not 
the highest priority right now but we definitely need to do that. Dmitri Zimine 
and I talked about it and at some point Dmitri wanted to start working on it. 
However, we can decide to rearrange our plans within the team and assign this 
task to someone else.

We already have a BP 
(https://blueprints.launchpad.net/mistral/+spec/mistral-pluggable-task-actions) 
but it doesn’t describe how this plugin architecture should be implemented.

Renat Akhmerov
@ Mirantis Inc.



On 25 Feb 2014, at 08:43, W Chan  wrote:

> Will Mistral be supporting custom actions developed by users?  If so, should 
> the Actions module be refactored to individual plugins with a dynamic process 
> for action type mapping/lookup?
> 
> Thanks.
> Winson
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread Renat Akhmerov

On 25 Feb 2014, at 07:12, W Chan  wrote:

> As I understand, the local engine runs the task immediately whereas the 
> scalable engine sends it over the message queue to one or more executors.  

Correct.

> In what circumstances would we see a Mistral user using a local engine (other 
> than testing) instead of the scalable engine?

Yes, mostly testing, but it could also be used for demonstration purposes or in 
environments where installing RabbitMQ is not desirable.

> If we are keeping the local engine, can we move the abstraction to the 
> executor instead, having drivers for a local executor and remote executor?  
> The message flow from the engine to the executor would be consistent, it's 
> just where the request will be processed.  

I think I get the idea and it sounds good to me. We could really have an executor 
in both cases, but the transport from engine to executor can be different. Is 
that what you’re suggesting? And what do you call a driver here?

> And since we are porting to oslo.messaging, there's already a fake driver 
> that allows for an in process Queue for local execution.  The local executor 
> can be a derivative of that fake driver for non-testing purposes.  And if we 
> don't want to use an in process queue here to avoid the complexity, we can 
> have the client side module of the executor determine whether to dispatch to 
> a local executor vs. RPC call to a remote executor.

Yes, that sounds interesting. Could you please write up some etherpad with 
details explaining your idea?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-24 Thread Renat Akhmerov

On 25 Feb 2014, at 02:21, W Chan  wrote:

> Renat,
> 
> Regarding your comments on change https://review.openstack.org/#/c/75609/, I 
> don't think the port to oslo.messaging is just a swap from pika to 
> oslo.messaging.  OpenStack services as I understand is usually implemented as 
> an RPC client/server over a messaging transport.  Sync vs async calls are 
> done via the RPC client call and cast respectively.  The messaging transport 
> is abstracted and concrete implementation is done via drivers/plugins.  So 
> the architecture of the executor if ported to oslo.messaging needs to include 
> a client, a server, and a transport.  The consumer (in this case the mistral 
> engine) instantiates an instance of the client for the executor, makes the 
> method call to handle task, the client then sends the request over the 
> transport to the server.  The server picks up the request from the exchange 
> and processes the request.  If cast (async), the client side returns 
> immediately.  If call (sync), the client side waits for a response from the 
> server over a reply_q (a unique queue for the session in the transport).  
> Also, oslo.messaging allows versioning in the message. Major version change 
> indicates API contract changes.  Minor version indicates backend changes but 
> with API compatibility.  

My main concern about this patch is not related to the messaging infrastructure. 
I believe you know better than me how it should look. I’m mostly concerned 
with the way of making changes you chose. From my perspective, it’s much better 
to make atomic changes, where each change doesn’t affect too much of the existing 
architecture. So the first step could be to change pika to oslo.messaging with 
minimal structural changes, without introducing versioning (could be just a TODO 
comment saying that the framework allows it and we may want to use it in the 
future, to be decided), and without getting rid of the current engine structure 
(local, scalable). Some of the things in the file structure and architecture 
came from decisions made by many people, and we need to be careful about 
changing them.


> So, where I'm headed with this change...  I'm implementing the basic 
> structure/scaffolding for the new executor service using oslo.messaging 
> (default transport with rabbit).  Since the whole change will take a few 
> rounds, I don't want to disrupt any changes that the team is making at the 
> moment and so I'm building the structure separately.  I'm also adding 
> versioning (v1) in the module structure to anticipate any versioning changes 
> in the future.   I expect the change request will lead to some discussion as 
> we are doing here.  I will migrate the core operations of the executor 
> (handle_task, handle_task_error, do_task_action) to the server component when 
> we agree on the architecture and switch the consumer (engine) to use the new 
> RPC client for the executor instead of sending the message to the queue over 
> pika.  Also, the launcher for ./mistral/cmd/task_executor.py will change as 
> well in a subsequent round.  An example launcher is here 
> https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine. 
>  The interceptor project here is what I use to research how oslo.messaging 
> works.  I hope this is clear. The blueprint only changes how the request and 
> response are being transported.  It shouldn't change how the executor 
> currently works.

Please create a document describing the approach you’re pursuing here. I would 
expect to see the main goals you want to achieve upon completion.

> Finally, can you clarify the difference between local vs scalable engine?  I 
> personally would prefer not to explicitly name the engine "scalable" because 
> scalability should be in the engine by default and we do not need to 
> explicitly state/separate that.  But if this is a roadblock for the change, I 
> can put the scalable structure back in the change to move this forward.

The separation into local and scalable implementations appeared for historical 
reasons: from the beginning we didn’t see how it all would look, and hence we 
tried different approaches to implementing the engine. At some point we got 2 
working versions: one that didn’t distribute anything (local) and another that 
could distribute tasks over task executors via an asynchronous HA transport 
(scalable). Later on we decided to keep them both, since scalable is needed by 
the requirements and local might be useful for demonstration purposes and 
testing, since it doesn’t require RabbitMQ to be installed. So we decided to 
refactor both and make them work similarly except for the way they run tasks.

Thanks.

Renat Akhmerov
@Mirantis Inc.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help a poor Nova Grizzly Backport Bug Fix

2014-02-24 Thread Michael Davies
Hi all,

I have a Nova Grizzly backport bug[1] in review[2] that has been hanging
around for 4 months waiting for one more +2 from a stable team person.

If there's someone kind enough to bump this through, it'd be appreciated ;)

Thanks in advance,

Michael...

[1] https://launchpad.net/bugs/1188543
[2] https://review.openstack.org/#/c/54460/
-- 
Michael Davies   mich...@the-davies.net
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread 黎林果
Yes. You are right.

The BP already implements this function.

Thank you very much.

2014-02-25 11:01 GMT+08:00 Robert Kukura :
> On 02/24/2014 09:11 PM, 黎林果 wrote:
>> Bob,
>>
>> Thank you very much. I have understood.
>>
>> Another question:
>> When creating a network with provider attributes, if the network type is
>> VLAN, provider:segmentation_id must be specified.
>>
>> In function: def _process_provider_create(self, context, attrs)
>>
>> I think it can come from the db too. If getting it from the db fails, then
>> throw an exception.
>
> I think you are suggesting that if the provider:network_type and
> provider:physical_network are specified, but provider:segmentation_id is
> not specified, then a value should be allocated from the tenant network
> pool. Is that correct?
>
> If so, that sounds similar to
> https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs,
> which is being implemented in the ML2 plugin for icehouse. I would not
> expect a similar feature to be implemented for the openvswitch
> monolithic plugin, since that is being deprecated.
>
>>
>> what's your opinion?
>
> If I understand it correctly, I agree this feature could be useful.
>
> -Bob
>
>>
>> Thanks!
>>
>> 2014-02-24 21:50 GMT+08:00 Robert Kukura :
>>> On 02/24/2014 07:09 AM, 黎林果 wrote:
>>>> Hi stackers,
>>>>
>>>>   When creating a network, if we don't set provider:network_type,
>>>> provider:physical_network or provider:segmentation_id, the
>>>> network_type comes from cfg, but the other two come from the db's first
>>>> record. The code is
>>>>
>>>> (physical_network,
>>>>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>>>>
>>>>
>>>>   There are two questions.
>>>>   1, network_vlan_ranges = physnet1:100:200
>>>>  Can we configure multiple physical_networks via cfg?
>>>
>>> Hi Lee,
>>>
>>> You can configure multiple physical_networks. For example:
>>>
>>> network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3
>>>
>>> This makes ranges of VLAN tags on physnet1 and physnet2 available for
>>> allocation as tenant networks (assuming tenant_network_type = vlan).
>>>
>>> This also makes physnet1, physnet2, and physnet3 available for
>>> allocation of VLAN (and flat for OVS) provider networks (with admin
>>> privilege). Note that physnet3 is available for allocation of provider
>>> networks, but not for tenant networks because it does not have a range
>>> of VLANs specified.
>>>

>>>>   2, If yes, the physical_network would be uncertain. Is this logical?
>>>
>>> Each physical_network is considered to be a separate VLAN trunk, so VLAN
>>> 2345 on physnet1 is a different isolated network than VLAN 2345 on
>>> physnet2. All the specified (physical_network,segmentation_id) tuples
>>> form a pool of available tenant networks. Normal tenants have no
>>> visibility of which physical_network trunk their networks get allocated on.
>>>
>>> -Bob
>>>


>>>> Regards!
>>>>
>>>> Lee Li
>>>>
>>>> ___
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev@lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread Yuzhou (C)

2014-02-24 21:50 GMT+08:00 Robert Kukura :
> On 02/24/2014 07:09 AM, 黎林果 wrote:
>> Hi stackers,
>>
>>   When creating a network, if we don't set provider:network_type,
>> provider:physical_network or provider:segmentation_id, the
>> network_type comes from cfg, but the other two come from the db's first
>> record. The code is
>>
>> (physical_network,
>>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>>
>>
>>
>>   There are two questions.
>>   1, network_vlan_ranges = physnet1:100:200
>>  Can we configure multiple physical_networks via cfg?
>
> Hi Lee,
>
> You can configure multiple physical_networks. For example:
>
> network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3
>
> This makes ranges of VLAN tags on physnet1 and physnet2 available for
> allocation as tenant networks (assuming tenant_network_type = vlan).
>
> This also makes physnet1, physnet2, and physnet3 available for
> allocation of VLAN (and flat for OVS) provider networks (with admin
> privilege). Note that physnet3 is available for allocation of provider
> networks, but not for tenant networks because it does not have a range
> of VLANs specified.
>
>>
>>   2, If yes, the physical_network would be uncertain. Is this logical?
>
> Each physical_network is considered to be a separate VLAN trunk, so VLAN
> 2345 on physnet1 is a different isolated network than VLAN 2345 on
> physnet2. All the specified (physical_network,segmentation_id) tuples
> form a pool of available tenant networks. Normal tenants have no
> visibility of which physical_network trunk their networks get allocated on.
>
> -Bob
>
>>
>>
>> Regards!
>>
>> Lee Li


Why is VLAN 2345 on physnet1 a different isolated network than VLAN 2345 on
physnet2?

I think different physnets send traffic out different physical NICs, but this
traffic carries the same VLAN tag 2345! So why is it isolated?

Regards

Zhou Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Do you think tenant_id should be verified

2014-02-24 Thread Dong Liu
Thanks Jay, now I understand that neutron will probably not handle tenant
creation/deletion notifications coming from keystone.


There is another question, such as creating subnet request body:
{
  "subnet": {
"name": "test_subnet",
"enable_dhcp": true,
"network_id": "57596b26-080d-4802-8cce-4318b7e543d5",
"ip_version": 4,
"cidr": "10.0.0.0/24",
"tenant_id": "4209c294d1bb4c36acdfaa885075e0f1"
  }
}
As we know, the tenant_id can only be specified by admin tenant.

In my test, the tenant_id I filled in the body can be any string (e.g.,
a name, a uuid, etc.). But I think the tenant's existence (I mean whether
the tenant exists in keystone) should be verified; if not, the subnet I
created will be a useless resource.
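
A sketch of the validation being asked for (a hypothetical helper, not
existing Neutron code):

    from keystoneclient import exceptions as ks_exc
    from keystoneclient.v2_0 import client as ks_client

    def tenant_exists(keystone_endpoint, admin_token, tenant_id):
        keystone = ks_client.Client(token=admin_token,
                                    endpoint=keystone_endpoint)
        try:
            keystone.tenants.get(tenant_id)  # GET /tenants/{tenant_id}
            return True
        except ks_exc.NotFound:
            return False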


Regards,
Dong Liu

On 2014-02-25 0:22, Jay Pipes wrote:

On Mon, 2014-02-24 at 16:23 +0800, Lingxian Kong wrote:

I think 'tenant_id' should always be validated when creating neutron
resources, whether or not Neutron can handle the notifications from
Keystone when tenant is deleted.


-1

Personally, I think this cross-service request is likely too expensive
to do on every single request to Neutron. It's already expensive enough
to use Keystone when not using PKI tokens, and adding another round trip
to Keystone for this kind of thing is not appealing to me. The tenant is
already "validated" when it is used to get the authentication token used
in requests to Neutron, so other than the scenarios where a tenant is
deleted in Keystone (which, with notifications in Keystone, there is now
a solution for), I don't see much value in the extra expense this would
cause.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] std:repeat action

2014-02-24 Thread manas kelshikar
Hi everyone,

I have put down my thoughts about the standard repeat action blueprint.

https://blueprints.launchpad.net/mistral/+spec/mistral-std-repeat-action

I have added a link to an etherpad document which explores a few alternatives
to the approach. I have explored details of how the std:repeat action
should behave as defined in the blueprint. Further, there are some thoughts
on how it could be designed to remove ambiguity in the chaining.

Please take a look.

Thanks,
Manas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-24 Thread Liuji (Jeremy)
Hi,

http://usbip.sourceforge.net/ is a good implementation of USB redirection.

But now I am more concerned about how to provide USB passthrough.

USB devices are widely used in private/hybrid clouds (e.g., as USB keys), and
there are no technical issues in libvirt/qemu, so I think it is a valuable
feature for openstack.

So, are there any further suggestions?

Thanks,
Jeremy Liu

> -Original Message-
> From: gustavo panizzo  [mailto:gfa...@zumbi.com.ar]
> Sent: Tuesday, February 25, 2014 2:25 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Liuji (Jeremy); bpavlo...@mirantis.com; Luohao (brian); Yuanjing (D)
> Subject: Re: [openstack-dev] [nova] Question about USB passthrough
> 
> On 02/24/2014 01:10 AM, Liuji (Jeremy) wrote:
> > Hi, Boris and all other guys:
> >
> > I have found a BP about USB device passthrough in
> https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough.
> > I have also read the latest nova code and make sure it doesn't support USB
> passthrough by now.
> >
> > Are there any progress or plan for USB passthrough?
> use usbip, it works today and is awesome!
> 
> http://usbip.sourceforge.net/
> 
> >
> >
> > Thanks,
> > Jeremy Liu
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> --
> 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread Robert Kukura
On 02/24/2014 09:11 PM, 黎林果 wrote:
> Bob,
> 
> Thank you very much. I have understood.
> 
> Another question:
> When creating a network with provider attributes, if the network type is
> VLAN, provider:segmentation_id must be specified.
> 
> In function: def _process_provider_create(self, context, attrs)
> 
> I think it can come from the db too. If getting it from the db fails, then
> throw an exception.

I think you are suggesting that if the provider:network_type and
provider:physical_network are specified, but provider:segmentation_id is
not specified, then a value should be allocated from the tenant network
pool. Is that correct?

If so, that sounds similar to
https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs,
which is being implemented in the ML2 plugin for icehouse. I would not
expect a similar feature to be implemented for the openvswitch
monolithic plugin, since that is being deprecated.
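
A very rough sketch of that fallback (the allocator argument is a stand-in;
real allocation would go through the plugin's VLAN-range bookkeeping):

    def process_provider_create(attrs, allocate_vlan):
        """Sketch only: resolve provider attrs, falling back to the pool."""
        network_type = attrs.get('provider:network_type')
        physical_network = attrs.get('provider:physical_network')
        segmentation_id = attrs.get('provider:segmentation_id')
        if network_type == 'vlan' and segmentation_id is None:
            # draw a VLAN for this physnet from the tenant pool instead
            # of raising; raise only if that pool is exhausted
            segmentation_id = allocate_vlan(physical_network)
        return network_type, physical_network, segmentation_id

    # e.g. process_provider_create({'provider:network_type': 'vlan',
    #                               'provider:physical_network': 'physnet1'},
    #                              allocate_vlan=lambda physnet: 101)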

> 
> what's your opinion?

If I understand it correctly, I agree this feature could be useful.

-Bob

> 
> Thanks!
> 
> 2014-02-24 21:50 GMT+08:00 Robert Kukura :
>> On 02/24/2014 07:09 AM, 黎林果 wrote:
>>> Hi stackers,
>>>
>>>   When creating a network, if we don't set provider:network_type,
>>> provider:physical_network or provider:segmentation_id, the
>>> network_type comes from cfg, but the other two come from the db's first
>>> record. The code is
>>>
>>> (physical_network,
>>>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>>>
>>>
>>>
>>>   There are two questions.
>>>   1, network_vlan_ranges = physnet1:100:200
>>>  Can we configure multiple physical_networks via cfg?
>>
>> Hi Lee,
>>
>> You can configure multiple physical_networks. For example:
>>
>> network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3
>>
>> This makes ranges of VLAN tags on physnet1 and physnet2 available for
>> allocation as tenant networks (assuming tenant_network_type = vlan).
>>
>> This also makes physnet1, physnet2, and physnet3 available for
>> allocation of VLAN (and flat for OVS) provider networks (with admin
>> privilege). Note that physnet3 is available for allocation of provider
>> networks, but not for tenant networks because it does not have a range
>> of VLANs specified.
>>
>>>
>>>   2, If yes, the physical_network would be uncertain. Is this logical?
>>
>> Each physical_network is considered to be a separate VLAN trunk, so VLAN
>> 2345 on physnet1 is a different isolated network than VLAN 2345 on
>> physnet2. All the specified (physical_network,segmentation_id) tuples
>> form a pool of available tenant networks. Normal tenants have no
>> visibility of which physical_network trunk their networks get allocated on.
>>
>> -Bob
>>
>>>
>>>
>>> Regards!
>>>
>>> Lee Li
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Kenichi Oomichi

> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Tuesday, February 25, 2014 6:35 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Future of the Nova API
> 
> > - twice the code
> > - different enough to be annoying to convert existing clients to use
> > - not currently different enough to justify the pain
> 
> For starters, It's not twice the code because we don't do things like
> proxying and because we are able to logically separate out input
> validation jsonschema.
> 
> v2 API: ~14600 LOC
> v3 API: ~7300 LOC (~8600 LOC if nova-network as-is added back in,
> though the actual increase would almost certainly be a lot smaller)
> 
> And that's with a lot of the jsonschema patches not landed. So it's
> actually getting *smaller*. Long term, which looks better from a
> maintenance point of view?

The merits of jsonschema validation are not only less code but also
clarifying the API attributes Nova has.
Through the jsonschema validation development, we needed to clarify all API
attributes of each API and write all of them into an API schema defined with
jsonschema. For example,
https://review.openstack.org/#/c/68560/6/nova/api/openstack/compute/schemas/v3/scheduler_hints.py
clarifies that the API extension scheduler_hints of the "create a server" API
contains 7 API attributes and their data types. I don't think we have
enough API documentation showing all API attributes.
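
For readers unfamiliar with the mechanism, an illustrative (not the actual
Nova) schema and check:

    import jsonschema

    scheduler_hints_schema = {
        'type': 'object',
        'properties': {
            'group': {'type': 'string'},
            'same_host': {'type': 'array', 'items': {'type': 'string'}},
            'different_host': {'type': 'array',
                               'items': {'type': 'string'}},
        },
        # unknown attributes are rejected instead of silently ignored
        'additionalProperties': False,
    }

    jsonschema.validate({'same_host': ['a-server-uuid']},
                        scheduler_hints_schema)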

So now I have a question: what should deployers answer if their users ask
  "What API attributes can we specify to your OpenStack API?"
Should we/deployers dig through all the v2 API code for all API
attributes? I think many people on this ML have had this kind of
experience.
If all the jsonschema patches land, we can show all API attributes to
deployers/users by just pointing at the API schema directory.


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova Bug Scrub meeting

2014-02-24 Thread Tracy Jones
Hi all - I have set up the nova bug scrub meeting for Wednesdays at 1630 UTC in 
the #openstack-meeting-3 IRC channel.


The first meeting will be all about triaging the 117 un-triaged bugs.


https://wiki.openstack.org/wiki/Meetings/NovaBugScrub#Weekly_OpenStack_Nova_Bug_Scrub_Meeting


Weekly on Wednesday at 1630 UTC
IRC channel: #openstack-meeting-3
Chair (to contact for more information): Tracy Jones
See Meetings/NovaBugScrub for an agenda


Come join the fun!

Tracy___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Plugin architecture for custom actions?

2014-02-24 Thread Georgy Okrokvertskhov
Hi Winson,

I think it is a good idea to support a pluggable interface for actions. I
think you can submit a BP for that.
There is a Python library, stevedore, developed in the OpenStack community. I
don't know the details, but it looks like this library is intended to help
build plugins.
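
As a hedged sketch of how stevedore could be used here (the entry-point
namespace and action name are hypothetical, not existing Mistral
identifiers):

    from stevedore import driver

    def load_action(action_type, **params):
        mgr = driver.DriverManager(
            namespace='mistral.actions',  # entry-point group in setup.cfg
            name=action_type,             # action type named in a workbook
            invoke_on_load=True,
            invoke_kwds=params,
        )
        return mgr.driver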

Thanks
Georgy


On Mon, Feb 24, 2014 at 5:43 PM, W Chan  wrote:

> Will Mistral be supporting custom actions developed by users?  If so,
> should the Actions module be refactored to individual plugins with a
> dynamic process for action type mapping/lookup?
>
> Thanks.
> Winson
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
CC'ing the openstack-operators mailing list to get a wider set of
feedback on this question.

On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
>> 1) Continue as we have been, and plan to release v3 once we have a
>> compelling enough feature set.
> 
> So I think we should release in Juno even if it's only with tasks and
> nova-network added. Because this allows new users to start using the
> API immediately rather than having to code against V2 (with its extra
> barriers to use) and then take the hit to upgrade later.

OK, let's go a bit further with the case of marking the v3 API stable in
Juno.  If we did that, what is a reasonable timeframe of v2 being
deprecated before it could be removed?

From a selfish developer perspective, the answer is "remove v2
immediately".  From a selfish user perspective, the answer is "never
remove it".  Where is the reasonable middle ground?  How long would it
take to have enough clients migrated that we could remove the old API?

I'm interested in answers from pretty much everyone on this, including
deployers, as well as users of our APIs.  I'm also especially interested
in an opinion from large public clouds based on OpenStack.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread 黎林果
Bob,

Thank you very much. I have understood.

Another question:
When creating a network with provider attributes, if the network type is
VLAN, provider:segmentation_id must be specified.

In function: def _process_provider_create(self, context, attrs)

I think it can come from the db too. If getting it from the db fails, then
throw an exception.

what's your opinion?

Thanks!

2014-02-24 21:50 GMT+08:00 Robert Kukura :
> On 02/24/2014 07:09 AM, 黎林果 wrote:
>> Hi stackers,
>>
>>   When creating a network, if we don't set provider:network_type,
>> provider:physical_network or provider:segmentation_id, the
>> network_type comes from cfg, but the other two come from the db's first
>> record. The code is
>>
>> (physical_network,
>>  segmentation_id) = ovs_db_v2.reserve_vlan(session)
>>
>>
>>
>>   There are two questions.
>>   1, network_vlan_ranges = physnet1:100:200
>>  Can we configure multiple physical_networks via cfg?
>
> Hi Lee,
>
> You can configure multiple physical_networks. For example:
>
> network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3
>
> This makes ranges of VLAN tags on physnet1 and physnet2 available for
> allocation as tenant networks (assuming tenant_network_type = vlan).
>
> This also makes physnet1, physnet2, and physnet3 available for
> allocation of VLAN (and flat for OVS) provider networks (with admin
> privilege). Note that physnet3 is available for allocation of provider
> networks, but not for tenant networks because it does not have a range
> of VLANs specified.
>
>>
>>   2, If yes, the physical_network would be uncertain. Is this logical?
>
> Each physical_network is considered to be a separate VLAN trunk, so VLAN
> 2345 on physnet1 is a different isolated network than VLAN 2345 on
> physnet2. All the specified (physical_network,segmentation_id) tuples
> form a pool of available tenant networks. Normal tenants have no
> visibility of which physical_network trunk their networks get allocated on.
>
> -Bob
>
>>
>>
>> Regards!
>>
>> Lee Li
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Live migration

2014-02-24 Thread Dmitry Borodaenko
Dear Horizon developers,

I think that the blueprint to add live migrations support to
Horizon[0] was incorrectly labeled as a duplicate of the earlier
migrate-instance blueprint[1].

[0] https://blueprints.launchpad.net/horizon/+spec/live-migration
[1] https://blueprints.launchpad.net/horizon/+spec/migrate-instance

These two blueprints are not duplicates. As I commented in the
blueprint whiteboard, live migration is a significantly different
migration mode that is currently not implemented in Horizon. The
current behaviour is misleading and may confuse users looking for live
migrations into triggering disruptive cold migrations instead.

I think we should reopen this blueprint and put it back into the queue.

Thanks,

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Stephen Balukoff
Hi y'all,

Jay, in the L7 example you give, it looks like you're setting SSL
parameters for a given load balancer front-end. Do you have an example you
can share where certain traffic is sent to one set of back-end nodes,
and other traffic is sent to a different set of back-end nodes based on the
URL in the client request? (I'm trying to understand how this can work
without the concept of 'pools'.)  Also, what if the first group of nodes
needs a different health check run against it than the second group of
nodes?

As far as hiding implementation details from the user:  To a certain degree
I agree with this, and to a certain degree I do not: OpenStack is a cloud
OS fulfilling the needs of supplying IaaS. It is not a PaaS. As such, the
objects that users deal with largely are analogous to physical pieces of
hardware that make up a cluster, albeit these are virtualized or
conceptualized. Users can then use these conceptual components of a cluster
to build the (virtual) infrastructure they need to support whatever
application they want. These objects have attributes and are expected to
act in a certain way, which again, are usually analogous to actual hardware.

If we were building a PaaS, the story would be a lot different--  but what
we are building is a cloud OS that provides Infrastructure (as a service).

I think the concept of a 'load balancer' or 'load balancer service' is one
of these building blocks that has attributes and is expected to act in a
certain way. (Much the same way cinder provides "block devices" or swift
provides an "object store.") And yes, while you can do away with a lot of
the implementation details and use a very simple model for the simplest use
case, there are a whole lot of load balancer use cases more complicated
than that which don't work with the current model (or even a small
alteration to the current model). If you don't allow for these more
complicated use cases, you end up with users stacking home-built software
load balancers behind the cloud OS load balancers in order to get the
features they actually need. (I understand this is a very common topology
with ELB, because ELB simply isn't capable of doing advanced things, from
the user's perspective.) In my opinion, we should be looking well beyond
what ELB can do. :P Ideally, almost all users should not have to hack
together their own load balancer because the cloud OS load balancer can't
do what they need it to do.

I'm all for having the simplest workflow possible for the basic user-- and
using the principle of least surprise when assuming defaults so that when
they grow and their needs change, they won't often have to completely
rework the load balancer component in their cluster. But the model we use
should be sufficiently sophisticated to support advanced workflows.

Also, from a cloud administrator's point of view, the cloud OS needs to be
aware of all the actual hardware components, virtual components, and other
logical constructs that make up the cloud in order to be able to
effectively maintain it. Again, almost all the details of this should be
hidden from the user. But these details must not be hidden from the cloud
administrator. This means implementation details will be represented
somehow, and will be visible to the cloud administrator.

Yes, the focus needs to be on making the user's experience as simple as
possible. But we shouldn't sacrifice powerful capabilities for a simpler
experience. And if we ignore the needs of the cloud administrator, then we
end up with a cloud that is next to impossible to practically administer.

Do y'all disagree with this, and if so, could you please share your
reasoning?

Thanks,
Stephen




On Mon, Feb 24, 2014 at 1:24 PM, Eugene Nikanorov wrote:

> Hi Jay,
>
> Thanks for suggestions. I get the idea.
> I'm not sure the essence of this API is much different than what we have
> now.
> 1) We operate on parameters of the loadbalancer rather than on
> vips/pools/listeners. No matter how we name them, the notions are there.
> 2) I see two opposite preferences: one is that user doesn't care about
> 'loadbalancer' in favor of pools/vips/listeners ('pure logical API')
> another is vice versa (yours).
> 3) The approach of providing $BALANCER_ID to pretty much every call
> solves all my concerns, I like it.
> Basically that was my initial code proposal (it's not exactly the same,
> but it's very close).
> The idea of my proposal was to have that 'balancer' resource plus being
> able to operate on vips/pools/etc.
> In this direction we could evolve from existing API to the API in your
> latest suggestion.
>
> Thanks,
> Eugene.
>
>
> On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes  wrote:
>
>> Thanks, Eugene! I've given the API a bit of thought today and jotted
>> down some thoughts below.
>>
>> On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
>> > Could you provide some examples -- even in the pseudo-CLI
>> > commands like
>> > I did below. It's really d

Re: [openstack-dev] Re: [OpenStack-dev][Nova] Can we add one configuration item for cache-using in libvirt/hypervisor?

2014-02-24 Thread Rui Chen
I think a domain attribute is more appropriate than a nova.conf node config;
we need to consider cross-host tasks like migrate and live-migrate :)
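
For reference, a minimal sketch of the nova.conf approach wingwj proposes
below (the option name here is hypothetical; VIR_DOMAIN_SAVE_BYPASS_CACHE is
the real libvirt flag):

    # Sketch only, not Nova code: a hypothetical nova.conf option that
    # toggles libvirt's VIR_DOMAIN_SAVE_BYPASS_CACHE flag on managedSave().
    import libvirt
    from oslo.config import cfg

    opts = [
        cfg.BoolOpt('libvirt_save_bypass_cache', default=False,
                    help='Bypass the host page cache on managedSave/suspend'),
    ]
    CONF = cfg.CONF
    CONF.register_opts(opts)

    def suspend(dom):
        """Suspend a domain, optionally bypassing the host page cache."""
        flags = 0
        if CONF.libvirt_save_bypass_cache:
            flags |= libvirt.VIR_DOMAIN_SAVE_BYPASS_CACHE
        dom.managedSave(flags)

A host-wide flag like this applies to every domain on the node, which is
exactly why a per-domain attribute handles migrate and live-migrate better.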


2014-02-24 10:45 GMT+08:00 zhangyu (AI) :

>  Sure, hard-coding seems weird…
>
>
>
> However, a global configuration here dominates all domains. It might be a
> little too strong in cases in which we want to apply various configurations
> to different domains.
>
>
>
> Could we add a new attribute to the domain-creation info for this? Or is
> there any other suggestion?
>
>
>
> Thanks!
>
>
>
> From: wu jiang [mailto:win...@gmail.com]
> Sent: 24 February 2014 10:31
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [OpenStack-dev][Nova] Can we add one configuration
> item for cache-using in libvirt/hypervisor?
>
>
>
> Hi all,
>
>
>
> Recently, I hit a scenario that requires bypassing the cache on a Linux
> hypervisor.
>
>
>
> But some code in libvirt/driver.py (including suspend/snapshot) is
> hard-coded.
>
> For example:
>
> ---
>
> def suspend(self, instance):
>     """Suspend the specified instance."""
>     dom = self._lookup_by_name(instance['name'])
>     self._detach_pci_devices(dom,
>         pci_manager.get_instance_pci_devs(instance))
>     dom.managedSave(0)
>
>
>
> So, can we add one configuration item in nova.conf, like
> DOMAIN_SAVE_BYPASS_CACHE, to let the operator handle it?
>
>
>
> That would improve the flexibility of Nova.
>
>
>
>
>
> Thanks
>
> wingwj
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Plugin architecture for custom actions?

2014-02-24 Thread W Chan
Will Mistral be supporting custom actions developed by users?  If so,
should the Actions module be refactored to individual plugins with a
dynamic process for action type mapping/lookup?

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 08:31 PM, Christopher Yeoh wrote:
> On Mon, 24 Feb 2014 18:17:34 -0500
> Sean Dague  wrote:
> 
>> On 02/24/2014 06:13 PM, Chris Friesen wrote:
>>> On 02/24/2014 04:59 PM, Sean Dague wrote:
>>>
 So, that begs a new approach. Because I think at this point even
 if we did put out Nova v3, there can never be a v4. It's too much,
 too big, and doesn't fit in the incremental nature of the project.
>>>
>>> Does it necessarily need to be that way though?  Maybe we bump the
>>> version number every time we make a non-backwards-compatible change,
>>> even if it's just removing an API call that has been deprecated for
>>> a while.
>>
>> So I'm not sure how this is different than the keep v2 and use
>> microversioning suggestion that is already in this thread.
> 
> For non backwards compatible changes I think the difference is in how
> the user accesses the API. When we only make major changes when bumping
> the major version, then they know for sure that if they access
> 
> /v3/foo
> 
> then their app will work. If /v3 doesn't exist then they know it's
> not supported.
> 
> Whereas if we make backwards incompatible changes within a major
> version then they have to start checking the microversion first.



A point of clarification on the micro-version idea.  IMO, the only
changes acceptable under such a scheme are backwards compatible ones.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 08:16 PM, Christopher Yeoh wrote:
> On Mon, 24 Feb 2014 16:20:12 -0800
> Dan Smith  wrote:
>>> So the deprecation message in the patch says:
>>>
>>>LOG.warning(_('XML support has been deprecated and will be
>>>  removed in the Juno release.'))
>>>
>>> perhaps that should be changed :-)
>>
>> Maybe, but I think we can continue with the plan to rip it out in
>> Juno. In the past when we've asked, there has been an overwhelming
>> amount of "meh" regarding removing it. We've considered it several
>> times, and we've now drawn a line in the sand. Personally, I'm fine
>> with doing this, and I think the support from the core team that +A'd
>> the heck out of it probably means that there is wide support for it.
> 
> Sure, I was using it as an example of where we have been willing to use
> a fixed deprecation schedule for the API. 

Well, I got a little carried away when I wrote that message.  I think we
should change it.  I don't think it's a responsible thing to do to
remove it unless we do another good round of assessing what the impact
would be and then only removing it when the impact is very minimal.

I'd really like some help from the public cloud providers to get some
insight into the percentage of their users that use XML.  We likely have
some work to do in Nova to make it easier to collect this data.

> If we look at the Havana
> user survey I think the results say:
> 
> http://www.slideshare.net/openstack/havana-survey-resultsfinal-19312081
> 
> JSON: 150
> XML: 62
> Both: 33
> 
> So that's around 40% of those surveyed who would be affected.
> So if we can draw a line in the sand based on those sorts of numbers,
> why is it impossible to do it for the V2 API as a whole? Albeit I
> think more than one cycle would be needed.
> 

Sadly, I think those results are near useless.  For example:

 - it doesn't differentiate based on the type of responder.  Is it
   deployers saying they deploy the XML API (they don't have a choice)?
   There is no way to know about cloud usage here.

 - is there possible confusion with EC2?  (EC2 is XML)

However, getting good data here *is* important.

Also note that none of the major SDKs use XML.  In addition to our own
client libraries, the following only use JSON:

Apache jclouds (Java)
openstack.net (C#)
pkgcloud (node.js)
php-opencloud (PHP)
Fog (Ruby)

>>> So either we can't fix them or in cases where we preserve backwards
>>> compatibility we end up with dual maintenance cost (our test load
>>> still doubles), but often having to be implemented in a way which
>>> costs us more in terms of readability because the code becomes
>>> spaghetti.
>>
>> I think it can be done without it turning into a mess. Non-trivial for
>> sure, but not impossible. And if not, I still expect most users would
>> prefer stability over purity.
> 
We're not choosing between stability and purity though. As I've argued
elsewhere it's not about 'purity', it's about usability. And say we do
manage to do it without it turning into a complete mess, we still have
> the dual maintenance cost which seems to be the primary concern about
> having both the V2 and V3 API released.
> 
> By supporting backwards incompatible changes inside the V2 API we're
> just hiding the fact that we in fact have two different APIs. We're not
> actually reducing the maintenance cost and it comes at increased user
> confusion, not less. In some areas of testing we'll be increasing
> the work needed to be done. E.g. we need to make sure we're doing
> something sane when someone passes say:
> 
> onSharedStorage = True
> on_shared_storage = False
> 
> should the old or new behaviour get priority? Or should we instead
> return a 400? We don't need to have that logic (or testing) when we
> cleanly separate the new API from the old one. Similar issues when
> passing a mixture of old and new formats. Should that be valid? If not
> we need to explicitly check and reject.

I think in cases like this, we should only have the old behavior.  I'm
not sure I see much value in making these changes in v2 at all.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Dan Smith
> onSharedStorage = True
> on_shared_storage = False

This is a good example. I'm not sure it's worth breaking users _or_
introducing a new microversion for something like this. This is
definitely what I would call a "purity" concern as opposed to "usability".

Things like the twenty different datetime formats we expose _do_ seem
worth the change to me as it requires the client to parse a bunch of
different formats depending on the situation. However, we could solve
that with very little code by just exposing all the datetimes again in a
proper format:

 {
  "updated_at": "%(random_weirdo)s",
  "updated_at_iso": "%(isotime)s",
 }

Doing the above is backwards compatible and doesn't create code
organizations based on any sort of pasta metaphor. If we introduce a
discoverable version tag so the client knows if they will be available,
I think we're good.
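
As a rough sketch (the helper name is made up; only the two keys from the
example above are assumed), the server side could be as small as:

    # Sketch: emit both the legacy datetime string and an ISO 8601 twin,
    # so old clients keep working while new clients read the *_iso key.
    def serialize_updated_at(updated_at, legacy_format):
        """Return both representations of one datetime for a response."""
        return {
            'updated_at': updated_at.strftime(legacy_format),  # today's format
            'updated_at_iso': updated_at.isoformat(),          # ISO 8601
        }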

URL inconsistencies seem "not worth the trouble" and I tend to think
that the "server" vs. "instance" distinction probably isn't either, but
I guess I'm willing to consider it.

Personally, I would rather do what we can/need in order to provide
features in a compatible way, fix real functional issues (like the
datetimes), and not ask users to port to a new API to clear up a bunch
of CamelCase inconsistencies. Just MHO.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-02-24 Thread Georgy Okrokvertskhov
Hi Thierry,


Let me clarify the situation regarding overlap with existing programs and
projects. First of all, I would like to separate the question of which
program Murano as a project can fit into from the question of any overlap
with existing projects in the official programs.


We position Application Catalog as a project that provides functionality
for application publishing, distribution and management. What we were
suggesting is that Murano as a project might fit in a Catalog program which
technically does not exist yet, but this could possibly be one of the ways
the current Images program might evolve.


On the project level, I don't see any overlap with existing Glance project
functionality. The Murano team is actively working with the Glance team to
define a roadmap towards more generic metadata repository functionality for
storing metadata not only for images but for other artifacts like application
packages, Heat templates etc. Once this roadmap is realized, Murano would
use a Glance repository for storing metadata.


Let me also explain why we listed Orchestration program as a possible place
for Murano. The Orchestration mission states that the goal "is to create a
human- and machine-accessible service for managing the entire lifecycle of
infrastructure and applications within OpenStack clouds". An application
catalog provides self-service capabilities for a cloud user to manage
applications on top of the cloud. In this form, the mission of the
Orchestration program can be applied to the Murano project.


The fact that Murano fits the Orchestration mission does not mean that
there is an overlap with existing projects in this program. Murano uses
Heat to perform actual deployment. In this sense Murano does not deploy
most things directly. Murano uses application definition to generate a Heat
template from Heat template snippets. A good analogy here is the TripleO
project which combines Heat templates based on desired OpenStack
configuration and uses Heat to perform actual work.


The key functionality in Murano is an application package definition. An
application package consists of a UI definition, metadata to control its
appearance in the Catalog, requirements which help the Catalog find
required or dependent applications, rules to control Heat template
definitions from snippets and scripts which are part of application
packages too. An essential requirement is to keep the Murano project whole,
as it covers all aspects of working with an application, starting from UI
appearance and ending with controlling a Heat template-based deployment.


I also wouldn't completely disregard an option to create a new program
combining a few projects to cover aspects of application management.


As you can see, this is a complicated topic with a number of possible
solutions. What the Murano team is seeking is feedback from the community
and the TC on the most appropriate way to structure the governance
model for the project.

Thanks
Georgy


On Mon, Feb 24, 2014 at 2:24 AM, Thierry Carrez wrote:
>
> Mark Washenberger wrote:
> > Prior to this email, I was imagining that we would expand the Images
> > program to go beyond storing just block device images, and into more
> > structured items like whole Nova instance templates, Heat templates, and
> > Murano packages. In this scheme, Glance would know everything there is
> > to know about a resource--its type, format, location, size, and
> > relationships to other resources--but it would not know or offer any
> > links for how a resource is to be used.
>
> I'm a bit uncomfortable as well. Part of our role at the Technical
> Committee is to make sure additions do not overlap in scope and make
> sense as a whole.
>
> Murano seems to cover two functions. The first one is publishing,
> cataloging and discovering software stacks. The second one is to deploy
> those software stacks and potentially manage their lifecycle.
>
> In the OpenStack "integrated" release we already have Glance as a
> publication/catalog/discovery component and Heat as the workload
> orchestration end. Georgy clearly identified those two facets, since the
> incubation request lists those two programs as potential homes for Murano.
>
> The problem is, Orchestration doesn't care about the Catalog part of
> Murano, and Glance doesn't care about the Orchestration part of Murano.
> Murano spans the scope of two established programs. It's not different
> enough to really warrant its own program, and it's too monolithic to fit
> in our current landscape.
>
> I see two ways out: Murano can continue to live as a separate
> application that lives on top of OpenStack and consumes various
> OpenStack components. Or its functionality can be split and subsumed by
> Glance and Heat, with Murano developers pushing it there. There seems to
> be interest in both those programs to add features that Murano covers.
> The question is, could we replicate Murano's featureset completely in
> those existing components ? Or is there anything Murano-unique that
> 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 18:17:34 -0500
Sean Dague  wrote:

> On 02/24/2014 06:13 PM, Chris Friesen wrote:
> > On 02/24/2014 04:59 PM, Sean Dague wrote:
> > 
> >> So, that begs a new approach. Because I think at this point even
> >> if we did put out Nova v3, there can never be a v4. It's too much,
> >> too big, and doesn't fit in the incremental nature of the project.
> > 
> > Does it necessarily need to be that way though?  Maybe we bump the
> > version number every time we make a non-backwards-compatible change,
> > even if it's just removing an API call that has been deprecated for
> > a while.
> 
> So I'm not sure how this is different than the keep v2 and use
> microversioning suggestion that is already in this thread.

For non backwards compatible changes I think the difference is in how
the user accesses the API. When we only make major changes when bumping
the major version, then they know for sure that if they access

/v3/foo

then their app will work. If /v3 doesn't exist then they know it's
not supported.

Whereas if we make backwards incompatible changes within a major
version then they have to start checking the microversion first. And
if experience is anything to go with we end up with user code that gets
overly conservative about checking versions (eg checking against exact
versions or not working with later versions), "just in case".

Note that bumping the major version in the future does not
necessarily mean a rework of the magnitude that we have had for V3. The
V2->V3 transition is different because we want to change a *lot* of the
API.

We now have an architecture in the V3 API which is quite a bit more
flexible. So take a theoretical example where we wanted to change the
data returned from shelve in a backwards incompatible way (in practice
I don't think we'd bump a major version just for one change). We could
present a /v4 interface that was exactly the same as /v3 except for
what shelve provides and the only code duplication would be that
required for the new shelve functionality. All the "v3 plugins" would
load into the /v4 namespace. (It's most likely not worth doing this for
the v2/v3 transition because so much has changed and we can't retrofit
better input validation).

So those who want the deprecated behaviour continue to access
everything via /v3, those who want the new behaviour access it via /v4.
It's a clean delineation for users and they never accidentally get new
backwards incompatible behaviour through the old resource path. Either
it's there (and supported) or /v3 doesn't exist and they need to update
their app which if they don't use the shelve functionality is trivial
(just point at /v4 instead of /v3) which is really just the equivalent
of linking your program against libfoo.2.so instead of libfoo.1.so. And
we don't have extra test load except for the shelve delta because the
code is all exactly the same.
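
Purely as a sketch (none of these names are real Nova code), the shared
namespace idea is just:

    # Sketch: one plugin set serves both /v3 and /v4; only resources whose
    # behaviour changed (here, shelve) are overridden in the new version.
    class ShelveV3(object):
        def shelve(self, server_id):
            return {'status': 'SHELVED'}              # old response shape

    class ShelveV4(object):
        def shelve(self, server_id):
            return {'server': {'status': 'SHELVED'}}  # new, incompatible shape

    SHARED_PLUGINS = {'shelve': ShelveV3()}

    def namespace_for(major_version):
        """Return the controller map mounted at /v<major_version>."""
        plugins = dict(SHARED_PLUGINS)
        if major_version >= 4:
            plugins['shelve'] = ShelveV4()            # the only duplicated code
        return plugins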

For backwards compatible changes, microversions are certainly useful
though.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 16:20:12 -0800
Dan Smith  wrote:
> > So the deprecation message in the patch says:
> > 
> >LOG.warning(_('XML support has been deprecated and will be
> >  removed in the Juno release.'))
> > 
> > perhaps that should be changed :-)
> 
> Maybe, but I think we can continue with the plan to rip it out in
> Juno. In the past when we've asked, there has been an overwhelming
> amount of "meh" regarding removing it. We've considered it several
> times, and we've now drawn a line in the sand. Personally, I'm fine
> with doing this, and I think the support from the core team that +A'd
> the heck out of it probably means that there is wide support for it.

Sure, I was using it as an example of where we have been willing to use
a fixed deprecation schedule for the API. If we look at the Havana
user survey I think the results say:

http://www.slideshare.net/openstack/havana-survey-resultsfinal-19312081

JSON: 150
XML: 62
Both: 33

So that's around 40% of those surveyed who would be affected.
So if we can draw a line in the sand based on those sorts of numbers,
why is it impossible to do it for the V2 API as a whole? Albeit I
think more than one cycle would be needed.

> > So either we can't fix them or in cases where we preserve backwards
> > compatibility we end up with dual maintenance cost (our test load
> > still doubles), but often having to be implemented in a way which
> > costs us more in terms of readability because the code becomes
> > spaghetti.
> 
> I think it can be done without it turning into a mess. Non-trivial for
> sure, but not impossible. And if not, I still expect most users would
> prefer stability over purity.

We're not choosing between stability and purity though. As I've argued
elsewhere it's not about 'purity', it's about usability. And say we do
manage to do it without it turning into a complete mess, we still have
the dual maintenance cost which seems to be the primary concern about
having both the V2 and V3 API released.

By supporting backwards incompatible changes inside the V2 API we're
just hiding the fact that we in fact have two different APIs. We're not
actually reducing the maintenance cost and it comes at increased user
confusion, not less. In some areas of testing we'll be increasing
the work needed to be done. E.g. we need to make sure we're doing
something sane when someone passes say:

onSharedStorage = True
on_shared_storage = False

should the old or new behaviour get priority? Or should we instead
return a 400? We don't need to have that logic (or testing) when we
cleanly separate the new API from the old one. Similar issues when
passing a mixture of old and new formats. Should that be valid? If not
we need to explicitly check and reject.
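
i.e. something like the following guard (sketch only; webob's HTTPBadRequest
is what we already raise for 400s):

    # Sketch: reject requests that mix the old camelCase and the new
    # snake_case spelling of the same parameter instead of silently
    # picking a winner.
    import webob.exc

    def get_shared_storage_flag(body):
        old = 'onSharedStorage' in body
        new = 'on_shared_storage' in body
        if old and new:
            raise webob.exc.HTTPBadRequest(
                explanation='Use onSharedStorage or on_shared_storage, '
                            'not both')
        return body.get('on_shared_storage',
                        body.get('onSharedStorage', False))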

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Fixed recent gate issues

2014-02-24 Thread Alan Pevec
> https://review.openstack.org/74451 doesn't solve the issue completely, we have
> SKIP_EXERCISES=boot_from_volume,bundle,client-env,euca,swift,client-args
> but failure is now in Grenade's Javelin script:
>
> + swift upload javelin /etc/hosts
> ...(same Traceback)...
> [ERROR] /opt/stack/new/grenade/setup-javelin:151 Swift upload failed

What about just removing that test from Javelin in stable/havana?
https://review.openstack.org/76058

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hacking and PEP 257: Extra blank line at end of multi-line docstring

2014-02-24 Thread Ziad Sawalha
Seeking some clarification on the OpenStack hacking guidelines for multi-line
docstrings.

Q: In OpenStack projects, is a blank line before the triple closing quotes 
recommended (and therefore optional - this is what PEP-257 seems to suggest), 
required, or explicitly rejected (which could be one way to interpret the 
hacking guidelines since they omit the blank line).

This came up in a commit review, and here are some references on the topic:

Quoting PEP-257: “The BDFL [3] recommends inserting a blank line between the 
last paragraph in a multi-line docstring and its closing quotes, placing the 
closing quotes on a line by themselves. This way, Emacs' fill-paragraph command 
can be used on it.”

Sample from pep257 (with extra blank line):

def complex(real=0.0, imag=0.0):
"""Form a complex number.

Keyword arguments:
real -- the real part (default 0.0)
imag -- the imaginary part (default 0.0)

"""
if imag == 0.0 and real == 0.0: return complex_zero
...

The multi-line docstring example in 
http://docs.openstack.org/developer/hacking/ has no extra blank line before the 
ending triple-quotes:

"""A multi line docstring has a one-line summary, less than 80 characters.

Then a new paragraph after a newline that explains in more detail any
general information about the function, class or method. Example usages
are also great to have here if it is a complex class for function.

When writing the docstring for a class, an extra line should be placed
after the closing quotations. For more in-depth explanations for these
decisions see http://www.python.org/dev/peps/pep-0257/

If you are going to describe parameters and return values, use Sphinx, the
appropriate syntax is as follows.

:param foo: the foo parameter
:param bar: the bar parameter
:returns: return_type -- description of the return value
:returns: description of the return value
:raises: AttributeError, KeyError
"""

Regards,

Ziad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-02-24 Thread Jay Pipes
On Mon, 2014-02-24 at 11:24 +0100, Thierry Carrez wrote:
> Mark Washenberger wrote:
> > Prior to this email, I was imagining that we would expand the Images
> > program to go beyond storing just block device images, and into more
> > structured items like whole Nova instance templates, Heat templates, and
> > Murano packages. In this scheme, Glance would know everything there is
> > to know about a resource--its type, format, location, size, and
> > relationships to other resources--but it would not know or offer any
> > links for how a resource is to be used.
> 
> I'm a bit uncomfortable as well. Part of our role at the Technical
> Committee is to make sure additions do not overlap in scope and make
> sense as a whole.
> 
> Murano seems to cover two functions. The first one is publishing,
> cataloging and discovering software stacks. The second one is to deploy
> those software stacks and potentially manage their lifecycle.
> 
> In the OpenStack "integrated" release we already have Glance as a
> publication/catalog/discovery component and Heat as the workload
> orchestration end. Georgy clearly identified those two facets, since the
> incubation request lists those two programs as potential homes for Murano.
> 
> The problem is, Orchestration doesn't care about the Catalog part of
> Murano, and Glance doesn't care about the Orchestration part of Murano.
> Murano spans the scope of two established programs. It's not different
> enough to really warrant its own program, and it's too monolithic to fit
> in our current landscape.
> 
> I see two ways out: Murano can continue to live as a separate
> application that lives on top of OpenStack and consumes various
> OpenStack components. Or its functionality can be split and subsumed by
> Glance and Heat, with Murano developers pushing it there. There seems to
> be interest in both those programs to add features that Murano covers.

There is a third component: the UI pieces. This naturally would belong
in the UX program.

> The question is, could we replicate Murano's featureset completely in
> those existing components ? Or is there anything Murano-unique that
> wouldn't fit in existing projects ?

Outside of its innovative UX form-construction component, the biggest
thing that makes Murano unique (IMO) is its use of flow control
constructs in its DSL. If I'm not mistaken, the Heat community has made
it clear that they do not intend to introduce flow control constructs
into HOT, and so there would be this piece that would need to live
outside of Heat, but still in the Orchestration program. So, I believe
that there is still a compelling reason for Murano to exist as a
separate project within the Orchestration program, with a mission to
provide a higher level DSL for deployment of complex application
topologies that includes flow control constructs.

And then someone will ask "well, isn't that partly what Solum is
designed for?". And we're back to a similar discussion ;)

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] tox error

2014-02-24 Thread Randy Tuttle
When I see a fox, it is usually running b-( = ))

Sent from my iPhone

> On Feb 24, 2014, at 6:08 PM, Shixiong Shang  
> wrote:
> 
> Hi, guys:
> 
> I ran into this error while running fox... but it gave me this error... Seems
> like it is related to Neutron LB. Have you seen this issue before? If so, how
> do I fix it?
> 
> Thanks!
> 
> Shixiong
> 
> 
> shshang@net-ubuntu2:~/github/neutron$ tox -v -e py27
> ……...
> tests.unit.test_wsgi.XMLDictSerializerTest.test_xml_with_utf8\xa2\xbe\xf7u\xb3
>  `@d\x17text/plain;charset=utf8\rimport 
> errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\', 
> stderr=None
> error: testr failed (3)
> ERROR: InvocationError: '/home/shshang/github/neutron/.tox/py27/bin/python -m 
> neutron.openstack.common.lockutils python setup.py testr --slowest 
> --testr-args='
> 
>  summary 
> 
> ERROR:   py27: commands failed
> 
> 
> (py27)shshang@net-ubuntu2:~/github/neutron/.tox/py27/bin$ python
> Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
> [GCC 4.8.1] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> ImportError: No module named 
> errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Dan Smith
> So the deprecation message in the patch says:
> 
>LOG.warning(_('XML support has been deprecated and will be
>  removed in the Juno release.'))
> 
> perhaps that should be changed :-)

Maybe, but I think we can continue with the plan to rip it out in Juno.
In the past when we've asked, there has been an overwhelming amount of
"meh" regarding removing it. We've considered it several times, and
we've now drawn a line in the sand. Personally, I'm fine with doing
this, and I think the support from the core team that +A'd the heck out
of it probably means that there is wide support for it.

> In terms of user facing changes we can't do a whole lot - because they
> are inherently changes in how users communicate with the API. And not just
> in terms of parameter names, but where and how they access the
> functionality (eg url paths change). In the past we've made
> mistakes as to where or how functionality should appear, leading to
> weird inconsistencies.
> 
> So either we can't fix them or in cases where we preserve backwards
> compatibility we end up with dual maintenance cost (our test load
> still doubles), but often having to be implemented in a way which costs
> us more in terms of readability because the code becomes spaghetti.

I think it can be done without it turning into a mess. Non-trivial for
sure, but not impossible. And if not, I still expect most users would
prefer stability over purity.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 15:54:42 -0800
Morgan Fainberg  wrote:

> Yes, micro-versioning is most likely a better approach, and I’m a fan
> of using that to gain the benefits of V3 without changing for the
> sake of change. Ideally in a versioned API we should be versioning a
> smaller surface area than “THE WHOLE API” if at all possible. If we
> kept the old “version” around and deprecated it (keep it for 2
> cycles, when it goes away the non-versioned call says “sorry, version
> unsupported”?, and it can continue to be versioned as needed) and
> continue to increment the versions as appropriate with changes, we
> will be holding true to our contract. The benefits of V3 can still be
> reaped, knowing where the API should move towards.

So we have a very large number of changes we want to make to the V2
API, and we've already done the work (including adding versioning to
make backwards compatible changes easier in the future) in the V3 API.

How is backporting all those changes to V2, marking the old behaviour as
deprecated and then removing them in 2 cycles (forcing them off the
old behaviour) any different from releasing the V3 API, marking the V2
as deprecated and removing it in the same timeframe? Except that the
former involves a lot more work?

Where there is compatibility between the V2 and V3 API the only change
which is required is accessing it via /v3 instead of /v2/tenant_id

> don’t work for large surface area projects). I still stand by my
> statement that we can’t (shouldn’t knowingly) break the contract, we
> also can’t assume people will move to V3 (if we launch it) in a
> reasonable timeframe if the new API doesn’t really justify a massive
> re-write. 

If we can't assume people will make the changes to move to V3, then how
can we assume they'll make the necessary changes with the
deprecation-in-place model when the amount of change required is
basically the same if we want to make the same improvements? 

Also, in terms of consistency of the API, we don't actually reap most of
the advantage until all of the changes have been made, because until
that point we still look like an inconsistent API to users.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Local vs. Scalable Engine

2014-02-24 Thread W Chan
As I understand, the local engine runs the task immediately whereas the
scalable engine sends it over the message queue to one or more executors.

In what circumstances would we see a Mistral user using a local engine
(other than testing) instead of the scalable engine?

If we are keeping the local engine, can we move the abstraction to the
executor instead, having drivers for a local executor and remote executor?
 The message flow from the engine to the executor would be consistent, it's
just where the request will be processed.

And since we are porting to oslo.messaging, there's already a fake driver
that allows for an in process Queue for local execution.  The local
executor can be a derivative of that fake driver for non-testing purposes.
 And if we don't want to use an in process queue here to avoid the
complexity, we can have the client side module of the executor determine
whether to dispatch to a local executor vs. RPC call to a remote executor.
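
For illustration, a minimal sketch against the oslo.messaging API (the topic
name is made up): the client code is identical whether the transport is the
in-process fake driver or a real broker; only the URL changes.

    # Sketch: swap 'fake://' for 'rabbit://...' and nothing else changes.
    from oslo import messaging
    from oslo.config import cfg

    transport = messaging.get_transport(cfg.CONF, url='fake://')
    target = messaging.Target(topic='mistral_executor')
    client = messaging.RPCClient(transport, target)
    # client.cast(ctxt, 'run_task', task_id='...') then dispatches either in
    # process or over the wire, depending on the transport.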

Thoughts?

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo][CI] check-tripleo outage

2014-02-24 Thread Robert Collins
Today we had an outage of the tripleo test cloud :(.

tl;dr:
 - we were down for 14 hours
 - we don't know the fundamental cause
 - infra were not inconvenienced - yaaay
 - its all ok now.

Read on for more information, what little we have.

We don't know exactly why it happened yet, but the control plane
dropped off the network. The console showed the node still had a correct
networking configuration, including openflow rules and bridges. The
node was arpingable, and could arping out, but could not be pinged.
Tcpdump showed the node sending a ping reply on its raw ethernet
device, but other machines on the same LAN did not see the packet.

From syslog we can see
Feb 24 06:28:31 ci-overcloud-notcompute0-gxezgcvv4v2q kernel:
[1454708.543053] hpsa :06:00.0: cmd_alloc returned NULL!
events

around the time frame that the drop-off would have happened, but they
go back many hours before and after that.

After exhausting everything that came to mind we rebooted the machine,
which promptly spat an NMI trace into the console:

[1502354.552431]  [] rcu_eqs_enter_common.isra.43+0x208/0x220
[1502354.552491]  [] rcu_irq_exit+0x5d/0x90
[1502354.552549]  [] irq_exit+0x80/0xc0
[1502354.552605]  [] smp_apic_timer_interrupt+0x45/0x60
[1502354.552665]  [] apic_timer_interrupt+0x6d/0x80
[1502354.552722]  [] ? panic+0x193/0x1d7
[1502354.552880]  [] hpwdt_pretimeout+0xe5/0xe5 [hpwdt]
[1502354.552939]  [] nmi_handle.isra.3+0x88/0x180
[1502354.552997]  [] do_nmi+0x191/0x330
[1502354.553053]  [] end_repeat_nmi+0x1e/0x2e
[1502354.553111]  [] ? intel_idle+0xc2/0x120
[1502354.553168]  [] ? intel_idle+0xc2/0x120
[1502354.553226]  [] ? intel_idle+0xc2/0x120
[1502354.553282]  <>  [] cpuidle_enter_state+0x40/0xc0
[1502354.553408]  [] cpuidle_idle_call+0xc9/0x210
[1502354.553466]  [] arch_cpu_idle+0xe/0x30
[1502354.553523]  [] cpu_startup_entry+0xe5/0x280
[1502354.553581]  [] rest_init+0x77/0x80
[1502354.553638]  [] start_kernel+0x40a/0x416
[1502354.553695]  [] ? repair_env_string+0x5c/0x5c
[1502354.553753]  [] ? early_idt_handlers+0x120/0x120
[1502354.553812]  [] x86_64_start_reservations+0x2a/0x2c
[1502354.553871]  [] x86_64_start_kernel+0x108/0x117
[1502354.553929] ---[ end trace 166b62e89aa1f54b ]---

'yay'. After that, and a power reset in the console, it came up OK; it just
needed a minor nudge to refresh its Heat configuration and we were up
and running again.

For some reason, neutron decided to rename its agents at this point
and we had to remove and reattach the l3 agent before VM connectivity
was restored.
https://bugs.launchpad.net/tripleo/+bug/1284354

However, about 90 nodepool nodes were stuck in states like ACTIVE
deleting, and did not clear until we did a rolling restart of every
nova compute process.
https://bugs.launchpad.net/tripleo/+bug/1284356

Cheers,
Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 17:47:51 -0500
Russell Bryant  wrote:
> On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
> >>> - Whilst we have existing users of the API we also have a lot more
> >>>   users in the future. It would be much better to allow them to
> >>> use the API we want to get to as soon as possible, rather than
> >>> trying to evolve the V2 API and forcing them along the transition
> >>> that they could otherwise avoid.
> >>
> >> I'm not sure I understand this.  A key point is that I think any
> >> evolving of the V2 API has to be backwards compatible, so there's
> >> no forcing them along involved.
> > 
> > Well other people have been suggesting we can just deprecate parts
> > (be it proxying or other bits we really don't like) and then make
> > the backwards incompatible change. I think we've already said we'll
> > do it for XML for the V2 API and force them off to JSON.
> 
> Well, marking deprecated is different than removing it.  We have to
> get good data that shows that it's not actually being used before can
> actually remove it.  Marking it deprecated at least signals that we
> don't consider it actively maintained and that it may go away in the
> future.

So the deprecation message in the patch says:

   LOG.warning(_('XML support has been deprecated and will be
 removed in the Juno release.'))

perhaps that should be changed :-)

> I also consider the XML situation a bit different than changing
> specifics of a given API extension, for example.  We're talking about
> potentially removing an entire API vs changing an API while it's in
> use.

That's sort of true, but existing users will have to move to JSON,
which I think would be a lot more work than making someone move
from V2 to V3.

> > Ultimately I think what this would mean is punting any significant
> > API improvements several years down the track and effectively
> > throwing away a lot of the worked we've done in the last year on
> > the API
> 
> One of the important questions is how much improvement can we make to
> v2 without breaking backwards compatibility?
> 
> What can we *not* do in a backwards compatible manner?  How much does
> it hurt to give those things up?  How does that compare to the cost
> of dual maintenance?

In terms of user facing changes we can't do a whole lot - because they
are inherently changes in how users communicate with the API. And not just
in terms of parameter names, but where and how they access the
functionality (eg url paths change). In the past we've made
mistakes as to where or how functionality should appear, leading to
weird inconsistencies.

So either we can't fix them or in cases where we preserve backwards
compatibility we end up with dual maintenance cost (our test load
still doubles), but often having to be implemented in a way which costs
us more in terms of readability because the code becomes spaghetti.

If it was just a handful changes then I'd agree a major version bump is
not necessary - and we wouldn't have started going down this path over
a year ago. But the user facing improvements are pretty much pervasive
through the API (with the exception of the more recent extensions where
we've got better at enforcing a consistent and sane API style).

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Morgan Fainberg
Yes, micro-versioning is most likely a better approach, and I’m a fan of using 
that to gain the benefits of V3 without changing for the sake of change. 
Ideally in a versioned API we should be versioning a smaller surface area than 
“THE WHOLE API” if at all possible. If we kept the old “version” around and 
deprecated it (keep it for 2 cycles, when it goes away the non-versioned call 
says “sorry, version unsupported”?, and it can continue to be versioned as 
needed) and continue to increment the versions as appropriate with changes, we 
will be holding true to our contract. The benefits of V3 can still be reaped, 
knowing where the API should move towards.

Don’t try and take on a giant task to make a “new API version” at once. 

We can maintain the contract and still progress the APIs forward. And to Sean’s 
comment that the V2 API hasn’t been as “stable in the traditional sense” in the 
past, I think we can forgive past issues since we now have the framework to 
show us when/if things end up being incompatible (and I agree with the fact 
that big-bang changes don’t work for large surface area projects). I still 
stand by my statement that we can’t (shouldn’t knowingly) break the contract, 
we also can’t assume people will move to V3 (if we launch it) in a reasonable 
timeframe if the new API doesn’t really justify a massive re-write. Maintaining 
2, nearly identical, APIs is going to be problematic for both the developers 
and deployers. In my view (as a deployer, consumer, and developer) this means 
we should keep V2, and work on benefiting from the lessons learned in 
developing V3 while moving to correct the issues we have in a maintainable / 
friendly way (to developers, deployers, and consumers).
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On February 24, 2014 at 15:22:01, Sean Dague (s...@dague.net) wrote:

On 02/24/2014 06:13 PM, Chris Friesen wrote:  
> On 02/24/2014 04:59 PM, Sean Dague wrote:  
>  
>> So, that begs a new approach. Because I think at this point even if we  
>> did put out Nova v3, there can never be a v4. It's too much, too big,  
>> and doesn't fit in the incremental nature of the project.  
>  
> Does it necessarily need to be that way though? Maybe we bump the  
> version number every time we make a non-backwards-compatible change,  
> even if it's just removing an API call that has been deprecated for a  
> while.  

So I'm not sure how this is different than the keep v2 and use  
microversioning suggestion that is already in this thread.  

-Sean  

--  
Sean Dague  
Samsung Research America  
s...@dague.net / sean.da...@samsung.com  
http://dague.net  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Chris Friesen

On 02/24/2014 05:17 PM, Sean Dague wrote:

On 02/24/2014 06:13 PM, Chris Friesen wrote:

On 02/24/2014 04:59 PM, Sean Dague wrote:


So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project.


Does it necessarily need to be that way though?  Maybe we bump the
version number every time we make a non-backwards-compatible change,
even if it's just removing an API call that has been deprecated for a
while.


So I'm not sure how this is different than the keep v2 and use
microversioning suggestion that is already in this thread.


It differs in that it allows the user to determine whether the changes 
are forwards or backwards compatible.  For instance, you might use an 
API version that looks like {major}.{minor}.{bugfix} with the following 
rules:


A new bugfix release is both forwards and backwards compatible.

A new minor release is backwards compatible. So code written against 
version x.y will work with version x.y+n.  New minor releases would 
generally add functionality.


A new major release is not necessarily backwards compatible.  Code 
written against version x may not work with version x+1.  New major 
releases remove or change functionality.
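
Under those rules a client-side compatibility check reduces to a couple of 
lines (sketch):

    # Sketch: code written against client (major, minor) works with any
    # server of the same major version and an equal-or-greater minor.
    def compatible(client_version, server_version):
        c_major, c_minor, _ = client_version
        s_major, s_minor, _ = server_version
        return c_major == s_major and s_minor >= c_minor

    assert compatible((2, 3, 0), (2, 5, 1))       # minor bump: still fine
    assert not compatible((2, 3, 0), (3, 0, 0))   # major bump: may break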


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Sean Dague
On 02/24/2014 06:31 PM, Jay Pipes wrote:
> On Mon, 2014-02-24 at 17:59 -0500, Sean Dague wrote:
>> So we do really need to be pragmatic here as well. Because our
>> experience with v3 so far has been doing a major version bump on Nova is
>> a minimum of 2 years, and that doesn't reach a completion point that
>> anyone's happy with to switch over.
> 
> I don't see why version 4 needs to repeat the timeline of v3 development.
> I'm not saying it isn't possible, just that one doesn't necessarily lead
> to the other.
> 
> Best,
> -jay

I guess having watched this evolve, it's not clear to me how to do it in
a shorter time frame. Maybe we just made tons of mistakes in the
process, but it seems like anything large like this is really 3 - 4
cycles. That was the cells timeline, that's been the baremetal story
timeline. Even the scheduler forklift, that everyone thought could
happen in a single cycle, is probably going to be 3 cycles start to finish.

Ways in which we could do this quicker would be appreciated. Though I do
like getting a few hours of sleep a night.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Jay Pipes
On Tue, 2014-02-25 at 09:11 +1030, Christopher Yeoh wrote:
> On Mon, 24 Feb 2014 11:48:41 -0500
> Jay Pipes  wrote:
> > It's not about "forcing" providers to support all of the public API.
> > It's about providing a single, well-documented, consistent HTTP REST
> > API for *consumers* of that API. Whether a provider chooses to, for
> > example, deploy with nova-network or Neutron, or Xen vs. KVM, or
> > support block migration for that matter *should have no effect on the
> > public API*. The fact that those choices currently *do* effect the
> > public API that is consumed by the client is a major indication of
> > the weakness of the API.
> 
> So for the nova-network/neutron issue it's more a result of either
> neutron support never being implemented or new nova-network features
> being added without corresponding neutron support. I agree it's not a
> good place to be in, but isn't really relevant to whether we have
> extensions or not.

OK, fair enough.

> Similarly with a Xen vs KVM situation I don't think it's an extension-
> related issue. In V2 we have features in *core* which are only supported
> by some virt backends. It perhaps comes down to not being willing to
> say either that we will force all virt backends to support all features
> in the API or they don't get in the tree. Or alternatively be willing
> to say no to any feature in the API which can not be currently
> implemented in all virt backends. The former greatly increases the
> barrier to getting a hypervisor included, the latter restricts Nova
> development to the speed of the slowest developing and least
> mature hypervisor supported.

Actually, the problem is not feature parity. The problem lies where two
drivers implement the same or similar functionality, but the public API
for a user to call the functionality is slightly different depending on
which driver is used by the deployer.

There's nothing wrong at all (IMO) in having feature disparity amongst
drivers. However, problems arise when the public API does any of the
following:

 * exposes two ways of doing the same thing, depending on underlying
driver
 * exposes things in a way that is specific to one particular
vendor/driver to the exclusion of others
 * exposes things that should not be exposed to the end-user or tenant,
but that belong in the realm of the deployer

See my original response on the ML about that for examples of all of the
above from the current Nova API(s).

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Jay Pipes
On Mon, 2014-02-24 at 17:59 -0500, Sean Dague wrote:
> So we do really need to be pragmatic here as well. Because our
> experience with v3 so far has been doing a major version bump on Nova is
> a minimum of 2 years, and that doesn't reach a completion point that
> anyone's happy with to switch over.

I don't see why version 4 needs to repeat the timeline of v3 development.
I'm not saying it isn't possible, just that one doesn't necessarily lead
to the other.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Jay Pipes
On Mon, 2014-02-24 at 14:01 -0800, Morgan Fainberg wrote:
> TL;DR, “don’t break the contract”. If we are seriously making
> incompatible changes (and we will be regardless of the direction) the
> only reasonable option is a new major version.

100% agreement.

Note that when I asked Chris when we would tackle the issue of
extensions, I was definitely looking at the next major version of the
Compute API, not v3 or v2. Sorry if I muddied the conversation in that
regard.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] tox error (errors4neutron)

2014-02-24 Thread Shixiong Shang
Hi, guys:

I ran into this error while running tox... Seems like it is related to Neutron
LB. Have you seen this issue before? If so, how do I fix it?

Thanks!

Shixiong


shshang@net-ubuntu2:~/github/neutron$ tox -v -e py27
……...
tests.unit.test_wsgi.XMLDictSerializerTest.test_xml_with_utf8\xa2\xbe\xf7u\xb3 
`@d\x17text/plain;charset=utf8\rimport 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\', 
stderr=None
error: testr failed (3)
ERROR: InvocationError: '/home/shshang/github/neutron/.tox/py27/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args='

 summary 

ERROR:   py27: commands failed


(py27)shshang@net-ubuntu2:~/github/neutron/.tox/py27/bin$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
ImportError: No module named 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Sean Dague
On 02/24/2014 06:13 PM, Chris Friesen wrote:
> On 02/24/2014 04:59 PM, Sean Dague wrote:
> 
>> So, that begs a new approach. Because I think at this point even if we
>> did put out Nova v3, there can never be a v4. It's too much, too big,
>> and doesn't fit in the incremental nature of the project.
> 
> Does it necessarily need to be that way though?  Maybe we bump the
> version number every time we make a non-backwards-compatible change,
> even if it's just removing an API call that has been deprecated for a
> while.

So I'm not sure how this is different than the keep v2 and use
microversioning suggestion that is already in this thread.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Chris Friesen

On 02/24/2014 04:59 PM, Sean Dague wrote:


So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project.


Does it necessarily need to be that way though?  Maybe we bump the 
version number every time we make a non-backwards-compatible change, 
even if it's just removing an API call that has been deprecated for a while.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] tox error

2014-02-24 Thread Shixiong Shang
Hi, guys:

I ran into this error while running fox... but it gave me this error... Seems like
it is related to Neutron LB. Have you seen this issue before? If so, how do I fix
it?

Thanks!

Shixiong


shshang@net-ubuntu2:~/github/neutron$ tox -v -e py27
……...
tests.unit.test_wsgi.XMLDictSerializerTest.test_xml_with_utf8\xa2\xbe\xf7u\xb3 
`@d\x17text/plain;charset=utf8\rimport 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent\x85\xc5\x1a\\', 
stderr=None
error: testr failed (3)
ERROR: InvocationError: '/home/shshang/github/neutron/.tox/py27/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args='

 summary 

ERROR:   py27: commands failed


(py27)shshang@net-ubuntu2:~/github/neutron/.tox/py27/bin$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named 
errors4neutron.tests.unit.linuxbridge.test_lb_neutron_agent
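
One way to surface the real failure, assuming the module id buried in that
subunit noise is neutron.tests.unit.linuxbridge.test_lb_neutron_agent (the
"errors4" prefix looks like stream framing around an "import errors" marker
rather than part of the module path), is to import that module with the tox
virtualenv's interpreter:

# Hedged sketch: run this with .tox/py27/bin/python so the same
# dependencies are on sys.path that testr uses; the resulting traceback
# should show the real import error.
import importlib

importlib.import_module('neutron.tests.unit.linuxbridge.test_lb_neutron_agent')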





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:49 PM, Michael Davies wrote:
> On Tue, Feb 25, 2014 at 8:31 AM, Morgan Fainberg wrote:
> 
> On the topic of backwards incompatible changes:
> 
> I strongly believe that breaking current clients that use the APIs
> directly is the worst option possible. All the arguments about
> needing to know which APIs work based upon which backend drivers are
> used are all valid, but making an API incompatible change when we’ve
> made the contract that the current API will be stable is a very bad
> approach. Breaking current clients isn’t just breaking “novaclient",
> it would also break any customers that are developing directly
> against the API. In the case of cloud deployments with real-world
> production loads on them (and custom development around the APIs)
> upgrading between major versions is already difficult to orchestrate
> (timing, approvals, etc), if we add in the need to re-work large
> swaths of code due to API changes, it will become even more onerous
> and perhaps drive deployers to forego the upgrades in favor of stability.
> 
> If the perception is that we don’t have stable APIs (especially when
> we are ostensibly versioning them), driving adoption of OpenStack
> becomes significantly more difficult. Difficulty in driving further
> adoption would be a big negative to both the project and the community.
> 
> TL;DR, “don’t break the contract”. If we are seriously making
> incompatible changes (and we will be regardless of the direction)
> the only reasonable option is a new major version
> 
> 
> I'm absolutely in agreement here - thanks Morgan for raising this.
> 
> Changing the API on consumers means forcing them to re-evaluate their
> options: "Should I fix my usage of the API, or is it time to try another
> solution?  The implementation cost is mostly the same".  We can't assume
> that API breakages won't lead to customers leaving.  It's worth noting
> that competing cloud APIs are inconsistent, and frankly awful.  But they
> don't change because it's all about the commercial interest of retaining
> customers and supporting a cornucopia of SDKs.
> 
> Any changes to a versioned API need to be completely backwards
> compatible, and we shouldn't assume changes aren't going to break things
> - we should test the crap out of them so as to ensure this is the case.
> Or put another way, any time we touch a stable API, we need to be
> extremely careful.
> 
> If we want new features, if we want to clean up existing interfaces,
> it's far better to move to a new API version (even with the maintenance
> burden of supporting another API) than try and bolt something on the
> side.  This includes improving input validation, because we should not
> be changing the functionality presented to end-users on a stable API,
> even if it's for their own good.  What it comes down to is strongly
> supporting the consumers of our software.  We need to make things easy
> for those who support and develop against the APIs.

Let's please avoid too much violent agreement on this.  There seems to
have been some confusion spurred by Morgan's post.

I don't think *anybody* is in favor of non backwards compatible changes
to an existing API.  The short version of choices discussed in this thread:

1) Continue developing v3 (non backwards compat changes until we call it
stable).  Maintain v2 and v3 until we reach a point that we can drop v2
(there is debate about when that could be)

2) Focus on v2 only, and figure out ways to add features and evolve it
**but only in backwards compatible ways**

3) Some other possible view of a way forward that hasn't been brought up
yet, but I'm totally open to ideas

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Sean Dague
It's really easy to just say "don't break the contract." Until we got
the level of testing that we currently have in Tempest, the contract was
broken pretty regularly. I'm sure there are still breaks in it around
the edges where we aren't clamping down on people today.

So the history of v2 is far from being a stable API in the traditional
sense.

Which isn't to say we're trying to go and make the whole thing fluid.
However there has to be a path forward for incremental improvement,
because there are massive shortcomings in the existing API.

While a big bang approach might work for smaller interfaces, the Nova
API surface is huge. So huge, it's not even fully documented. Which
means we're at a state where you aren't implementing to an API, you are
implementing to an implementation. And if you look at HP and RAX you'll
find enough differences to make you scratch your head a bunch. And
that's only 2 data points. I'm sure the private cloud products have all
kinds of funkiness in them.

So we do really need to be pragmatic here as well. Because our
experience with v3 so far has been that doing a major version bump on Nova
takes a minimum of 2 years, and that doesn't reach a completion point that
anyone's happy with to switch over.

So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project. So whatever
gets decided about v3, the thing that's important to me is a sane way to
be able to add backwards compatible changes (which we actually don't
have today, and I don't think any other service in OpenStack does
either), as well as a mechanism for deprecating parts of the API, with some
future decision about whether removing them makes sense.

-Sean

On 02/24/2014 05:01 PM, Morgan Fainberg wrote:
> On the topic of backwards incompatible changes:
> 
> I strongly believe that breaking current clients that use the APIs
> directly is the worst option possible. All the arguments about needing
> to know which APIs work based upon which backend drivers are used are
> all valid, but making an API incompatible change when we’ve made the
> contract that the current API will be stable is a very bad approach.
> Breaking current clients isn’t just breaking “novaclient", it would also
> break any customers that are developing directly against the API. In the
> case of cloud deployments with real-world production loads on them (and
> custom development around the APIs) upgrading between major versions is
> already difficult to orchestrate (timing, approvals, etc), if we add in
> the need to re-work large swaths of code due to API changes, it will
> become even more onerous and perhaps drive deployers to forego the
> upgrades in favor of stability.
> 
> If the perception is that we don’t have stable APIs (especially when we
> are ostensibly versioning them), driving adoption of OpenStack becomes
> significantly more difficult. Difficulty in driving further adoption
> would be a big negative to both the project and the community.
> 
> TL;DR, “don’t break the contract”. If we are seriously making
> incompatible changes (and we will be regardless of the direction) the
> only reasonable option is a new major version.
> 
> *—*
> *Morgan Fainberg*
> Principal Software Engineer
> Core Developer, Keystone
> m...@metacloud.com 
> 
> 
> On February 24, 2014 at 10:16:31, Matt Riedemann
> (mrie...@linux.vnet.ibm.com ) wrote:
> 
>>
>>
>> On 2/24/2014 10:13 AM, Russell Bryant wrote:
>> > On 02/24/2014 01:50 AM, Christopher Yeoh wrote:
>> >> Hi,
>> >>
>> >> There has recently been some speculation around the V3 API and whether
>> >> we should go forward with it or instead backport many of the changes
>> >> to the V2 API. I believe that the core of the concern is the extra
>> >> maintenance and test burden that supporting two APIs means and the
>> >> length of time before we are able to deprecate the V2 API and return
>> >> to maintaining only one (well two including EC2) API again.
>> >
>> > Yes, this is a major concern.  It has taken an enormous amount of work
>> > to get to where we are, and v3 isn't done.  It's a good time to
>> > re-evaluate whether we are on the right path.
>> >
>> > The more I think about it, the more I think that our absolute top goal
>> > should be to maintain a stable API for as long as we can reasonably do
>> > so.  I believe that's what is best for our users.  I think if you gave
>> > people a choice, they would prefer an inconsistent API that works for
>> > years over dealing with non-backwards compatible jumps to get a nicer
>> > looking one.
>> >
>> > The v3 API and its unit tests are roughly 25k lines of code.  This also
>> > doesn't include the changes necessary in novaclient or tempest.  That's
>> > just *our* code.  It explodes out from there into every SDK, and then
>> > end user apps.  This should not be taken li

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Michael Davies
On Tue, Feb 25, 2014 at 8:31 AM, Morgan Fainberg  wrote:

> On the topic of backwards incompatible changes:
>
> I strongly believe that breaking current clients that use the APIs
> directly is the worst option possible. All the arguments about needing to
> know which APIs work based upon which backend drivers are used are all
> valid, but making an API incompatible change when we've made the contract
> that the current API will be stable is a very bad approach. Breaking
> current clients isn't just breaking "novaclient", it would also break any
> customers that are developing directly against the API. In the case of
> cloud deployments with real-world production loads on them (and custom
> development around the APIs) upgrading between major versions is already
> difficult to orchestrate (timing, approvals, etc), if we add in the need to
> re-work large swaths of code due to API changes, it will become even more
> onerous and perhaps drive deployers to forego the upgrades in favor of
> stability.
>
> If the perception is that we don't have stable APIs (especially when we
> are ostensibly versioning them), driving adoption of OpenStack becomes
> significantly more difficult. Difficulty in driving further adoption would
> be a big negative to both the project and the community.
>
> TL;DR, "don't break the contract". If we are seriously making incompatible
> changes (and we will be regardless of the direction) the only reasonable
> option is a new major version
>

I'm absolutely in agreement here - thanks Morgan for raising this.

Changing the API on consumers means forcing them to re-evaluate their
options: "Should I fix my usage of the API, or is it time to try another
solution?  The implementation cost is mostly the same".  We can't assume
that API breakages won't lead to customers leaving.  It's worth noting that
competing cloud APIs are inconsistent, and frankly awful.  But they don't
change because it's all about the commercial interest of retaining
customers and supporting a cornucopia of SDKs.

Any changes to a versioned API need to be completely backwards compatible,
and we shouldn't assume changes aren't going to break things - we should
test the crap out of them so as to ensure this is the case. Or put another
way, any time we touch a stable API, we need to be extremely careful.

If we want new features, if we want to clean up existing interfaces, it's
far better to move to a new API version (even with the maintenance burden
of supporting another API) than try and bolt something on the side.  This
includes improving input validation, because we should not be changing the
functionality presented to end-users on a stable API, even if it's for
their own good.  What it comes down to is strongly supporting the consumers
of our software.  We need to make things easy for those who support and
develop against the APIs.

Hope this helps,

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:26 PM, Christopher Yeoh wrote:
>>> - Whilst we have existing users of the API we also have a lot more
>>>   users in the future. It would be much better to allow them to use
>>>   the API we want to get to as soon as possible, rather than trying
>>>   to evolve the V2 API and forcing them along the transition that
>>> they could otherwise avoid.
>>
>> I'm not sure I understand this.  A key point is that I think any
>> evolving of the V2 API has to be backwards compatible, so there's no
>> forcing them along involved.
> 
> Well other people have been suggesting we can just deprecate parts (be
> it proxying or other bits we really don't like) and then make the
> backwards incompatible change. I think we've already said we'll do it
> for XML for the V2 API and force them off to JSON.

Well, marking deprecated is different than removing it.  We have to get
good data that shows that it's not actually being used before we can
actually remove it.  Marking it deprecated at least signals that we
don't consider it actively maintained and that it may go away in the future.

I also consider the XML situation a bit different than changing
specifics of a given API extension, for example.  We're talking about
potentially removing an entire API vs changing an API while it's in use.

>> 2) Take what we have learned from v3 and apply it to v2.  For example:
>>
> 
>>  - revisit a new major API when we get to the point of wanting to
>>effectively do a re-write, where we are majorly re-thinking the
>>way our API is designed (from an external perspective, not internal
>>implementation).
> 
> Ultimately I think what this would mean is punting any significant API
> improvements several years down the track and effectively throwing away
> a lot of the work we've done in the last year on the API

One of the important questions is how much improvement can we make to v2
without breaking backwards compatibility?

What can we *not* do in a backwards compatible manner?  How much does it
hurt to give those things up?  How does that compare to the cost of dual
maintenance?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 11:48:41 -0500
Jay Pipes  wrote:
> It's not about "forcing" providers to support all of the public API.
> It's about providing a single, well-documented, consistent HTTP REST
> API for *consumers* of that API. Whether a provider chooses to, for
> example, deploy with nova-network or Neutron, or Xen vs. KVM, or
> support block migration for that matter *should have no effect on the
> public API*. The fact that those choices currently *do* effect the
> public API that is consumed by the client is a major indication of
> the weakness of the API.

So for the nova-network/neutron issue it's more a result of either
support for neutron never being implemented or new nova-network features
being added without corresponding neutron support. I agree it's not a
good place to be in, but it isn't really relevant to whether we have
extensions or not.

Similarly with the Xen vs KVM situation I don't think it's an extension-
related issue. In V2 we have features in *core* which are only supported
by some virt backends. It perhaps comes down to not being willing to
say either that we will force all virt backends to support all features
in the API or they don't get in the tree. Or alternatively be willing
to say no to any feature in the API which can not be currently
implemented in all virt backends. The former greatly increases the
barrier to getting a hypervisor included, the latter restricts Nova
development to the speed of the slowest developing and least
mature hypervisor supported.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Chris Friesen

On 02/24/2014 04:01 PM, Morgan Fainberg wrote:


TL;DR, “don’t break the contract”. If we are seriously making
incompatible changes (and we will be regardless of the direction) the
only reasonable option is a new major version.


Agreed.  I don't think we can possibly consider making 
backwards-incompatible changes without changing the version number.


We could stay with V2 and make as many backwards-compatible changes as 
possible using a minor version. This could include things like adding 
support for unified terminology as long as we *also* continue to support 
the old terminology.  The downside of this is that the code gets messy.
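
For illustration, that kind of dual support might look roughly like this
(the parameter names are made up; a sketch, not actual Nova code):

# Sketch: accept both the legacy and the unified parameter name,
# preferring the new one. This keeps old clients working, but it is
# exactly the kind of shim that makes the handler code messier over time.
def get_server_ref(body):
    if 'server_uuid' in body:        # new, unified terminology
        return body['server_uuid']
    if 'instance_uuid' in body:      # legacy spelling, still honoured
        return body['instance_uuid']
    raise ValueError('server_uuid is required')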


On the other hand, if we need to make backwards incompatible changes 
then we need to bump the version number.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Georgy Okrokvertskhov
Hi Keith,

Thank you for bringing up this question. We think that it could be done
inside Heat. This is part of our future roadmap: to bring more stuff to
Heat and pass all the actual work to the Heat engine. However, it will
require collaboration between the Heat and Murano teams, which is why we
want incubated status - to start better integration with other projects as
part of the OpenStack community. I can understand the Heat team refusing
to change Heat templates to satisfy the requirements of a project which
does not officially belong to OpenStack. With incubation status it will be
much easier.
As for the actual work, backups and snapshots are processes. It will be
hard to express them well in the current HOT template format. We expect to
use Mistral resources defined in Heat, which will trigger the events for
backup, while the backup workflow associated with the application can be
defined outside of Heat. I don't think the Heat team will include workflow
definitions as part of the template format, but they can allow us to use
resources which reference such workflows stored in a catalog. It could be
an extension of HOT Software config, for example, but we need to validate
this approach with the Heat team.

The idea of a Heat template generation library/engine is exactly what we
have implemented. The Murano engine uses its own application definition to
generate valid Heat templates from snippets. As there is no preliminary
knowledge of the actual snippet content, the Murano package definition
language allows the application writer to specify application requirements,
application constraints, data transformation rules and assertions to make
the Heat template generation process predictable and manageable. I think
this is an essential part of the Catalog, as it is tightly coupled with the
way applications and their resources are defined.

Thanks
Georgy


On Mon, Feb 24, 2014 at 1:44 PM, Keith Bray wrote:

>  Have you considered writing Heat resource plug-ins that perform (or
> configure within other services) instance snapshots, backups, or whatever
> other maintenance workflow possibilities you want that don't exist?  Then
> these maintenance workflows you mention could be expressed in the Heat
> template forming a single place for the application architecture
> definition, including defining the configuration for services that need to
> be application aware throughout the application's life.  As you describe
> things in Murano, I interpret that you are layering application
> architecture specific information and workflows into a DSL in a layer above
> Heat, which means information pertinent to the application as an ongoing
> concern would be disjoint.  Fragmenting the necessary information to wholly
> define an infrastructure/application architecture could make it difficult
> to share the application and modify the application stack.
>
>  I would be interested in a library that allows for composing Heat
> templates from "snippets" or "fragments" of pre-written Heat DSL... The
> library's job could be to ensure that the snippets, when combined, create a
> valid Heat template free from conflict amongst resources, parameters, and
> outputs.  The interaction with the library, I think, would belong in
> Horizon, and the "Application Catalog" and/or "Snippets Catalog" could be
> implemented within Glance.
>
>  >>>Also, there may be workflow steps which are not covered by Heat by
> design. For example, application publisher may include creating instance
> snapshots, data migrations, backups etc into the deployment or maintenance
> workflows. I don't see how these may be done by Heat, while Murano should
> definitely support these scenarios.
>
>   From: Alexander Tivelkov 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, February 24, 2014 12:18 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Murano] Object-oriented approach for
> defining Murano Applications
>
>   Hi Stan,
>
>  It is good that we are on a common ground here :)
>
>  Of course this can be done by Heat. In fact - it will be, in the very
> same manner as it always was, I am pretty sure we've discussed this many
> times already. When Heat Software config is fully implemented, it will be
> possible to use it instead of our Agent execution plans for software
> configuration - in the very same manner as we use "regular" heat templates
> for resource allocation.
>
>  Heat does indeed support template composition - but we don't want our
> end-users to learn how to do that: we want them just to combine existing
> applications at a higher level. Murano will use the template composition under
> the hood, but only in the way designed by the application publisher.
> If the publisher has decided to configure the software using Heat
> Software Config, then this option will be used. If some other (pro

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Dan Smith
> The API layer is actually quite a thin layer on top of the
> rest of Nova. Most of the logic in the API code is really just
> checking incoming data, calling the underlying nova logic and then
> massaging what is returned into the correct format. So as soon as you
> change the format the cost of localised changes is pretty much the
> same as duplicating the APIs. In fact I'd argue in many cases it's
> more because in terms of code readability it's a lot worse and
> techniques like using decorators for jsonschema input validation
> are a lot harder to implement. And unit and tempest tests still need
> to be duplicated.

Making any change to the backend is double the effort with the two trees
as it would be with one API. I agree that changing/augmenting the format
of a call means some localized "if this then that" code, but that's
minor compared to what it takes to do things on the backend, IMHO.

> I don't understand why this is also not seen as forcing people off
> V2 to V3 which is being given as a reason for not being able to set
> a reasonable deprecation time for V2. This will require major changes
> for people using the V2 API to change how they use it.

Well, deprecating them doesn't require the change. Removing them does. I
think we can probably keep the proxying in a deprecated form for a very
long time, hopefully encouraging new users to "do it right" without
breaking existing users who don't care. Hopefully losing out on the
functionality they miss by not talking directly to Neutron (for example)
will be a good carrot to avoid using the proxy APIs.

> In all the discussions we've (as in the Nova group) had over the API 
> there has been a pretty clear consensus that proxying is quite 
> suboptimal (there are caching issues etc) and the long term goal is
> to remove it from Nova. Why the change now?

This is just MHO, of course. I don't think I've been party to those
conversations. I understand why the proxying is bad, but that's a
different issue from whether we drop it and break our users.

> I strongly disagree here. I think you're overestimating the amount of
> maintenance effort this involves and significantly underestimating
> how much effort and review time a backport is going to take.

Fair enough. I'm going from my experience over the last few cycles of
changing how the API communicates with the backend. This is something
we'll have to continue to evolve over time, and right now it
Sucks Big Time(tm) :)

>> - twice the code
> For starters, it's not twice the code because we don't do things
> like proxying and because we are able to logically separate out
> the input validation jsonschema.

You're right, I should have said "twice the code for changes between the
API and the backend".

> Eg just one simple example, but how many people new to the API get
> confused about what they are meant to send when it asks for
> instance_uuid when they've never received one - is it a server uuid -
> and if so what's the difference? Do I have to do some sort of
> conversion? Similar issues around project and tenant. And when
> writing code they have to remember for this part of the API they pass
> it as server_uuid, in another as instance_uuid, or maybe it's just id?
> All of these looked at individually may look like small costs or
> barriers to using the API but they all add up and they end up being
> imposed over a lot of people.

Yup, it's ugly, no doubt. I think that particular situation is probably
(hopefully?) covered up by the various client libraries (and/or docs)
that we have. If not, I think it's probably something we can improve
from an experience perspective on that end. But yeah, I know the public
API docs would still have that ambiguity.

> And how is say removing proxying or making *any* backwards
> incompatible change any different?

It's not. That's why I said "maybe remove it some day" :)

> Well if you never deprecate the only way to do it is to maintain the 
> old API forever (including test). And just take the hit on all that 
> involves.

Sure. Hopefully people that actually deploy and support our API will
chime in here about whether they think that effort is worth not telling
their users to totally rewrite their clients.

If we keep v2 and v3, I think we start in icehouse with a very large
surface, which will increase over time. If we don't, then we start with
v2 and end up with only the delta over time.

> What about the tasks API? We discussed that at the mid-cycle summit
> and decided that the alternative backwards compatible way of doing it
> was too ugly and we didn't want to do that. But that's exactly what
> we'd be doing if we implemented them in the v2 API and it would be a 
> feature which ends up looking bolted on because of the otherwise 
> significant non backwards compatible API changes we can't do.

If we version the core API and let the client declare the version it
speaks in a header, we could iterate on that interface right? If they're
version >=X return
the task. We 
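
A rough sketch of that idea, with the header name and version cutoff
invented purely for illustration (this is not actual Nova code):

# Sketch: the client declares the API version it speaks in a header and
# the handler only returns the richer response to new-enough clients.
def requested_version(headers, default=(2, 0)):
    raw = headers.get('X-Compute-API-Version')  # hypothetical header name
    if raw is None:
        return default
    major, minor = raw.split('.')
    return int(major), int(minor)

def show_server(headers, server):
    body = {'id': server['uuid'], 'name': server['name']}
    if requested_version(headers) >= (2, 1):
        body['task'] = server.get('task')  # only exposed to >= 2.1 clients
    return body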

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 05:01 PM, Morgan Fainberg wrote:
> On the topic of backwards incompatible changes:
> 
> I strongly believe that breaking current clients that use the APIs
> directly is the worst option possible. All the arguments about needing
> to know which APIs work based upon which backend drivers are used are
> all valid, but making an API incompatible change when we’ve made the
> contract that the current API will be stable is a very bad approach.
> Breaking current clients isn’t just breaking “novaclient", it would also
> break any customers that are developing directly against the API. In the
> case of cloud deployments with real-world production loads on them (and
> custom development around the APIs) upgrading between major versions is
> already difficult to orchestrate (timing, approvals, etc), if we add in
> the need to re-work large swaths of code due to API changes, it will
> become even more onerous and perhaps drive deployers to forego the
> upgrades in favor of stability.
> 
> If the perception is that we don’t have stable APIs (especially when we
> are ostensibly versioning them), driving adoption of OpenStack becomes
> significantly more difficult. Difficulty in driving further adoption
> would be a big negative to both the project and the community.
> 
> TL;DR, “don’t break the contract”. If we are seriously making
> incompatible changes (and we will be regardless of the direction) the
> only reasonable option is a new major version.

FWIW, I do *not* consider non backwards compatible changes to be on the
table for the existing API.  Evolving it would have to be done in a
backwards compatible way.  I'm completely in agreement with that.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 11:13:11 -0500
Russell Bryant  wrote:
> 
> Yes, this is a major concern.  It has taken an enormous amount of work
> to get to where we are, and v3 isn't done.  It's a good time to
> re-evaluate whether we are on the right path.

So I think it's important to point out that we were pretty much "done"
before the last-minute nova-network unfreezing, which became a new
requirement for V3 in I-3, and the unfortunate unexpected delay in the
tasks API work. If either of those hadn't occurred we could have made
up the difference in I-3 - and even then we *could* have made it,
but for reasonable risk-management purposes, with a lot of code to
merge at the last minute, we decided to delay.

> The more I think about it, the more I think that our absolute top goal
> should be to maintain a stable API for as long as we can reasonably do
> so.  I believe that's what is best for our users.  I think if you gave
> people a choice, they would prefer an inconsistent API that works for
> years over dealing with non-backwards compatible jumps to get a nicer
> looking one.
> 
> The v3 API and its unit tests are roughly 25k lines of code.  This
> also doesn't include the changes necessary in novaclient or tempest.
> That's just *our* code.  It explodes out from there into every SDK,
> and then end user apps.  This should not be taken lightly.

So the v2 API and its unit tests are around 43k LOC. And this is even
with the v3 API having more tests for the better input validation we do.

Just taking this down to burden in terms of LOC (and this may be one
of the worst metrics ever). If we proceeded with the v3 API and
maintained the V2 API for say 4 cycles, that's an extra burden of 100k
LOC compared to just doing the v2 API. But we'd pay that off in just 2
and a bit cycles once the v2 API is removed because we'd now be
maintaining around 25k LOC instead of 43k LOC.

> 
> If it's a case of wanting to be more strict, some would argue that the
> current behavior isn't so bad (see robustness principle [1]):
> 
> "Be conservative in what you do, be liberal in what you accept
> from others (often reworded as "Be conservative in what you send, be
> liberal in what you accept")."

Sometimes the problem is that people send extraneous data and they're
never told that what they're doing is wrong. But really no harm
caused, everything still works. I'm sure there are plenty of examples
of this happening. 

But the bigger issue around input validation being too lax is
that people send optional parameters (perhaps with a typo, or perhaps
simply in the wrong place) and the API layer quietly ignores them. The
users think they've requested some behaviour, the API says "yep,
sure!", but it doesn't actually do what they want. We've even seen
this sort of thing in our api samples which automatically flows through
to our documentation!
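
A tiny illustration of that failure mode (purely illustrative, not real
Nova code):

# Sketch: a lax handler silently ignores a mistyped optional parameter,
# so the caller gets the default behaviour with no indication of error.
def reboot(body):
    reboot_type = body.get('type', 'SOFT')
    return 'rebooting (%s)' % reboot_type

print(reboot({'tyep': 'HARD'}))  # prints "rebooting (SOFT)" - no error raised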

> There's a decent counter argument to this, too.  However, I still fall
> back on it being best to just not break existing clients above all
> else.

I agree, we shouldn't break existing clients - within a major version.
That's why we need to make an API rev.

> > - The V3 API as-is has:
> >   - lower maintenance
> >   - is easier to understand and use (consistent).
> >   - Much better input validation which is baked-in (json-schema)
> > rather than ad-hoc and incomplete.
> 
> So here's the rub ... with the exception of the consistency bits, none
> of this is visible to users, which makes me think we should be able to
> do all of this on v2.

As discussed above we can't really do a lot on input validation
either. And I think the pain of doing the backport is being greatly
underestimated. In doing the v3 port we arranged the patches so that much
of it, in terms of review, was similar to doing patches to V2 rather than
starting from "new code". And I know how hard it was to get it all in even
during a period when it was easier to get review bandwidth.

> 
> > - Whilst we have existing users of the API we also have a lot more
> >   users in the future. It would be much better to allow them to use
> >   the API we want to get to as soon as possible, rather than trying
> >   to evolve the V2 API and forcing them along the transition that
> > they could otherwise avoid.
> 
> I'm not sure I understand this.  A key point is that I think any
> evolving of the V2 API has to be backwards compatible, so there's no
> forcing them along involved.

Well other people have been suggesting we can just deprecate parts (be
it proxying or other bits we really don't like) and then make the
backwards incompatible change. I think we've already said we'll do it
for XML for the V2 API and force them off to JSON.

> > - Proposed way forward:
> >   - Release the V3 API in Juno with nova-network and tasks support
> >   - Feature freeze the V2 API when the V3 API is released
> > - Set the timeline for deprecation of V2 so users have a lot
> >   of warning
> > - Fallback for those who really don't want to move after
> >   dep

Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Christopher Armstrong
On Mon, Feb 24, 2014 at 4:20 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi Keith,
>
> Thank you for bringing up this question. We think that it could be done
> inside Heat. This is part of our future roadmap: to bring more stuff to
> Heat and pass all the actual work to the Heat engine. However, it will
> require collaboration between the Heat and Murano teams, which is why we
> want incubated status - to start better integration with other projects as
> part of the OpenStack community. I can understand the Heat team refusing
> to change Heat templates to satisfy the requirements of a project which
> does not officially belong to OpenStack. With incubation status it will be
> much easier.
> As for the actual work, backups and snapshots are processes. It will be
> hard to express them well in the current HOT template format. We expect to
> use Mistral resources defined in Heat, which will trigger the events for
> backup, while the backup workflow associated with the application can be
> defined outside of Heat. I don't think the Heat team will include workflow
> definitions as part of the template format, but they can allow us to use
> resources which reference such workflows stored in a catalog. It could be
> an extension of HOT Software config, for example, but we need to validate
> this approach with the Heat team.
>
>
For what it's worth, there's already precedent for including non-OpenStack
resource plugins in Heat, in a "contrib" directory (which is still tested
with the CI infrastructure).




-- 
IRC: radix
Christopher Armstrong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] why doesn't _rollback_live_migration() always call rollback_live_migration_at_destination()?

2014-02-24 Thread Chris Friesen

I'm looking at the live migration rollback code and I'm a bit confused.

When setting up a live migration we unconditionally run 
ComputeManager.pre_live_migration() on the destination host to do 
various things including setting up networks on the host.


If something goes wrong with the live migration in 
ComputeManager._rollback_live_migration() we will only call 
self.compute_rpcapi.rollback_live_migration_at_destination() if we're 
doing block migration or volume-backed migration that isn't shared storage.
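
In rough pseudocode, my reading of that logic is the following (a
paraphrase for discussion, not the actual ComputeManager code):

# Sketch of the conditional being questioned: destination cleanup only
# happens for block migration or non-shared volume-backed migration.
def _rollback_live_migration(instance, dest, block_migration,
                             is_volume_backed, is_shared_storage,
                             compute_rpcapi):
    # ... revert state on the source host ...
    if block_migration or (is_volume_backed and not is_shared_storage):
        # Only these cases clean up the destination, even though
        # pre_live_migration() set up networking there unconditionally.
        compute_rpcapi.rollback_live_migration_at_destination(instance, dest)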


However, looking at 
ComputeManager.rollback_live_migration_at_destination(), I also see it 
cleaning up networking as well as block device.


What happens if we have a shared-storage instance that we try to migrate 
and fail and end up rolling back?  Are we going to end up with messed-up 
networking on the destination host because we never actually cleaned it up?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-24 Thread Collins, Sean
Make sure that you also log in, or have your username and password handy before 
you redeem it.

If you click a link to send a password reset, you'll lose your session, and the 
invite code is a one-time use – I had to dig through my history to get the URL 
back, since the back button did not work correctly.

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Morgan Fainberg
On the topic of backwards incompatible changes:

I strongly believe that breaking current clients that use the APIs directly is 
the worst option possible. All the arguments about needing to know which APIs 
work based upon which backend drivers are used are all valid, but making an API 
incompatible change when we’ve made the contract that the current API will be 
stable is a very bad approach. Breaking current clients isn’t just breaking 
“novaclient", it would also break any customers that are developing directly 
against the API. In the case of cloud deployments with real-world production 
loads on them (and custom development around the APIs) upgrading between major 
versions is already difficult to orchestrate (timing, approvals, etc), if we 
add in the need to re-work large swaths of code due to API changes, it will 
become even more onerous and perhaps drive deployers to forego the upgrades in 
favor of stability.

If the perception is that we don’t have stable APIs (especially when we are 
ostensibly versioning them), driving adoption of OpenStack becomes 
significantly more difficult. Difficulty in driving further adoption would be a 
big negative to both the project and the community.

TL;DR, “don’t break the contract”. If we are seriously making incompatible 
changes (and we will be regardless of the direction) the only reasonable option 
is a new major version.
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On February 24, 2014 at 10:16:31, Matt Riedemann (mrie...@linux.vnet.ibm.com) 
wrote:



On 2/24/2014 10:13 AM, Russell Bryant wrote:  
> On 02/24/2014 01:50 AM, Christopher Yeoh wrote:  
>> Hi,  
>>  
>> There has recently been some speculation around the V3 API and whether  
>> we should go forward with it or instead backport many of the changes  
>> to the V2 API. I believe that the core of the concern is the extra  
>> maintenance and test burden that supporting two APIs means and the  
>> length of time before we are able to deprecate the V2 API and return  
>> to maintaining only one (well two including EC2) API again.  
>  
> Yes, this is a major concern. It has taken an enormous amount of work  
> to get to where we are, and v3 isn't done. It's a good time to  
> re-evaluate whether we are on the right path.  
>  
> The more I think about it, the more I think that our absolute top goal  
> should be to maintain a stable API for as long as we can reasonably do  
> so. I believe that's what is best for our users. I think if you gave  
> people a choice, they would prefer an inconsistent API that works for  
> years over dealing with non-backwards compatible jumps to get a nicer  
> looking one.  
>  
> The v3 API and its unit tests are roughly 25k lines of code. This also  
> doesn't include the changes necessary in novaclient or tempest. That's  
> just *our* code. It explodes out from there into every SDK, and then  
> end user apps. This should not be taken lightly.  
>  
>> This email is rather long so here's the TL;DR version:  
>>  
>> - We want to make backwards incompatible changes to the API  
>> and whether we do it in-place with V2 or by releasing V3  
>> we'll have some form of dual API support burden.  
>> - Not making backwards incompatible changes means:  
>> - retaining an inconsistent API  
>  
> I actually think this isn't so bad, as discussed above.  
>  
>> - not being able to fix numerous input validation issues  
>  
> I'm not convinced, actually. Surely we can do a lot of cleanup here.  
> Perhaps you have some examples of what we couldn't do in the existing API?  
>  
> If it's a case of wanting to be more strict, some would argue that the  
> current behavior isn't so bad (see robustness principle [1]):  
>  
> "Be conservative in what you do, be liberal in what you accept from  
> others (often reworded as "Be conservative in what you send, be  
> liberal in what you accept")."  
>  
> There's a decent counter argument to this, too. However, I still fall  
> back on it being best to just not break existing clients above all else.  
>  
>> - have to forever proxy for glance/cinder/neutron with all  
>> the problems that entails.  
>  
> I don't think I'm as bothered by the proxying as others are. Perhaps  
> it's not architecturally pretty, but it's worth it to maintain  
> compatibility for our users.  

+1 to this, I think this is also related to what Jay Pipes is saying in  
his reply:  

"Whether a provider chooses to, for example,  
deploy with nova-network or Neutron, or Xen vs. KVM, or support block  
migration for that matter *should have no effect on the public API*. The  
fact that those choices currently *do* effect the public API that is  
consumed by the client is a major indication of the weakness of the API."  

As a consumer, I don't want to have to know which V2 APIs work and which  
don't depending on if I'm using nova-network or Neutron.  

>  
>> - Backporting V3 infrastructure changes to V2 would

[openstack-dev] [Ironic] Starting to postpone work to Juno

2014-02-24 Thread Devananda van der Veen
Hi all,

For the last few meetings, we've been discussing how to prioritize the work
that we need to get done as we approach the close of Icehouse development.
There's still some distance between where we are and where we need to be --
integration with other projects (eg. Nova), CI testing of that integration
(eg. via devstack), and fixing bugs that we continue to find.

As core reviewers need to focus their time during the last week of I-3,
we've discussed postponing cosmetic changes, particularly patches that just
refactor code without any performance or feature benefit, to the start of
Juno. [1] So, later today I am going to block patches that do not have
important functional changes and are non-trivial in scope (eg, take more
than a minute to read), are related to low-priority or wishlist items, or
are not targeted to Icehouse.

Near the end of the week, I will retarget incomplete blueprints to the Juno
release.

Next week is the TripleO developer sprint, which coincides with the close
of I-3. Many Ironic developers and more than half of our core review team
will also be there. This will give us a good opportunity to hammer out
testing and integration issues and work on bug fixes.

Over the next month, I would like us to stabilize what we have, add further
integration and functional testing to our gate, and write deployer/usage
documentation.

Regards,
Devananda


[1]

We actually voted on this last week, I didn't follow through, and Chris
reminded me during the meeting today...

http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-02-17-19.00.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-24 Thread Stefano Maffulli
On 02/17/2014 05:21 PM, Steve Kowalik wrote:
> I found it completely non-obvious too, and had to go back and look for
> the link. If the promotion code text box was always visible with the
> Apply button grayed out when the text box is empty, I think that would help.

Unfortunately the site is managed by eventbrite and we have little
control over their UX choices.

Since we know it's quite easy to miss the spot to redeem the invitation
code, we include a screenshot in the invitation email: there is an arrow
there, showing where to click to enter the discount code. If you have
other ideas on how to make the process more obvious let us know.

Cheers,
Stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Keith Bray
Have you considered writing Heat resource plug-ins that perform (or configure 
within other services) instance snapshots, backups, or whatever other 
maintenance workflow possibilities you want that don't exist?  Then these 
maintenance workflows you mention could be expressed in the Heat template 
forming a single place for the application architecture definition, including 
defining the configuration for services that need to be application aware 
throughout the application's life.  As you describe things in Murano, I 
interpret that you are layering application architecture specific information 
and workflows into a DSL in a layer above Heat, which means information 
pertinent to the application as an ongoing concern would be disjoint.  
Fragmenting the necessary information to wholly define an 
infrastructure/application architecture could make it difficult to share the 
application and modify the application stack.
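
For illustration, a rough skeleton of such a resource plug-in (the resource
name and properties are invented, and the plug-in API is approximated from
memory, so the exact accessors depend on the Heat version):

# Sketch of a contrib-style resource that snapshots a server on create.
from heat.engine import properties
from heat.engine import resource

class InstanceSnapshot(resource.Resource):
    properties_schema = {
        'instance_id': properties.Schema(
            properties.Schema.STRING,
            description='Server to snapshot.',
            required=True),
    }

    def handle_create(self):
        # self.nova() stands in for "a nova client"; the exact accessor
        # varies between Heat releases.
        image_id = self.nova().servers.create_image(
            self.properties['instance_id'], self.physical_resource_name())
        self.resource_id_set(image_id)

def resource_mapping():
    # How the engine discovers the new resource type.
    return {'My::Compute::InstanceSnapshot': InstanceSnapshot}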

I would be interested in a library that allows for composing Heat templates 
from "snippets" or "fragments" of pre-written Heat DSL... The library's job 
could be to ensure that the snippets, when combined, create a valid Heat 
template free from conflict amongst resources, parameters, and outputs.  The 
interaction with the library, I think, would belong in Horizon, and the 
"Application Catalog" and/or "Snippets Catalog" could be implemented within 
Glance.
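
A minimal sketch of the conflict check such a library could perform
(illustrative only; as far as I know no such library exists today):

def merge_snippets(*snippets):
    """Combine template fragments, refusing conflicting definitions."""
    merged = {'heat_template_version': '2013-05-23',
              'parameters': {}, 'resources': {}, 'outputs': {}}
    for snippet in snippets:
        for section in ('parameters', 'resources', 'outputs'):
            for name, body in snippet.get(section, {}).items():
                existing = merged[section].get(name)
                if existing is not None and existing != body:
                    raise ValueError('%s "%s" is defined twice with '
                                     'different bodies' % (section, name))
                merged[section][name] = body
    return merged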

>>>Also, there may be workflow steps which are not covered by Heat by design. 
>>>For example, application publisher may include creating instance snapshots, 
>>>data migrations, backups etc into the deployment or maintenance workflows. I 
>>>don't see how these may be done by Heat, while Murano should definitely 
>>>support these scenarios.

From: Alexander Tivelkov <ativel...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, February 24, 2014 12:18 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Murano] Object-oriented approach for defining 
Murano Applications

Hi Stan,

It is good that we are on a common ground here :)

Of course this can be done by Heat. In fact - it will be, in the very same 
manner as it always was; I am pretty sure we've discussed this many times 
already. When Heat Software config is fully implemented, it will be possible to 
use it instead of our Agent execution plans for software configuration - in the 
very same manner as we use "regular" heat templates for resource allocation.

Heat does indeed support template composition - but we don't want our end-users 
to learn how to do that: we want them just to combine existing applications 
at a higher level. Murano will use the template composition under the hood, but 
only in the way designed by the application publisher. If the publisher 
has decided to configure the software using Heat Software Config, then 
this option will be used. If some other (probably legacy) way of doing 
this was preferred, Murano should be able to support that and allow creating 
such workflows.

Also, there may be workflow steps which are not covered by Heat by design. For 
example, application publisher may include creating instance snapshots, data 
migrations, backups etc into the deployment or maintenance workflows. I don't 
see how these may be done by Heat, while Murano should definitely support these 
scenarios.

So, as a conclusion, Murano should not be thought of as a Heat alternative: it 
is a different tool located on a different layer of the stack, aimed at a 
different user audience - and, most importantly, using Heat underneath.


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 8:36 PM, Stan Lagun <sla...@mirantis.com> wrote:
Hi Alex,

Personally I like the approach and how you explain it. I would just like to 
know your opinion on how this is better than someone writing a Heat template 
that creates Active Directory, let's say with one primary and one secondary 
controller, and then publishing it somewhere. Since Heat does support software 
configuration as of late, and has the concept of environments [1] that Steven 
Hardy generously pointed out in another mailing thread, which can be used for 
composition as well, it seems like everything you said can be done by Heat alone

[1]: 
http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html


On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov <ativel...@mirantis.com> wrote:
Sorry folks, I didn't put the proper image url. Here it is:


https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov <ativel...@mirantis.com> wrote:

Hi,


I would like to initiate one more discussion about an approach we selected to 
solve a particular problem in Murano.

The problem statement is 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Christopher Yeoh
On Mon, 24 Feb 2014 07:56:19 -0800
Dan Smith  wrote:

> > - We want to make backwards incompatible changes to the API
> >   and whether we do it in-place with V2 or by releasing V3
> >   we'll have some form of dual API support burden.
> 
> IMHO, the cost of maintaining both APIs (which are largely duplicated)
> for almost any amount of time outweighs the cost of localized changes.

The API layer is actually quite a thin layer on top of the rest
of Nova. Most of the logic in the API code is really just checking
incoming data, calling the underlying nova logic and then massaging
what is returned into the correct format. So as soon as you change the
format the cost of localised changes is pretty much the same as
duplicating the APIs. In fact I'd argue in many cases it's more because
in terms of code readability it's a lot worse and techniques like using
decorators for jsonschema input validation are a lot harder to
implement. And unit and tempest tests still need to be duplicated.
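
For illustration, such a decorator might look roughly like this (a sketch,
not the actual v3 code):

import functools
import jsonschema

def validated(schema):
    # Reject any request body that does not match the given schema.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, body, *args, **kwargs):
            jsonschema.validate(body, schema)  # raises ValidationError
            return func(self, body, *args, **kwargs)
        return wrapper
    return decorator

resize_schema = {
    'type': 'object',
    'properties': {'flavorRef': {'type': 'string'}},
    'required': ['flavorRef'],
    # extra or mistyped parameters fail loudly instead of being ignored
    'additionalProperties': False,
}

class ServerController(object):
    @validated(resize_schema)
    def resize(self, body):
        return 'resizing to %s' % body['flavorRef']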

> 
> The neutron stickiness aside, I don't see a problem leaving the
> proxying in place for the foreseeable future. I think that it's
> reasonable to mark them as deprecated, encourage people not to use
> them, and maybe even (with a core api version to mark the change) say
> that they're not supported anymore.
> 

I don't understand why this is also not seen as forcing people off V2
to V3, which is being given as a reason for not being able to set a
reasonable deprecation time for V2. This will require major changes in
how people use the V2 API. 


> I also think that breaking our users because we decided to split A
> into B and C on the backend kind of sucks. I imagine that continuing
> to do that at the API layer (when we're clearly going to keep doing
> it on the backend) is going to earn us a bit of a reputation.

In all the discussions we've (as in the Nova group) had over the API
there has been a pretty clear consensus that proxying is quite
suboptimal (there are caching issues etc) and the long term goal is to
remove it from Nova. Why the change now? 

> 
> >   - Backporting V3 infrastructure changes to V2 would be a
> > considerable amount of programmer/review time
> 
> While acknowledging that you (and others) have done that for v3
> already, I have to think that such an effort is much less costly than
> maintaining two complete overlapping pieces of API code.

I strongly disagree here. I think you're overestimating the
amount of maintenance effort this involves and significantly
underestimating how much effort and review time a backport is going to
take.

> - twice the code
> - different enough to be annoying to convert existing clients to use
> - not currently different enough to justify the pain

For starters, it's not twice the code because we don't do things like
proxying and because we are able to logically separate out the input
validation jsonschema. 

v2 API: ~14600 LOC
v3 API: ~7300 LOC (~8600 LOC if nova-network as-is added back in,
though the actual increase would almost certainly be a lot smaller)

And that's with a lot of the jsonschema patches not landed. So it's
actually getting *smaller*. Long term, which looks better from a
maintenance point of view? 

And I think you're continuing to look at it solely from the point of
view of pain for existing users of the API and not considering the pain
for new users who have to work out how to use the API. Eg just one
simple example, but how many people new to the API get confused about
what they are meant to send when it asks for instance_uuid when
they've never received one - is it a server uuid - and if so what's the
difference? Do I have to do some sort of conversion? Similar issues
around project and tenant. And when writing code they have to remember
that for this part of the API they pass it as server_uuid, in another
as instance_uuid, or maybe it's just id? All of these looked at
individually may look like small costs or barriers to using the API but
they all add up and they end up being imposed over a lot of people.

> This feels a lot like holding our users hostage in order to get them
> to move. We're basically saying "We tweaked a few things, fixed some
> spelling errors, and changed some date stamp formats. You will have to
> port your client, or no new features for you!" That's obviously a
> little hyperbolic, but I think that deployers of APIv2 would probably
> feel like that's the story they have to give to their users.

And how is say removing proxying or making *any* backwards incompatible
change any different? And this sort of situation is very common with
major library version upgrades. If you want new features you have to
port to the library version which requires changes to your app (that's
why it's a major library version, not a minor one).

> I naively think that we could figure out a way to move things forward
> without having to completely break older clients. It's clear that
> other services (with much larger and mor

Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Hi Jay,

Thanks for suggestions. I get the idea.
I'm not sure the essence of this API is much different from what we have
now.
1) We operate on parameters of the loadbalancer rather than on
vips/pools/listeners. No matter how we name them, the notions are there.
2) I see two opposite preferences: one is that the user doesn't care about
the 'loadbalancer' and favors pools/vips/listeners (a 'pure logical API'),
the other is vice versa (yours).
3) The approach of providing $BALANCER_ID to pretty much every call solves
all my concerns, I like it.
Basically that was my initial code proposal (it's not exactly the same, but
it's very close).
The idea of my proposal was to have that 'balancer' resource plus being
able to operate on vips/pools/etc.
In this direction we could evolve from the existing API to the API in your
latest suggestion.

Thanks,
Eugene.


On Tue, Feb 25, 2014 at 12:35 AM, Jay Pipes  wrote:

> Thanks, Eugene! I've given the API a bit of thought today and jotted
> down some thoughts below.
>
> On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
> > Could you provide some examples -- even in the pseudo-CLI
> > commands like
> > I did below. It's really difficult to understand where the
> > limits are
> > without specific examples.
> > You know, I always look at the API proposal from implementation
> > standpoint also, so here's what I see.
> > In the cli workflow that you described above, everything is fine,
> because the driver knows how and where to deploy each object
> > that you provide in your command, because it's basically a batch.
>
> Yes, that is true.
>
> When we're talking about separate objects that form a loadbalancer -
> vips, pools, members, it becomes unclear how to map them to backends
> > and at which point.
>
> Understood, but I think we can make some headway here. Examples below.
>
> > So here's an example I usually give:
> > We have 2 VIPs (in fact, one address and 2 ports listening for http
> > and https, now we call them listeners),
> > both listeners pass requests to a webapp server farm, and the http listener
> > also passes requests to static image servers by processing incoming
> > request URIs with L7 rules.
> > So object topology is:
> >
> >
> >  Listener1 (addr:80)   Listener2(addr:443)
> >| \/
> >| \/
> >|  X
> >|  / \
> >  pool1(webapp) pool2(static imgs)
> > sorry for that stone age pic :)
> >
> >
> > The proposal that we discuss can create such object topology by the
> > following sequence of commands:
> > 1) create-vip --name VipName address=addr
> > returns vid_id
> > 2) create-listener --name listener1 --port 80 --protocol http --vip_id
> > vip_id
> > returns listener_id1
> > 3) create-listener --name listener2 --port 443 --protocol https
> > --sl-params params --vip_id vip_id
> >
> > returns listener_id2
>
> > 4) create-pool --name pool1 
> >
> > returns pool_id1
> 5) create-pool --name pool2 
> > returns pool_id2
> >
> > 6) set-listener-pool listener_id1 pool_id1 --default
> > 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
> >
> 8) set-listener-pool listener_id2 pool_id1 --default
>
> > That's a generic workflow that allows you to create such config. The
> > question is at which point the backend is chosen.
>
> From a user's perspective, they don't care about VIPs, listeners or
> pools :) All the user cares about is:
>
>  * being able to add or remove backend nodes that should be balanced
> across
>  * being able to set some policies about how traffic should be directed
>
> I do realize that AWS ELB uses the term "listener" in its API, but
> I'm not convinced this is the best term. And I'm not convinced that
> there is a need for a "pool" resource at all.
>
> Could the above steps #1 through #6 be instead represented in the
> following way?
>
> # Assume we've created a load balancer with ID $BALANCER_ID using
> # Something like I showed in my original response:
>
> neutron balancer-create --type=advanced --front= \
>  --back= --algorithm="least-connections" \
>  --topology="active-standby"
>
> neutron balancer-configure $BALANCER_ID --front-protocol=http \
>  --front-port=80 --back-protocol=http --back-port=80
>
> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>  --front-port=443 --back-protocol=https --back-port=443
>
> Likewise, we could configure the load balancer to send front-end HTTPS
> traffic (terminated at the load balancer) to back-end HTTP services:
>
> neutron balancer-configure $BALANCER_ID --front-protocol=https \
>  --front-port=443 --back-protocol=http --back-port=80
>
> No mention of listeners, VIPs, or pools at all.
>
> The REST API for the balancer-update CLI command above might be
> something like this:
>
> PUT /balancers/{balancer_id}
>
> with JSON body of request like so:
>
> {
>   "

Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-24 Thread Sergey Lukjanov
Unanimously.

Congratulations, Andrew, welcome to the core team!


On Fri, Feb 21, 2014 at 4:46 PM, Matthew Farrellee  wrote:

> On 02/19/2014 05:40 PM, Sergey Lukjanov wrote:
>
>> Hey folks,
>>
>> I'd like to nominate Andrew Lazarew (alazarev) for savanna-core.
>>
>> He is among the top reviewers of Savanna subprojects. Andrew has been
>> working on Savanna full time since September 2013 and is very familiar
>> with the current codebase. His code contributions and reviews have
>> demonstrated a good knowledge of Savanna internals. Andrew has valuable
>> knowledge of both the core and EDP parts, the IDH plugin and Hadoop
>> itself. He works on both bug fixes and new feature implementation.
>>
>> Some links:
>>
>> http://stackalytics.com/report/reviews/savanna-group/30
>> http://stackalytics.com/report/reviews/savanna-group/90
>> http://stackalytics.com/report/reviews/savanna-group/180
>> https://review.openstack.org/#/q/owner:alazarev+savanna+AND+
>> -status:abandoned,n,z
>> https://launchpad.net/~alazarev
>>
>> Savanna cores, please, reply with +1/0/-1 votes.
>>
>> Thanks.
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
> fyi, some of those links don't work, but these do,
>
> http://stackalytics.com/report/contribution/savanna-group/30
> http://stackalytics.com/report/contribution/savanna-group/90
> http://stackalytics.com/report/contribution/savanna-group/180
>
> i'm very happy to see andrew evolving in the savanna community, making
> meaningful contributions, demonstrating a reasoned approach to resolve
> disagreements, and following guidelines such as GitCommitMessages more
> closely. i expect he will continue his growth as well as influence others
> to contribute positively.
>
> +1
>
> best,
>
>
> matt
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Feedback on SSL implementation

2014-02-24 Thread Eugene Nikanorov
Hi,

Barbican is the storage option we're considering, however it seems that
there's not much progress with incubation of it.

Another weak point of our current state is the lack of secure communication
between neutron server and the agent, but that is solvable.

Thanks,
Eugene.


On Fri, Feb 21, 2014 at 11:42 PM, Jay Pipes  wrote:

> On Wed, 2014-02-19 at 22:01 -0800, Stephen Balukoff wrote:
>
> > Front-end versus back-end protocols:
> > It's actually really common for a HTTPS-enabled front-end to speak
> > HTTP to the back-end.  The assumption here is that the back-end
> > network is "trusted" and therefore we don't need to bother with the
> > (considerable) extra CPU overhead of encrypting the back-end traffic.
> > To be honest, if you're going to speak HTTPS on the front-end and the
> > back-end, then the only possible reason for even terminating SSL on
> > the load balancer is to insert the X-Forwarded-For header. In this
> > scenario, you lose almost all the benefit of doing SSL offloading at
> > all!
>
> This is exactly correct.
>
> > If we make a policy decision right here not to allow front-end and
> > back-end protocol to mismatch, this will break a lot of topologies.
>
> Yep.
>
> Best,
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-24 Thread Sean Dague
On 02/24/2014 03:10 PM, Ben Nemec wrote:
> On 2014-02-21 17:09, Sean Dague wrote:
>> On 02/21/2014 05:28 PM, Clark Boylan wrote:
>>> On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec 
>>> wrote:
 On 2014-02-21 13:01, Mike Spreitzer wrote:

 https://bugs.launchpad.net/devstack/+bug/1203680 is literally about
 Glance
 but Nova has the same problem.  There is a fix released, but just
 merging
 that fix accomplishes nothing --- we need people who run DevStack to
 set the
 new variable (INSTALL_TESTONLY_PACKAGES).  This is something that
 needs to
 be documented (in http://devstack.org/configuration.html and all the
 places
 that tell people how to do unit testing, for examples), so that
 people know
 to do it, right?



 IMHO, that should be enabled by default.  Every developer using
 devstack is
 going to want to run unit tests at some point (or should anyway...),
 and if
 the gate doesn't want the extra install time for something like
 tempest that
 probably doesn't need these packages, then it's much simpler to
 disable it
 in that one config instead of every separate config used by every
 developer.

 -Ben

>>>
>>> I would be wary of relying on devstack to configure your unittest
>>> environments. Just like it takes over the node you run it on, devstack
>>> takes full ownership of the repos it clones and will do potentially
>>> lossy things like `git reset --hard` when you don't expect it to. +1
>>> to documenting the requirements for unittesting, not sure I would
>>> include devstack in that documentation.
>>
>> Agreed, I never run unit tests in the devstack tree. I run them on my
>> laptop or other non dedicated computers. That's why we do unit tests in
>> virtual envs, they don't need a full environment.
>>
>> Also many of the unit tests can't be run when openstack services are
>> actually running, because they try to bind to ports that openstack
>> services use.
>>
>> It's one of the reasons I've never considered that path a priority in
>> devstack.
>>
>> -Sean
>>
> 
> What is the point of devstack if we can't use it for development?  

It builds you a consistent cloud.

> Are
> we really telling people that they shouldn't be altering the code in
> /opt/stack because it's owned by devstack, and devstack reserves the
> right to blow it away any time it feels the urge? 

Actually, I tell people that all the time. Most of them don't listen to
me. :)

Devstack defaults to RECLONE=False, but that tends to break people in
other ways (like having month old trees they are building against). But
the reality is I've watched tons of people have their work reset on them
because they were developing in /opt/stack, so I tell people don't do
that (and if they do it anyway, at least they realize it's dangerous).

> And if that's not
> what we're saying, aren't they going to want to run unit tests before
> they push their changes from /opt/stack?  I don't think it's reasonable
> to tell them that they have to copy their code to another system to run
> unit tests on it.

Devstack can clone from alternate sources, and that's my approach on
anything long running. For instance, keeping trees in ~/code/ and
adjusting localrc to use those trees/branches that I'm using (with the
added benefit of being able to easily reclone the rest of the tree).
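(For example, a localrc fragment along those lines - the paths here are
illustrative, while NOVA_REPO/NOVA_BRANCH and RECLONE are standard
devstack variables:)

NOVA_REPO=/home/me/code/nova
NOVA_BRANCH=my-feature-branch
RECLONE=False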

Lots of people use devstack + vagrant, and do basically the same thing
with their laptop repos being mounted up into the guest.

And some people do it the way you are suggesting above.

The point is, for better or worse, what we have is a set of tools from
which you can assemble a workflow that suits your needs. We don't have a
prescribed "this is the one way to develop" approach. There is some
assumption that you'll pull together something from the tools provided.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Jay Pipes
Thanks, Eugene! I've given the API a bit of thought today and jotted
down some thoughts below.

On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
> Could you provide some examples -- even in the pseudo-CLI
> commands like
> I did below. It's really difficult to understand where the
> limits are
> without specific examples.
> You know, I always look at the API proposal from implementation
> standpoint also, so here's what I see.
> In the cli workflow that you described above, everything is fine,
> because the driver knows how and where to deploy each object
> that you provide in your command, because it's basically a batch.

Yes, that is true.

> When we're talking about separate objects that form a loadbalancer -
> vips, pools, members, it becomes unclear how to map them to backends
> and at which point.

Understood, but I think we can make some headway here. Examples below.

> So here's an example I usually give:
> We have 2 VIPs (in fact, one address and 2 ports listening for http
> and https, now we call them listeners), 
> both listeners pass requests to a webapp server farm, and the http listener
> also passes requests to static image servers by processing incoming
> request URIs with L7 rules.
> So object topology is:
> 
> 
>  Listener1 (addr:80)   Listener2(addr:443)
>| \/
>| \/
>|  X
>|  / \
>  pool1(webapp) pool2(static imgs)
> sorry for that stone age pic :)
> 
> 
> The proposal that we discuss can create such object topology by the
> following sequence of commands:
> 1) create-vip --name VipName address=addr
> returns vid_id
> 2) create-listener --name listener1 --port 80 --protocol http --vip_id
> vip_id
> returns listener_id1
> 3) create-listener --name listener2 --port 443 --protocol https
> --sl-params params --vip_id vip_id
> 
> returns listener_id2

> 4) create-pool --name pool1 
> 
> returns pool_id1
> 5) create-pool --name pool2 
> returns pool_id2
> 
> 6) set-listener-pool listener_id1 pool_id1 --default
> 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
> 
> 8) set-listener-pool listener_id2 pool_id1 --default

> That's a generic workflow that allows you to create such config. The
> question is at which point the backend is chosen.

From a user's perspective, they don't care about VIPs, listeners or
pools :) All the user cares about is:

 * being able to add or remove backend nodes that should be balanced
across
 * being able to set some policies about how traffic should be directed

I do realize that AWS ELB uses the term "listener" in its API, but
I'm not convinced this is the best term. And I'm not convinced that
there is a need for a "pool" resource at all.

Could the above steps #1 through #6 be instead represented in the
following way?

# Assume we've created a load balancer with ID $BALANCER_ID using
# Something like I showed in my original response:

neutron balancer-create --type=advanced --front= \
 --back= --algorithm="least-connections" \
 --topology="active-standby"

neutron balancer-configure $BALANCER_ID --front-protocol=http \
 --front-port=80 --back-protocol=http --back-port=80

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=https --back-port=443

Likewise, we could configure the load balancer to send front-end HTTPS
traffic (terminated at the load balancer) to back-end HTTP services:

neutron balancer-configure $BALANCER_ID --front-protocol=https \
 --front-port=443 --back-protocol=http --back-port=80

No mention of listeners, VIPs, or pools at all.

The REST API for the balancer-update CLI command above might be
something like this:

PUT /balancers/{balancer_id}

with JSON body of request like so:

{
  "front-port": 443,
  "front-protocol": "https",
  "back-port": 80,
  "back-protocol": "http"
}

And the code handling the above request would simply look to see if the
load balancer had a "routing entry" for the front-end port and protocol
of (443, https) and set the entry to route to back-end port and protocol
of (80, http).
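As a rough sketch of that handler logic (the names here - update_balancer,
balancer.routes - are purely illustrative, not real Neutron code):

def update_balancer(balancer, body):
    # Key the balancer's routing table by front-end (port, protocol).
    front = (body['front-port'], body['front-protocol'])
    back = (body['back-port'], body['back-protocol'])
    # Create the routing entry if it's missing, else update it.
    balancer.routes[front] = back
    return balancer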

For the advanced L7 policy heuristics, it makes sense to me to use a
similar strategy. For example (using a similar example from ELB):

neutron l7-policy-create --type="ssl-negotiation" \
 --attr=ProtocolSSLv3=true \
 --attr=ProtocolTLSv1.1=true \
 --attr=DHE-RSA-AES256-SHA256=true \
 --attr=Server-Defined-Cipher-Order=true

Presume above returns an ID for the policy $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer by
doing:

neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=443

There's no need to specify --front-port of course, since the policy only
applies to the front-end.

There is also no need to refer to a "listener" object, no need to call
anything a VIP, nor any reason to use the

Re: [openstack-dev] OpenStack and GSoC 2014

2014-02-24 Thread Victoria Martínez de la Cruz
So happy to hear that! Congrats all!


2014-02-24 16:16 GMT-03:00 Davanum Srinivas :

> Hi all,
>
> We're in! Just got notified by Admin Team that our Organization
> Application has been accepted. I've updated the etherpad with the full
> responses from them.
>
> https://etherpad.openstack.org/p/gsoc2014orgapp
>
> thanks,
> dims
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-24 Thread Ben Nemec

On 2014-02-21 17:09, Sean Dague wrote:

On 02/21/2014 05:28 PM, Clark Boylan wrote:
On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec  
wrote:

On 2014-02-21 13:01, Mike Spreitzer wrote:

https://bugs.launchpad.net/devstack/+bug/1203680 is literally about 
Glance
but Nova has the same problem.  There is a fix released, but just 
merging
that fix accomplishes nothing --- we need people who run DevStack to 
set the
new variable (INSTALL_TESTONLY_PACKAGES).  This is something that 
needs to
be documented (in http://devstack.org/configuration.html and all the 
places
that tell people how to do unit testing, for examples), so that 
people know

to do it, right?



IMHO, that should be enabled by default.  Every developer using 
devstack is
going to want to run unit tests at some point (or should anyway...), 
and if
the gate doesn't want the extra install time for something like 
tempest that
probably doesn't need these packages, then it's much simpler to 
disable it
in that one config instead of every separate config used by every 
developer.


-Ben



I would be wary of relying on devstack to configure your unittest
environments. Just like it takes over the node you run it on, devstack
takes full ownership of the repos it clones and will do potentially
lossy things like `git reset --hard` when you don't expect it to. +1
to documenting the requirements for unittesting, not sure I would
include devstack in that documentation.


Agreed, I never run unit tests in the devstack tree. I run them on my
laptop or other non dedicated computers. That's why we do unit tests in
virtual envs, they don't need a full environment.

Also many of the unit tests can't be run when openstack services are
actually running, because they try to bind to ports that openstack
services use.

It's one of the reasons I've never considered that path a priority in
devstack.

-Sean



What is the point of devstack if we can't use it for development?  Are 
we really telling people that they shouldn't be altering the code in 
/opt/stack because it's owned by devstack, and devstack reserves the 
right to blow it away any time it feels the urge?  And if that's not 
what we're saying, aren't they going to want to run unit tests before 
they push their changes from /opt/stack?  I don't think it's reasonable 
to tell them that they have to copy their code to another system to run 
unit tests on it.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Satori Project Update (Configuration Discovery)

2014-02-24 Thread Ziad Sawalha
We had our first team meeting[1] today and will be holding weekly team meetings 
on Mondays at 15:00 UTC on #openstack-meeting-alt.

An early prototype of Satori is available on pypi [2].

We’re working towards adding the following features before making an 
announcement to the user list on availability of satori:

- usability improvements such as updated docs and additional CLI error trapping
- include an in-host discovery component (that logs on to servers and discovers
running and/or installed software).

We’re available on #satori and eager to get feedback on the work we are doing.

Ziad

[1] https://wiki.openstack.org/wiki/Satori/MeetingLogs
[2] https://pypi.python.org/pypi/satori


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-02-24 Thread David Peraza
Thanks John,

I also think it is a good idea to test the algorithm at the unit test level,
but I would like to try it out over AMQP as well, that is, with processes and
threads talking to each other over rabbit or qpid. I'm trying to test out
performance as well.
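(A rough, untested sketch of the threading idea from the original post:
launch many nova-compute services with the fake virt driver inside one
process. It assumes nova.service.Service.create() honors a per-service
host override and that compute_driver=nova.virt.fake.FakeDriver is set
in nova.conf - both worth verifying:)

import sys

import eventlet
eventlet.monkey_patch()

from nova import config
from nova import service

def main():
    config.parse_args(sys.argv)
    servers = []
    for i in range(100):
        # One Service object per simulated compute host, all in a
        # single process, so the scheduler sees 100 candidates.
        srv = service.Service.create(binary='nova-compute',
                                     host='fake-host-%03d' % i)
        srv.start()
        servers.append(srv)
    for srv in servers:
        srv.wait()

if __name__ == '__main__':
    main()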

Regards,
David Peraza

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Monday, February 24, 2014 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for 
scheduler testing

On 24 February 2014 16:24, David Peraza  wrote:
> Hello all,
>
> I have been trying some new ideas on scheduler and I think I'm
> reaching a resource issue. I'm running 6 compute services right on my 4
> CPU 4 Gig VM, and I started to get some memory allocation issues.
> Keystone and Nova are already complaining there is not enough memory.
> The obvious solution to add more candidates is to get another VM and set
> up another 6 fake compute services.
> I could do that but I think I need to be able to scale more without
> needing this many resources. I would like to simulate a cloud
> of 100, maybe
> 1000, compute nodes that do nothing (fake driver); this should not take
> this much memory. Does anyone know of a more efficient way to simulate
> many computes? I was thinking of changing the fake driver to report many
> compute services in different threads instead of having to spawn a
> process per compute service. Any other ideas?

It depends what you want to test, but I was able to look at tuning the filters 
and weights using the test at the end of this file:
https://review.openstack.org/#/c/67855/33/nova/tests/scheduler/test_caching_scheduler.py

Cheers,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-24 Thread W Chan
Renat,

Regarding your comments on change https://review.openstack.org/#/c/75609/,
I don't think the port to oslo.messaging is just a swap from pika to
oslo.messaging.  OpenStack services, as I understand it, are usually
implemented as an RPC client/server over a messaging transport.  Sync vs async calls
are done via the RPC client call and cast respectively.  The messaging
transport is abstracted and concrete implementation is done via
drivers/plugins.  So the architecture of the executor if ported to
oslo.messaging needs to include a client, a server, and a transport.  The
consumer (in this case the mistral engine) instantiates an instance of the
client for the executor, makes the method call to handle task, the client
then sends the request over the transport to the server.  The server picks
up the request from the exchange and processes the request.  If cast
(async), the client side returns immediately.  If call (sync), the client
side waits for a response from the server over a reply_q (a unique queue
for the session in the transport).  Also, oslo.messaging allows versioning
in the message. Major version change indicates API contract changes.  Minor
version indicates backend changes but with API compatibility.
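(For readers unfamiliar with that pattern, here is a rough, self-contained
sketch of it in oslo.messaging; the topic, server and method names are
made up for illustration and are not Mistral's actual ones:)

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)  # rabbit by default

class ExecutorEndpoint(object):
    # Hypothetical endpoint: the server-side handler for task requests.
    def handle_task(self, ctxt, task):
        return {'task': task, 'state': 'SUCCESS'}

# Server side: picks requests for the 'executor' topic off the exchange.
server = messaging.get_rpc_server(
    transport,
    messaging.Target(topic='executor', server='executor-1'),
    [ExecutorEndpoint()])
server.start()

# Client side (e.g. instantiated by the engine):
client = messaging.RPCClient(transport,
                             messaging.Target(topic='executor'))
client.cast({}, 'handle_task', task='task-123')           # async
result = client.call({}, 'handle_task', task='task-456')  # sync, waits on the reply queue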

So, where I'm headed with this change...  I'm implementing the basic
structure/scaffolding for the new executor service using oslo.messaging
(default transport with rabbit).  Since the whole change will take a few
rounds, I don't want to disrupt any changes that the team is making at the
moment and so I'm building the structure separately.  I'm also adding
versioning (v1) in the module structure to anticipate any versioning
changes in the future.   I expect the change request will lead to some
discussion as we are doing here.  I will migrate the core operations of the
executor (handle_task, handle_task_error, do_task_action) to the server
component when we agree on the architecture and switch the consumer
(engine) to use the new RPC client for the executor instead of sending the
message to the queue over pika.  Also, the launcher for
./mistral/cmd/task_executor.py will change as well in subsequent round.  An
example launcher is here
https://github.com/uhobawuhot/interceptor/blob/master/bin/interceptor-engine.
 The interceptor project here is what I use to research how oslo.messaging
works.  I hope this is clear. The blueprint only changes how the request
and response are being transported.  It shouldn't change how the executor
currently works.

Finally, can you clarify the difference between local vs scalable engine?
 I personally do not prefer to explicitly name the engine scalable because
this requirement should be in the engine by default and we do not need to
explicitly state/separate that.  But if this is a roadblock for the change,
I can put the scalable structure back in the change to move this forward.

Thanks.
Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday February 25th at 19:00 UTC

2014-02-24 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday February 25th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack and GSoC 2014

2014-02-24 Thread Davanum Srinivas
Hi all,

We're in! Just got notified by Admin Team that our Organization
Application has been accepted. I've updated the etherpad with the full
responses from them.

https://etherpad.openstack.org/p/gsoc2014orgapp

thanks,
dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Missing tests

2014-02-24 Thread Martins, Tiago
Hi!
I'm sorry it took me this long to answer you.
In the unit test fixtures, there must be somewhere you can add the
extensions to load them, so their tests won't break. Could you send me a link
to your patch in gerrit?

From: Vinod Kumar Boppanna [mailto:vinod.kumar.boppa...@cern.ch]
Sent: segunda-feira, 24 de fevereiro de 2014 09:47
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Missing tests

Hi,

I uploaded the code for Domain Quota Management to Gerrit. One of the tests
is failing due to the missing tests for the following extensions.

Extensions are missing tests: ['os-extended-hypervisors', 
'os-extended-services-delete']

What can I do now? (These extensions were not done by me.)

Regards,
Vinod Kumar Boppanna

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA VRRP concerns

2014-02-24 Thread Salvatore Orlando
Hi Assaf,

some comments inline.
As a general comment, I'd prefer to move all the discussions to gerrit
since the patches are now in review.
This unless you have design concerns (the ones below look more related to
the implementation to me)

Salvatore


On 24 February 2014 15:58, Assaf Muller  wrote:

> Hi everyone,
>
> A few concerns have popped up recently about [1] which I'd like to share
> and discuss,
> and would love to hear your thoughts Sylvain.
>
> 1) Is there a way through the API to know, for a given router, what agent
> is hosting
> the active instance? This might be very important for admins to know.
>

I reckon the current agent management extension already provides this
information, but I'll double check this.
This is an admin-only extension.


>
> 2) The current approach is to create an administrative network and subnet
> for VRRP traffic per router group /
> per router. Is this network counted in the quota for the tenant? (Clearly
> it shouldn't). Same
> question for the HA ports created for each router instance.
>

That is a good point. I have not reviewed the implementation so I cannot
provide a final answer.
I think it should be possible to assign them to admins rather than tenants; if
not I would consider this an important enhancement, but I would not hold
progress on the patches currently on review because of this.


> 3) The administrative network is created per router and takes away from
> the VLAN ranges if using
> VLAN tenant networks (For a tunneling based deployment this is a
> non-issue). Maybe we could
> consider a change that creates an administrative network per tenant (Which
> would then limit
> the solution to up to 255 routers because of VRRP's group limit), or an
> admin network per 255
> routers?
>

I am not able to comment on this question. I'm sure the author(s) will be
able to.


>
> 4) Maybe the VRRP hello and dead times should be configurable? I can see
> admins who would love to
> tune these numbers up or down.
>
>
I reckon this is a reasonable thing to have. This could be either pointed out
in the reviews or pushed as an additional change on top of the other ones
in review.


> 5) The administrative / VRRP networks, subnets and ports that are created
> - Will they be marked in any way
> as an 'internal' network or some equivalent tag? Otherwise they'd show up
> when running neutron net-list,
> in the Horizon networks listing as well as the graphical topology drawing
> (Which, personally, is what
> bothers me most about this). I'd love them tagged and hidden from the
> normal net-list output,
> and something like a 'neutron net-list --all' introduced.
>

I agree this should be avoided; this is also connected to the point you
raised at #2.


>
> 6) The IP subnet chosen for VRRP traffic is specified in neutron.conf. If
> a tenant creates a subnet
> with the same range, and attaches a HA router to that subnet, the
> operation will fail as the router
> cannot have different interfaces belonging to the same subnet. Nir
> suggested to look into using
> the 169.254.0.0/16 range as the default because we know it will
> (hopefully) not be allocated by tenants.
>

We adopted a similar approach in the NSX plugin for a service network which
the plugin uses for metadata access.
In that case we used the link-local network, but perhaps an easier solution
would be to make the cidr specified in neutron.conf "reserved" thus
preventing tenants from specifying subnets overlapping with this range in
the first place.
I reckon the link-local range is a good candidate for the default value.


>
> [1] https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>
>
> Assaf Muller, Cloud Networking Engineer
> Red Hat
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Lance D Bragstad

Response below.


Best Regards,

Lance Bragstad
ldbra...@us.ibm.com

Nader Lahouti  wrote on 02/24/2014 11:31:10 AM:

> From: Nader Lahouti 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 02/24/2014 11:37 AM
> Subject: Re: [openstack-dev] [keystone] Notification When Creating/
> Deleting a Tenant in openstack
>
> Hi Swann,
>
> I was able to listen to keystone notifications by setting
> notifications in the keystone.conf file. I only needed the
> notifications (CRUD) for projects and handle them in my plugin code so
> I don't need ceilometer to handle them.
> The other issue is that the notification is limited to the
> resource_id and doesn't have other information such as the project name.

The idea behind this when we originally implemented notifications in
Keystone was to provide the resource being changed, such as 'user',
'project', 'trust', and the uuid of that resource. From there your plugin
could request more information from Keystone by doing a GET on that
resource. This way we could keep the payload of the notification minimal
in case all the information on the resource wasn't required.
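(A minimal sketch of that flow - the 'resource_info' payload key follows
the documented notification format, while the callback wiring and the
admin token/endpoint here are illustrative assumptions:)

from keystoneclient.v3 import client

def on_identity_event(event_type, payload):
    if event_type == 'identity.project.created':
        # The notification carries only the resource id...
        project_id = payload['resource_info']
        # ...so GET the project to recover the rest (name, etc.).
        keystone = client.Client(token='ADMIN_TOKEN',
                                 endpoint='http://keystone:35357/v3')
        project = keystone.projects.get(project_id)
        print(project.name)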

>
> Thanks,
> Nader.
>
>

> On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset  wrote:
>
> Hi Nader,
>
> These notifications must be handled by Ceilometer like the others [1].
> It is surprising that it does not already have identity meters, indeed...
> probably nobody needed them before you.
> I guess it remains to open a BP and code them like I recently did for
Heat [2]
>
>
> http://docs.openstack.org/developer/ceilometer/measurements.html
>
https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
>

> 2014-02-20 19:10 GMT+01:00 Nader Lahouti :
>
> Thanks Dolph for link. The document shows the format of the message
> and doesn't give any info on how to listen to the notification.
> Is there any other document showing the detail on how to listen or
> get these notifications ?
>
> Regards,
> Nader.
>
> On Feb 20, 2014, at 9:06 AM, Dolph Mathews 
wrote:

> Yes, see:
>
>   http://docs.openstack.org/developer/keystone/event_notifications.html
>
> On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti  > wrote:
> Hi All,
>
> I have a question regarding creating/deleting a tenant in openstack
> (using horizon or CLI). Is there any notification mechanism in place
> so that an application get informed of such an event?
>
> If not, can it be done using plugin to send create/delete
> notification to an application?
>
> Appreciate your suggestion and help.
>
> Regards,
> Nader.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Randy Tuttle
Thanks guys.

Yes, Ben, I can see oslo.config installed in the tox sub-directory. I will try
to wipe tox out and try again. You are right though, the tox.ini only notes
site-packages for Jenkins.

Sean, I think your first email response might be right. I am running on a
Mac instead of an Ubuntu box. I think, based on my research on this, that the
last module (or even a series of them) may not have loaded, and this is
borne out when I try the import. Here's the thread I've been reading.

https://bugs.launchpad.net/nova/+bug/1271097

Cheers


On Mon, Feb 24, 2014 at 1:05 PM, Ben Nemec  wrote:

>  On 2014-02-24 09:02, Randy Tuttle wrote:
>
>Has anyone experienced this issue when running tox. I'm trying to
> figure if this is some limit of tox environment or something else. I've
> seen this referenced in other projects, but can't seem to zero in on a
> proper fix.
>
> tox -e py27
>
> [...8><...snip a lot]
>
> neutron.tests.unit.test_routerserviceinsertion\nneutron.tests.unit.test_security_groups_rpc\nneutron.tests.unit.test_servicetype=\xc1\xf1\x19',
> stderr=None
> error: testr failed (3)
> ERROR: InvocationError:
> '/Users/rtuttle/projects/neutron/.tox/py27/bin/python -m
> neutron.openstack.common.lockutils python setup.py testr --slowest
> --testr-args='
> __ summary
> __
> ERROR:   py27: commands failed
>
> It seems that what it may be complaining about is a missing oslo.config.
> If I try to load the final module noted from above (i.e.,
> neutron.tests.unit.test_servicetype), I get an error about the missing
> module.
>
> Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import neutron.tests.unit.test_servicetype
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "neutron/tests/unit/__init__.py", line 20, in 
> from oslo.config import cfg
> ImportError: No module named oslo.config
>
> Cheers,
> Randy
>
> We hit a similar problem in some of the other projects recently, but it
> doesn't look like that applies to Neutron because it isn't using
> site-packages in its tox runs anyway.  The first thing I would check is
> whether oslo.config is installed in the py27 tox venv.  It might be a good
> idea to just wipe your .tox directory and start fresh if you haven't done
> that recently.
>
> -Ben
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications

2014-02-24 Thread Alexander Tivelkov
Hi Stan,

It is good that we are on a common ground here :)

Of course this can be done by Heat. In fact - it will be, in the very same
manner as it always was, I am pretty sure we've discussed this many times
already. When Heat Software config is fully implemented, it will be
possible to use it instead of our Agent execution plans for software
configuration - in the very same manner as we use "regular" heat templates
for resource allocation.

Heat does indeed support template composition - but we don't want our
end-users to have to learn how to do that: we want them just to combine
existing applications at a higher level. Murano will use the template
composition under the hood, but only in the way designed by the application
publisher. If the publisher has decided to configure the software using
Heat Software Config, then this option will be used. If some other
(probably legacy) way of doing this is preferred, Murano should be able to
support that and allow such workflows to be created.

Also, there may be workflow steps which are not covered by Heat by design.
For example, an application publisher may include creating instance
snapshots, data migrations, backups etc. in the deployment or maintenance
workflows. I don't see how these may be done by Heat, while Murano should
definitely support these scenarios.

So, in conclusion, Murano should not be thought of as a Heat alternative:
it is a different tool located on a different layer of the stack, aimed at
a different user audience - and, most importantly, using Heat underneath.


--
Regards,
Alexander Tivelkov


On Mon, Feb 24, 2014 at 8:36 PM, Stan Lagun  wrote:

> Hi Alex,
>
> Personally I like the approach and how you explain it. I just would like
> to know your opinion on how this is better than someone writing a Heat
> template that creates Active Directory, let's say with one primary and one
> secondary controller, and then publishing it somewhere. Since Heat does
> support software configuration as of late and has the concept of
> environments [1] - which Steven Hardy generously pointed out in another
> mailing thread can be used for composition as well - it seems like
> everything you said can be done by Heat alone.
>
> [1]:
> http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html
>
>
> On Mon, Feb 24, 2014 at 7:51 PM, Alexander Tivelkov <
> ativel...@mirantis.com> wrote:
>
>> Sorry folks, I didn't put the proper image url. Here it is:
>>
>>
>> https://creately.com/diagram/hrxk86gv2/kvbckU5hne8C0r0sofJDdtYgxc%3D
>>
>>
>> --
>> Regards,
>> Alexander Tivelkov
>>
>>
>> On Mon, Feb 24, 2014 at 7:39 PM, Alexander Tivelkov <
>> ativel...@mirantis.com> wrote:
>>
>>> Hi,
>>>
>>> I would like to initiate one more discussion about an approach we
>>> selected to solve a particular problem in Murano.
>>>
>>> The problem statement is the following: We have multiple entities, like
>>> low-level resources and high-level application definitions. Each entity
>>> performs some specific actions, for example creating a VM or deploying
>>> an application configuration. We want each entity's workflow code to be
>>> reusable in order to simplify development of a new application, as the
>>> current approach with XML-based rules requires significant effort.
>>>
>>> After internal discussions inside the Murano team we came to a solution
>>> which uses well-known programming concepts - classes, their inheritance
>>> and composition.
>>>
>>> In this thread I would like to share our ideas and discuss the
>>> implementation details.
>>>
>>> We want to represent each and every entity being manipulated by Murano,
>>> as an instance of some "class". These classes will define structure of the
>>> entities and their behavior. Different entities may be combined together,
>>> interacting with each other, forming a composite environment. The
>>> inheritance may be used to extract common structure and functionality into
>>> generic superclasses, while having their subclasses to define only their
>>> specific attributes and actions.
>>>
>>> This approach is best explained with an example. Let's consider the
>>> Active Directory windows service. This is one of the currently present
>>> Murano Applications, and its structure and deployment workflow is pretty
>>> complex. Let's see how it may be simplified by using the proposed
>>> object-oriented approach.
>>>
>>> First, let's just describe an Active Directory service in plain English.
>>>
>>> Active Directory service consists of several Controllers: exactly one
>>> Primary Domain Controller and, optionally, several Secondary Domain
>>> Controllers. Controllers (both primary and Secondary) are special Windows
>>> Instances, having an active directory server role activated. Their specific
>>> difference is in the configuration scripts which are executed on them after
>>> the roles are activated. Also, Secondary Domain Controllers have the
>>> ability to join a domain, while the Primary Domain Controller cannot.
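(Sketching that layout in plain Python purely for illustration - this is
not Murano's actual definition language, and the method and script names
are made up:)

class WindowsInstance(object):
    def deploy(self):
        pass  # allocate the VM, e.g. via a Heat template snippet

    def activate_role(self, role):
        pass  # placeholder for an agent execution plan

    def run_script(self, script):
        pass  # placeholder for a configuration script


class DomainController(WindowsInstance):
    """Common superclass: activate the AD role, then configure."""
    def deploy(self):
        super(DomainController, self).deploy()
        self.activate_role('AD-Domain-Services')
        self.configure()


class PrimaryDomainController(DomainController):
    def configure(self):
        self.run_script('create_domain.ps1')  # creates the new domain


class SecondaryDomainController(DomainController):
    def configure(self):
        self.run_script('join_domain.ps1')    # joins the existing domain


class ActiveDirectory(object):
    """Composition: exactly one primary, any number of secondaries."""
    def __init__(self, primary, secondaries=()):
        self.primary = primary
        self.secondaries = list(secondaries)

    def deploy(self):
        self.primary.deploy()
        for controller in self.secondaries:
            controller.deploy()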
>>>
>>> Window

Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Ben Nemec
 

On 2014-02-24 09:02, Randy Tuttle wrote: 

> Has anyone experienced this issue when running tox. I'm trying to figure if 
> this is some limit of tox environment or something else. I've seen this 
> referenced in other projects, but can't seem to zero in on a proper fix.
> 
> tox -e py27
> 
> [...8><...snip a lot]
> 
> neutron.tests.unit.test_routerserviceinsertion\nneutron.tests.unit.test_security_groups_rpc\nneutron.tests.unit.test_servicetype=\xc1\xf1\x19',
>  stderr=None
> error: testr failed (3)
> ERROR: InvocationError: '/Users/rtuttle/projects/neutron/.tox/py27/bin/python 
> -m neutron.openstack.common.lockutils python setup.py testr --slowest 
> --testr-args='
> __ summary 
> __
> ERROR: py27: commands failed
> 
> It seems that what it may be complaining about is a missing oslo.config. If I 
> try to load the final module noted from above (i.e., 
> neutron.tests.unit.test_servicetype), I get an error about the missing module.
> 
> Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45) 
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import neutron.tests.unit.test_servicetype
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "neutron/tests/unit/__init__.py", line 20, in <module>
> from oslo.config import cfg
> ImportError: No module named oslo.config
> 
> Cheers, Randy

We hit a similar problem in some of the other projects recently, but it
doesn't look like that applies to Neutron because it isn't using
site-packages in its tox runs anyway. The first thing I would check is
whether oslo.config is installed in the py27 tox venv. It might be a
good idea to just wipe your .tox directory and start fresh if you
haven't done that recently. 
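For example (illustrative shell commands):

  .tox/py27/bin/pip freeze | grep oslo.config   # is it in the venv?
  rm -rf .tox && tox -e py27                    # rebuild the venv from scratch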

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-24 Thread gustavo panizzo
On 02/24/2014 01:10 AM, Liuji (Jeremy) wrote:
> Hi, Boris and all other guys:
>
> I have found a BP about USB device passthrough in 
> https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough. 
> I have also read the latest nova code and make sure it doesn't support USB 
> passthrough by now.
>
> Are there any progress or plan for USB passthrough?
use usbip, it works today and is awesome!

http://usbip.sourceforge.net/
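For example (the command syntax varies between usbip versions, so treat
this as a rough illustration; these flags match the tools shipped with
recent Linux kernels):

on the host that owns the device:
  usbipd -D                        # start the usbip daemon
  usbip bind -b 1-1                # export the device with bus id 1-1

on the machine that wants the device:
  usbip list -r <host>             # list devices exported by <host>
  usbip attach -r <host> -b 1-1    # attach it locally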

>
>
> Thanks,
> Jeremy Liu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Folks,

So far everyone agrees that the model should be pure logical, but no one
has come up with the API and meaningful implementation details (at least
at the idea level) of such an object model.
As I've pointed out, the 'pure logical' object model has some API and user
experience inconsistencies that we need to sort out before we implement it.
I'd like to see real details proposed for such a 'pure logical' object model.

Let's also consider the cost of the change - it's easier to do it gradually
than rewrite it from scratch.

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 9:36 PM, Samuel Bercovici wrote:

>  Hi,
>
>
>
> I also agree that the model should be pure logical.
>
> I think that the existing model is almost correct but the pool should be
> made pure logical. This means that the vip <-> pool relationship needs
> also to become any-to-any.
>
> Eugene has rightfully pointed out that the current "state" management will
> not handle such a relationship well.
>
> To me this means that the "state" management is broken and not the model.
>
> I will propose an update to the state management in the next few days.
>
>
>
> Regards,
>
> -Sam.
>
>
>
>
>
>
>
>
>
> *From:* Mark McClain [mailto:mmccl...@yahoo-inc.com]
> *Sent:* Monday, February 24, 2014 6:32 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion
>
>
>
>
>
> On Feb 21, 2014, at 1:29 PM, Jay Pipes  wrote:
>
>
>
>  I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
>
>
> I agree with Jay.  The API needs to be user-centric and free of
> implementation details.  One of the concerns I've voiced in some of the IRC
> discussions is that too many implementation details are exposed to the user.
>
>
>
> mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ilo driver need to submit a code change in nova ironic driver

2014-02-24 Thread Chris K
Hi Barmawer,

Currently the Ironic Nova driver is blocked from merging. The Ironic team
is working on getting all the pieces in place for our C.I. testing. At this
point I would say your best path is to create your patch with 51328 as a
dependency. Please note that the nova driver will most likely be going
through several more revisions as we get closer. This will mean that your
dependent patch will need to be rebased as new Nova driver patches are
pushed up. This is very common; I am just pointing it out so that you can
keep an eye out for the "[OUTDATED]" tag on the review. Also please tag
your dependent patch with "implements bp:deprecate-baremetal-driver"; this
will ensure your patch is added to the Blueprint, and make it clear that it
is part of the deprecate-baremetal-driver patch set.


Chris Krelle


On Mon, Feb 24, 2014 at 6:05 AM, Faizan Barmawer
wrote:

> Hi All,
>
> I am currently working on the ilo driver for the ironic project.
> As part of this implementation, and to integrate with the nova ironic
> driver (https://review.openstack.org/#/c/51328/), we need to make changes
> to the "driver.py" and "ironic_driver_fields.py" files, to pass down
> ilo-driver-specific fields to the ironic node. Since the nova ironic
> driver code review is still in progress and not yet integrated into
> openstack, we have not included this piece of code in the ilo driver code
> review patch (https://review.openstack.org/#/c/73787/).
>
> We need your suggestion on delivering this part of ilo driver code change
> in nova ironic driver.
> - Should we wait for the completion of nova ironic driver and then raise a
> defect to submit these changes? or
> - should we raise a defect now and submit for review, giving the
> dependency on the nova ironic driver review? or
> - Can we use the existing blueprint for ilo driver to raise a separate
> review for this code change giving nova ironic driver as dependency?
>
> Please suggest a better way of delivering these changes.
>
> Thanks & Regards,
> Barmawer
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Matt Riedemann



On 2/24/2014 10:13 AM, Russell Bryant wrote:

On 02/24/2014 01:50 AM, Christopher Yeoh wrote:

Hi,

There has recently been some speculation around the V3 API and whether
we should go forward with it or instead backport many of the changes
to the V2 API. I believe that the core of the concern is the extra
maintenance and test burden that supporting two APIs means and the
length of time before we are able to deprecate the V2 API and return
to maintaining only one (well two including EC2) API again.


Yes, this is a major concern.  It has taken an enormous amount of work
to get to where we are, and v3 isn't done.  It's a good time to
re-evaluate whether we are on the right path.

The more I think about it, the more I think that our absolute top goal
should be to maintain a stable API for as long as we can reasonably do
so.  I believe that's what is best for our users.  I think if you gave
people a choice, they would prefer an inconsistent API that works for
years over dealing with non-backwards compatible jumps to get a nicer
looking one.

The v3 API and its unit tests are roughly 25k lines of code.  This also
doesn't include the changes necessary in novaclient or tempest.  That's
just *our* code.  It explodes out from there into every SDK, and then
end user apps.  This should not be taken lightly.


This email is rather long so here's the TL;DR version:

- We want to make backwards incompatible changes to the API
   and whether we do it in-place with V2 or by releasing V3
   we'll have some form of dual API support burden.
   - Not making backwards incompatible changes means:
 - retaining an inconsistent API


I actually think this isn't so bad, as discussed above.


 - not being able to fix numerous input validation issues


I'm not convinced, actually.  Surely we can do a lot of cleanup here.
Perhaps you have some examples of what we couldn't do in the existing API?

If it's a case of wanting to be more strict, some would argue that the
current behavior isn't so bad (see robustness principle [1]):

 "Be conservative in what you do, be liberal in what you accept from
 others (often reworded as "Be conservative in what you send, be
 liberal in what you accept")."

There's a decent counter argument to this, too.  However, I still fall
back on it being best to just not break existing clients above all else.


 - have to forever proxy for glance/cinder/neutron with all
   the problems that entails.


I don't think I'm as bothered by the proxying as others are.  Perhaps
it's not architecturally pretty, but it's worth it to maintain
compatibility for our users.


+1 to this, I think this is also related to what Jay Pipes is saying in 
his reply:


"Whether a provider chooses to, for example,
deploy with nova-network or Neutron, or Xen vs. KVM, or support block
migration for that matter *should have no effect on the public API*. The
fact that those choices currently *do* effect the public API that is
consumed by the client is a major indication of the weakness of the API."

As a consumer, I don't want to have to know which V2 APIs work and which
don't depending on whether I'm using nova-network or Neutron.





   - Backporting V3 infrastructure changes to V2 would be a
 considerable amount of programmer/review time


Agreed, but so is the ongoing maintenance and development of v3.



- The V3 API as-is has:
   - lower maintenance
   - is easier to understand and use (consistent).
   - Much better input validation which is baked-in (json-schema)
 rather than ad-hoc and incomplete.


So here's the rub ... with the exception of the consistency bits, none
of this is visible to users, which makes me think we should be able to
do all of this on v2.


- Whilst we have existing users of the API we also have a lot more
   users in the future. It would be much better to allow them to use
   the API we want to get to as soon as possible, rather than trying
   to evolve the V2 API and forcing them along the transition that they
   could otherwise avoid.


I'm not sure I understand this.  A key point is that I think any
evolving of the V2 API has to be backwards compatible, so there's no
forcing them along involved.


- We already have feature parity for the V3 API (nova-network being
   the exception due to the very recent unfreezing of it), novaclient
   support, and a reasonable transition path for V2 users.

- Proposed way forward:
   - Release the V3 API in Juno with nova-network and tasks support
   - Feature freeze the V2 API when the V3 API is released
 - Set the timeline for deprecation of V2 so users have a lot
   of warning
 - Fallback for those who really don't want to move after
   deprecation is an API service which translates between V2 and V3
   requests, but removes the dual API support burden from Nova.


One of my biggest principles with a new API is that we should not have
to force a migration with a strict timeline like this.  If we haven't
built something compelling enough to get people to *want* to migrate as
soon as they are able, then we haven't done our job.  Deprecation of the
old thing should only be done when we feel it's no longer wanted or used
by the vast majority.  I just don't see that happening any time soon.
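
As a rough illustration of the translation-service fallback mentioned in
the proposal above, a hypothetical WSGI shim could rewrite V2-style request
bodies into V3 form. The adminPass -> admin_password rename is one real
V2/V3 difference; everything else here is an illustrative sketch, not a
design:

    import json
    from StringIO import StringIO

    RENAMES = {'adminPass': 'admin_password'}  # known V2 -> V3 rename

    def translate(doc):
        # Recursively apply key renames to a decoded JSON document.
        if isinstance(doc, dict):
            return dict((RENAMES.get(k, k), translate(v))
                        for k, v in doc.items())
        if isinstance(doc, list):
            return [translate(v) for v in doc]
        return doc

    class V2CompatMiddleware(object):
        """Rewrites V2-style bodies into V3 form before the API sees them."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            length = int(environ.get('CONTENT_LENGTH') or 0)
            if length:
                raw = environ['wsgi.input'].read(length)
                body = json.dumps(translate(json.loads(raw)))
                environ['wsgi.input'] = StringIO(body)
                environ['CONTENT_LENGTH'] = str(len(body))
            return self.app(environ, start_response)

A real shim would also have to map URLs, response bodies and status codes,
which is where most of the effort would go.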

Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-24 Thread yunhong jiang
On Mon, 2014-02-24 at 04:10 +, Liuji (Jeremy) wrote:
> I have found a BP about USB device passthrough in
> https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough. 
> I have also read the latest nova code and made sure it doesn't support
> USB passthrough as of now.
> 
> Is there any progress or plans for USB passthrough?

I don't know of anyone working on USB passthrough.

--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Samuel Bercovici
Hi,

I also agree that the model should be purely logical.
I think that the existing model is almost correct, but the pool should be
made purely logical. This means that the vip <-> pool relationship also
needs to become any-to-any.
Eugene has rightly pointed out that the current "state" management will not
handle such a relationship well.
To me this means that the "state" management is broken, not the model.
I will propose an update to the state management in the next few days.
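
For illustration, a purely logical model with an any-to-any vip/pool
association could be as simple as the sketch below; this is just the shape
of the relationship, not a proposed Neutron schema:

    class Vip(object):
        def __init__(self, vip_id, address):
            self.id = vip_id
            self.address = address
            self.pools = []            # any number of pools per VIP

    class Pool(object):
        def __init__(self, pool_id, lb_method):
            self.id = pool_id
            self.lb_method = lb_method
            self.vips = []             # any number of VIPs per pool
            self.members = []

    def associate(vip, pool):
        # The association is pure logic; no backend state is implied here.
        vip.pools.append(pool)
        pool.vips.append(vip)

The "state" management then becomes the separate problem of mapping these
logical objects onto backends.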

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes 
mailto:jaypi...@gmail.com>> wrote:


I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of
implementation details.  One of my concerns, which I've voiced in some of
the IRC discussions, is that too many implementation details are exposed to
the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-24 Thread Nader Lahouti
Hi Swann,

I was able to listen to keystone notifications by setting up notifications in
the keystone.conf file. I only needed the (CRUD) notifications for projects,
and I handle them in my plugin code, so I don't need ceilometer to handle
them. The other issue is that the notification is limited to the resource_id
and doesn't carry other information such as the project name.
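
For anyone else who needs this, roughly what the listener looks like (a
simplified sketch using oslo.messaging; the 'notifications' topic and the
identity.project.* event type names are defaults as I understand them, so
treat them as assumptions):

    from oslo.config import cfg
    from oslo import messaging

    class ProjectEndpoint(object):
        # Invoked for notifications sent at the "info" priority.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type.startswith('identity.project.'):
                # The payload currently only carries the resource_id.
                print('%s %s' % (event_type, payload.get('resource_id')))

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [ProjectEndpoint()])
    listener.start()
    listener.wait()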


Thanks,
Nader.




On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset  wrote:

>
> Hi Nader,
>
> These notifications must be handled by Ceilometer like the others [1].
> It is indeed surprising that it does not already have identity meters...
> probably nobody needed them before you.
> I guess it remains to open a BP and code them, like I recently did for
> Heat [2]
>
>
> http://docs.openstack.org/developer/ceilometer/measurements.html
> https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
>
>
> 2014-02-20 19:10 GMT+01:00 Nader Lahouti :
>
> Thanks Dolph for link. The document shows the format of the message and
>> doesn't give any info on how to listen to the notification.
>> Is there any other document showing the detail on how to listen or get
>> these notifications ?
>>
>> Regards,
>> Nader.
>>
>> On Feb 20, 2014, at 9:06 AM, Dolph Mathews 
>> wrote:
>>
>> Yes, see:
>>
>>   http://docs.openstack.org/developer/keystone/event_notifications.html
>>
>> On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti 
>> wrote:
>>
>>> Hi All,
>>>
>>> I have a question regarding creating/deleting a tenant in openstack
>>> (using horizon or CLI). Is there any notification mechanism in place so
>>> that an application get informed of such an event?
>>>
>>> If not, can it be done using plugin to send create/delete notification
>>> to an application?
>>>
>>> Appreciate your suggestion and help.
>>>
>>> Regards,
>>> Nader.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-24 Thread Eugene Nikanorov
Mark,

I'm not sure I understand what the implementation details are in the workflow
I proposed in the email above; could you point them out?

Thanks,
Eugene.



On Mon, Feb 24, 2014 at 8:31 PM, Mark McClain wrote:

>
>  On Feb 21, 2014, at 1:29 PM, Jay Pipes  wrote:
>
> I disagree on this point. I believe that the more implementation details
> bleed into the API, the harder the API is to evolve and improve, and the
> less flexible the API becomes.
>
> I'd personally love to see the next version of the LBaaS API be a
> complete breakaway from any implementation specifics and refocus itself
> to be a control plane API that is written from the perspective of the
> *user* of a load balancing service, not the perspective of developers of
> load balancer products.
>
>
> I agree with Jay.  The API needs to be user-centric and free of
> implementation details.  One of my concerns, which I've voiced in some of
> the IRC discussions, is that too many implementation details are exposed
> to the user.
>
>  mark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler sub-group meeting tomorrow (2/25)

2014-02-24 Thread Dugger, Donald D
All-

I'm tempted to cancel the gantt meeting for tomorrow.  The only topics I have 
are the no-db scheduler update (we can probably do that via email) and the 
gantt code forklift (I've been out with the flu and there's no progress on 
that).

I'm willing to chair but I'd like to have some specific topics to talk about.

Suggestions anyone?

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-24 Thread Chris Friesen

On 02/20/2014 11:38 AM, Matt Riedemann wrote:



On 2/19/2014 4:05 PM, Matt Riedemann wrote:

The os-hosts OS API extension [1] showed up before I was working on the
project and I see that only the VMware and XenAPI drivers implement it,
but was wondering why the libvirt driver doesn't - either no one wants
it, or there is some technical reason behind not implementing it for
that driver?

[1]
http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html



By the way, am I missing something in thinking that this extension is
already covered if you're:

1. Looking to get the node out of the scheduling loop, you can just
disable it with os-services/disable?

2. Looking to evacuate instances off a failed host (or one that's in
"maintenance mode"), just use the evacuate server action.


In compute/api.py the API.evacuate() routine errors out if
self.servicegroup_api.service_is_up(service) is true, which means that
you can't evacuate from a compute node whose service is still up (merely
"disabled"); you need to migrate instead.


So, the alternative is basically to disable the service, then get a list 
of all the servers on the compute host, then kick off the migration 
(either cold or live) of each of the servers.  Then because migration 
uses a "cast" instead of a "call" you need to poll all the migrations 
for success or late failures.  Once you have no failed migrations and no 
servers running on the host then you're good.
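
In code, that manual workflow comes out roughly like the sketch below
(python-novaclient, with the polling condition deliberately simplified;
a real version would track the individual migrations):

    import time

    def drain_host(nova, host):
        # 1. Take the node out of the scheduling loop.
        nova.services.disable(host, 'nova-compute')

        # 2. Kick off a live migration of every server on the host.
        #    live_migrate() is a cast, so it returns immediately.
        for server in nova.servers.list(
                search_opts={'host': host, 'all_tenants': 1}):
            server.live_migrate()

        # 3. Poll until the host is empty or something fails late.
        while True:
            remaining = nova.servers.list(
                search_opts={'host': host, 'all_tenants': 1})
            if not remaining:
                return
            if any(s.status == 'ERROR' for s in remaining):
                raise RuntimeError('migration failed on %s' % host)
            time.sleep(5)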


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ERROR: InvocationError: when running tox

2014-02-24 Thread Collins, Sean
Sorry - fired off this e-mail without looking too closely at your log
output - I just saw the escape characters and the long lines from tox
and it reminded me of the last discussion we had about it. It's
probably not the same error as I was describing.

That's the tough thing that I strongly dislike about Testr - when it
fails, it fails spectacularly and it's very hard to determine what
happened, for mere idiots like myself.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes - 02/24/2014

2014-02-24 Thread Renat Akhmerov
Folks,

Thanks for joining us at #openstack-meeting. Here are the links to the meeting 
minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-24-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-24-16.00.log.html

Next meeting will be held on March 3. Looking forward to chat with you again.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Russell Bryant
On 02/24/2014 01:50 AM, Christopher Yeoh wrote:
> Hi,
> 
> There has recently been some speculation around the V3 API and whether
> we should go forward with it or instead backport many of the changes
> to the V2 API. I believe that the core of the concern is the extra
> maintenance and test burden that supporting two APIs means and the
> length of time before we are able to deprecate the V2 API and return
> to maintaining only one (well two including EC2) API again.

Yes, this is a major concern.  It has taken an enormous amount of work
to get to where we are, and v3 isn't done.  It's a good time to
re-evaluate whether we are on the right path.

The more I think about it, the more I think that our absolute top goal
should be to maintain a stable API for as long as we can reasonably do
so.  I believe that's what is best for our users.  I think if you gave
people a choice, they would prefer an inconsistent API that works for
years over dealing with non-backwards compatible jumps to get a nicer
looking one.

The v3 API and its unit tests are roughly 25k lines of code.  This also
doesn't include the changes necessary in novaclient or tempest.  That's
just *our* code.  It explodes out from there into every SDK, and then
end user apps.  This should not be taken lightly.

> This email is rather long so here's the TL;DR version:
> 
> - We want to make backwards incompatible changes to the API
>   and whether we do it in-place with V2 or by releasing V3
>   we'll have some form of dual API support burden.
>   - Not making backwards incompatible changes means:
> - retaining an inconsistent API

I actually think this isn't so bad, as discussed above.

> - not being able to fix numerous input validation issues

I'm not convinced, actually.  Surely we can do a lot of cleanup here.
Perhaps you have some examples of what we couldn't do in the existing API?

If it's a case of wanting to be more strict, some would argue that the
current behavior isn't so bad (see robustness principle [1]):

"Be conservative in what you do, be liberal in what you accept from
others (often reworded as "Be conservative in what you send, be
liberal in what you accept")."

There's a decent counter argument to this, too.  However, I still fall
back on it being best to just not break existing clients above all else.

> - have to forever proxy for glance/cinder/neutron with all
>   the problems that entails.

I don't think I'm as bothered by the proxying as others are.  Perhaps
it's not architecturally pretty, but it's worth it to maintain
compatibility for our users.

>   - Backporting V3 infrastructure changes to V2 would be a
> considerable amount of programmer/review time

Agreed, but so is the ongoing maintenance and development of v3.

> 
> - The V3 API as-is has:
>   - lower maintenance
>   - is easier to understand and use (consistent).
>   - Much better input validation which is baked-in (json-schema)
> rather than ad-hoc and incomplete.

So here's the rub ... with the exception of the consistency bits, none
of this is visible to users, which makes me think we should be able to
do all of this on v2.
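
For readers following along, the baked-in json-schema validation being
discussed looks roughly like this (an illustrative sketch; the schema is
made up for the example, not an actual Nova request definition):

    import jsonschema

    rebuild_schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
            'preserve_ephemeral': {'type': 'boolean'},
        },
        'required': ['name'],
        # The strict-vs-liberal knob from the robustness principle
        # discussion: reject unknown keys, or silently accept them.
        'additionalProperties': False,
    }

    def validate_body(body):
        try:
            jsonschema.validate(body, rebuild_schema)
        except jsonschema.ValidationError as e:
            # Would map to an HTTP 400 in the API layer.
            raise ValueError('Invalid request body: %s' % e.message)

Flipping additionalProperties to True gives the "liberal" behavior; the
point of json-schema is that the choice is declared once per request type
instead of scattered across ad-hoc checks.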

> - Whilst we have existing users of the API we also have a lot more
>   users in the future. It would be much better to allow them to use
>   the API we want to get to as soon as possible, rather than trying
>   to evolve the V2 API and forcing them along the transition that they
>   could otherwise avoid.

I'm not sure I understand this.  A key point is that I think any
evolving of the V2 API has to be backwards compatible, so there's no
forcing them along involved.

> - We already have feature parity for the V3 API (nova-network being
>   the exception due to the very recent unfreezing of it), novaclient
>   support, and a reasonable transition path for V2 users.
> 
> - Proposed way forward:
>   - Release the V3 API in Juno with nova-network and tasks support
>   - Feature freeze the V2 API when the V3 API is released
> - Set the timeline for deprecation of V2 so users have a lot
>   of warning
> - Fallback for those who really don't want to move after
>   deprecation is an API service which translates between V2 and V3
>   requests, but removes the dual API support burden from Nova.

One of my biggest principles with a new API is that we should not have
to force a migration with a strict timeline like this.  If we haven't
built something compelling enough to get people to *want* to migrate as
soon as they are able, then we haven't done our job.  Deprecation of the
old thing should only be done when we feel it's no longer wanted or used
by the vast majority.  I just don't see that happening any time soon.

We have a couple of ways forward right now.

1) Continue as we have been, and plan to release v3 once we have a
compelling enough feature set.

2) Take what we have learned from v3 and apply it to v2.  For example:

 - The plugin infrastructure is an internal implementation detail that
   
