Re: [openstack-dev] [cinder][qa] release notes for cinder v1 to v2?

2013-12-16 Thread Mike Perez
So the Grizzly API release notes are dead on. (I put them together and
have worked with both APIs ;) )

Unfortunately, a lot of the features in v2 got backported to v1. The main
differences are in the upgrade notes.


-Mike Perez


On Mon, Dec 16, 2013 at 9:57 PM, Zhi Kun Liu  wrote:

> It seems that there's no document about the change from v1 to v2. Maybe
> the change is very small.  Only found some info in OpenStack release notes.
>
> Cinder API v2
>
>- List volumes/snapshots summary actually is a summary view. In v1 it
>was the same as detail view.
>- List volumes/snapshots detail and summary has display_name key
>changed to name.
>- List volumes/snapshots detail and summary has display_description
>key changed to description.
>
>
>
> https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly#OpenStack_Block_Storage_.28Cinder.29
>
> https://wiki.openstack.org/wiki/ReleaseNotes/Havana#OpenStack_Block_Storage_.28Cinder.29
>
> 2013/12/17 David Kranz 
>
>> Sorry for the lost subject in my last message.
>>
>> Is there a document that describes the api changes from v1 to v2, similar
>> to the one documenting nova v2 to v3?
>>
>>  -David
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Regards,
> Zhi Kun Liu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread Mike Perez
I agree with Qin here that alternating might be a good option. I'm not
opposed to being present at both meetings, though.

-Mike Perez


On Mon, Dec 16, 2013 at 9:31 PM, Qin Zhao  wrote:

> Hi John,
>
> Yes, alternating the time for each week should be fine.  I just change my
> gmail name to English... I think you can see my name now...
>
>
>  On Tue, Dec 17, 2013 at 12:05 PM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>> On Mon, Dec 16, 2013 at 8:57 PM, 赵钦  wrote:
>> > Hi John,
>> >
>> > I think the current meeting schedule, UTC 16:00, basically works for
>> China
>> > TZ (12AM), although it is not perfect. If we need to reschedule, I
>> think UTC
>> > 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our
>> lunch
>> > time.
>> >
>> >
>> > On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
>> >  wrote:
>> >>
>> >> Hi All,
>> >>
>> >> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
>> >> some interest in either changing the weekly Cinder meeting time, or
>> >> proposing a second meeting to accommodate folks in other time-zones.
>> >>
>> >> A large number of folks are already in time-zones that are not
>> >> "friendly" to our current meeting time.  I'm wondering if there is
>> >> enough of an interest to move the meeting time from 16:00 UTC on
>> >> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
>> >> willing to look at either moving the meeting for a trial period or
>> >> holding a second meeting to make sure folks in other TZ's had a chance
>> >> to be heard.
>> >>
>> >> Let me know your thoughts, if there are folks out there that feel
>> >> unable to attend due to TZ conflicts and we can see what we might be
>> >> able to do.
>> >>
>> >> Thanks,
>> >> John
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> Hi Chaochin,
>>
>> Thanks for the feedback, I think the alternate time would have to be
>> moved up an hour or two anyway (between the lunch hour in your TZ and
>> the fact that it just moves the problem of being at midnight to the
>> folks in US Eastern TZ).  Also, I think if there is interest that a
>> better solution might be to implement something like the Ceilometer
>> team does and alternate the time each week.
>>
>> John
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Qin Zhao
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] Oslo.cfg resets not really resetting the CONF

2013-12-16 Thread Mark McLoughlin
On Tue, 2013-12-17 at 11:17 +0530, Amala Basha Alungal wrote:
> Hi Mark, Ben
> 
> 
> The reset() method in turn calls the clear() method, which does an
> unregister_opt(). However, unregister_opt() only unregisters the
> config_opts; the entire set of options inside _opts remains as is.
> We've filed a bug on the oslo end. 

Yes, that's working as designed.

Those two options are registered by __call__() so reset() unregisters
only them.

The idea is that you can register lots and then do __call__() and
reset() without affecting the registered options.

Mark.
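The contract Mark describes can be mimicked in a few lines of plain Python. This is a toy sketch of the documented semantics, not oslo.config itself (MiniConf and its method bodies are invented for illustration):

```python
# Toy model of the register/override/reset semantics described above:
# reset() drops runtime overrides, while registered options survive.
class MiniConf:
    def __init__(self):
        self._opts = {}        # name -> default; survives reset()
        self._overrides = {}   # runtime overrides; cleared by reset()

    def register_opt(self, name, default=None):
        self._opts[name] = default

    def unregister_opt(self, name):
        # The only way to actually remove a registered option.
        self._opts.pop(name, None)

    def set_override(self, name, value):
        self._overrides[name] = value

    def reset(self):
        # Clears overrides, but does NOT undo register_opt().
        self._overrides.clear()

    def __getattr__(self, name):
        if name in self._overrides:
            return self._overrides[name]
        if name in self._opts:
            return self._opts[name]
        raise AttributeError(name)

conf = MiniConf()
conf.register_opt('volume_driver', default='lvm')
conf.set_override('volume_driver', 'ceph')
assert conf.volume_driver == 'ceph'
conf.reset()
assert conf.volume_driver == 'lvm'   # override gone, option still registered
```

In other words, a test's tearDown that calls reset() gets its defaults back without having to re-register everything.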

> On Tue, Dec 17, 2013 at 5:27 AM, Mark McLoughlin 
> wrote:
> Hi
> 
> On Fri, 2013-12-13 at 14:14 +0530, Amala Basha Alungal wrote:
> > Hi,
> >
> >
> >
> > I stumbled into a situation today wherein I had to write a few
> > tests that modify the oslo.config.cfg and in turn reset the values
> > back in a tear down. According to the docs, oslo.cfg reset()
> > "Clears the object state and unsets overrides and defaults.", but
> > it doesn't seem to be happening, as the subsequent tests retain
> > these modified values and behave abnormally. The patch has been
> > submitted for review here.
> > Am I missing something obvious?
> 
> 
> From https://bugs.launchpad.net/oslo/+bug/1261376 :
> 
>   reset() will clear any values read from the command line or config
>   files and it will also remove any values set with set_default() or
>   set_override()
> 
>   However, it will not undo register_opt() - there is unregister_opt()
>   for that purpose
> 
> Maybe if you pushed a version of https://review.openstack.org/60188
> which uses reset() and explained how it's not working as you expected?
> 
> Thanks,
> Mark.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Thanks And Regards
> Amala Basha
> +91-7760972008



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] release notes for cinder v1 to v2?

2013-12-16 Thread Zhi Kun Liu
It seems that there's no document about the change from v1 to v2. Maybe the
change is very small.  I only found some info in the OpenStack release notes.

Cinder API v2

   - The list volumes/snapshots summary is now actually a summary view; in
   v1 it was the same as the detail view.
   - In the list volumes/snapshots detail and summary views, the
   display_name key is renamed to name.
   - In the list volumes/snapshots detail and summary views, the
   display_description key is renamed to description.


https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly#OpenStack_Block_Storage_.28Cinder.29
https://wiki.openstack.org/wiki/ReleaseNotes/Havana#OpenStack_Block_Storage_.28Cinder.29
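For client code, the renames above amount to a small key translation. A hedged sketch (illustrative field names only, not cinderclient code; real volume payloads carry many more keys):

```python
# The renames listed in the release notes, as a key translation.
V1_TO_V2 = {
    "display_name": "name",
    "display_description": "description",
}

def v1_volume_to_v2(v1_volume):
    """Rename v1 volume keys to their v2 equivalents."""
    return {V1_TO_V2.get(k, k): v for k, v in v1_volume.items()}

v1 = {"id": "vol-1", "display_name": "data",
      "display_description": "db disk"}
assert v1_volume_to_v2(v1) == {
    "id": "vol-1", "name": "data", "description": "db disk"}
```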

2013/12/17 David Kranz 

> Sorry for the lost subject in my last message.
>
> Is there a document that describes the api changes from v1 to v2, similar
> to the one documenting nova v2 to v3?
>
>  -David
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Zhi Kun Liu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] Oslo.cfg resets not really resetting the CONF

2013-12-16 Thread Amala Basha Alungal
Hi Mark, Ben

The reset() method in turn calls the *clear()* method, which does an
*unregister_opt()*. However, *unregister_opt()* only unregisters the
*config_opts*; the entire set of options inside *_opts* remains as is. We've
filed a bug on the oslo end.


On Tue, Dec 17, 2013 at 5:27 AM, Mark McLoughlin  wrote:

> Hi
>
> On Fri, 2013-12-13 at 14:14 +0530, Amala Basha Alungal wrote:
> > Hi,
> >
> >
> >
> > I stumbled into a situation today wherein I had to write a few tests that
> > modify the oslo.config.cfg and in turn reset the values back in a tear
> > down. According to the docs, oslo.cfg reset() "Clears the object state and
> > unsets overrides and defaults.", but it doesn't seem to be happening, as
> > the subsequent tests retain these modified values and
> > behave abnormally. The patch has been submitted for review
> > here.
> > Am I missing something obvious?
>
> From https://bugs.launchpad.net/oslo/+bug/1261376 :
>
>   reset() will clear any values read from the command line or config
>   files and it will also remove any values set with set_default() or
>   set_override()
>
>   However, it will not undo register_opt() - there is unregister_opt()
>   for that purpose
>
> Maybe if you pushed a version of https://review.openstack.org/60188
> which uses reset() and explained how it's not working as you expected?
>
> Thanks,
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks And Regards
Amala Basha
+91-7760972008
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread Qin Zhao
Hi John,

Yes, alternating the time for each week should be fine.  I just changed my
Gmail name to English... I think you can see my name now...


On Tue, Dec 17, 2013 at 12:05 PM, John Griffith  wrote:

> On Mon, Dec 16, 2013 at 8:57 PM, 赵钦  wrote:
> > Hi John,
> >
> > I think the current meeting schedule, UTC 16:00, basically works for
> China
> > TZ (12AM), although it is not perfect. If we need to reschedule, I think
> UTC
> > 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our lunch
> > time.
> >
> >
> > On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
> >  wrote:
> >>
> >> Hi All,
> >>
> >> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
> >> some interest in either changing the weekly Cinder meeting time, or
> >> proposing a second meeting to accommodate folks in other time-zones.
> >>
> >> A large number of folks are already in time-zones that are not
> >> "friendly" to our current meeting time.  I'm wondering if there is
> >> enough of an interest to move the meeting time from 16:00 UTC on
> >> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
> >> willing to look at either moving the meeting for a trial period or
> >> holding a second meeting to make sure folks in other TZ's had a chance
> >> to be heard.
> >>
> >> Let me know your thoughts, if there are folks out there that feel
> >> unable to attend due to TZ conflicts and we can see what we might be
> >> able to do.
> >>
> >> Thanks,
> >> John
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> Hi Chaochin,
>
> Thanks for the feedback, I think the alternate time would have to be
> moved up an hour or two anyway (between the lunch hour in your TZ and
> the fact that it just moves the problem of being at midnight to the
> folks in US Eastern TZ).  Also, I think if there is interest that a
> better solution might be to implement something like the Ceilometer
> team does and alternate the time each week.
>
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-16 Thread Isaku Yamahata
Added openstack-dev

On Mon, Dec 16, 2013 at 11:34:05PM +0100,
Erik Moe  wrote:

> Hi,
> 
> I have added a new document to the launchpad. Document should now be more
> in line with what we discussed at the Icehouse summit.
> 
> https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
> 
> Doc:
> https://docs.google.com/document/d/1lDJ31-XqkjjWC-IBq-_wV1KVhi7DzPkKYlCxTIPs_9U/edit?usp=sharing
> 
> You are very welcome to give feedback if this is the solution you had in
> mind.

The document is view-only. So I commented below.

- 2 Modeling proposal
  What's the purpose of the trunk network?
  Can you please add a use case in which the trunk network can't be
  optimized away?

- 4 IP address management
  nitpick:
  Can you please clarify what "the L2 gateway ports" are in section 2
  (modeling proposal), figure 1?

- Table 3
  Will this be the same as the l2-gateway one?
  https://blueprints.launchpad.net/neutron/+spec/l2-gateway

- Figure 5
  What's the purpose of the br-int local VID?
  The VID could be converted directly from the br-eth1 VID to the VM VID
  (untagged).

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Heat] Creating an Openstack project using Heat

2013-12-16 Thread Sayaji Patil
Thanks Steve.

Regards,
Sayaji


On Mon, Dec 16, 2013 at 2:57 PM, Steve Baker  wrote:

> On 12/17/2013 09:57 AM, Sayaji Patil wrote:
> > Hi,
> > I have installed Openstack with heat using packstack. One thing I
> > noticed
> > is that the "Orchestration Heat" option is only available inside a
> > project view.
> > Is this by design ?
> >
> Yes, heat stacks are scoped to a project/tenant.
> > My use case is to create a project with images, networks, routers and
> > firewall rules
> > in a single workflow. I looked at the documentation and at this point
> > there is no
> > resource available to create a project or upload an image.
> >
> >
> It wouldn't be hard to write a resource which creates a tenant/project,
> however there will be more changes required before the other resources
> in your stack can be created in the context of your new project. For now
> you need to create your project and user outside of heat.
>
> As for image upload, a glance resource could be written which registers
> an image from a URL. Feel free to file a blueprint for that describing
> your use case.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread Zhi Yan Liu
Hello John,

04:00 or 05:00 UTC works for me too.

On Tue, Dec 17, 2013 at 12:05 PM, John Griffith
 wrote:
> On Mon, Dec 16, 2013 at 8:57 PM, 赵钦  wrote:
>> Hi John,
>>
>> I think the current meeting schedule, UTC 16:00, basically works for China
>> TZ (12AM), although it is not perfect. If we need to reschedule, I think UTC
>> 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our lunch
>> time.
>>
>>
>> On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
>>  wrote:
>>>
>>> Hi All,
>>>
>>> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
>>> some interest in either changing the weekly Cinder meeting time, or
>>> proposing a second meeting to accommodate folks in other time-zones.
>>>
>>> A large number of folks are already in time-zones that are not
>>> "friendly" to our current meeting time.  I'm wondering if there is
>>> enough of an interest to move the meeting time from 16:00 UTC on
>>> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
>>> willing to look at either moving the meeting for a trial period or
>>> holding a second meeting to make sure folks in other TZ's had a chance
>>> to be heard.
>>>
>>> Let me know your thoughts, if there are folks out there that feel
>>> unable to attend due to TZ conflicts and we can see what we might be
>>> able to do.
>>>
>>> Thanks,
>>> John
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> Hi Chaochin,
>
> Thanks for the feedback, I think the alternate time would have to be
> moved up an hour or two anyway (between the lunch hour in your TZ and
> the fact that it just moves the problem of being at midnight to the
> folks in US Eastern TZ).  Also, I think if there is interest that a
> better solution might be to implement something like the Ceilometer
> team does and alternate the time each week.

Agreed; the Glance team does this as well.

zhiyan

>
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler sub-group agenda 12/17

2013-12-16 Thread Dugger, Donald D
1)  Memcached based scheduler updates

2)  Scheduler code forklift

3)  Instance groups

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread John Griffith
On Mon, Dec 16, 2013 at 8:57 PM, 赵钦  wrote:
> Hi John,
>
> I think the current meeting schedule, UTC 16:00, basically works for China
> TZ (12AM), although it is not perfect. If we need to reschedule, I think UTC
> 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our lunch
> time.
>
>
> On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
>  wrote:
>>
>> Hi All,
>>
>> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
>> some interest in either changing the weekly Cinder meeting time, or
>> proposing a second meeting to accommodate folks in other time-zones.
>>
>> A large number of folks are already in time-zones that are not
>> "friendly" to our current meeting time.  I'm wondering if there is
>> enough of an interest to move the meeting time from 16:00 UTC on
>> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
>> willing to look at either moving the meeting for a trial period or
>> holding a second meeting to make sure folks in other TZ's had a chance
>> to be heard.
>>
>> Let me know your thoughts, if there are folks out there that feel
>> unable to attend due to TZ conflicts and we can see what we might be
>> able to do.
>>
>> Thanks,
>> John
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi Chaochin,

Thanks for the feedback, I think the alternate time would have to be
moved up an hour or two anyway (between the lunch hour in your TZ and
the fact that it just moves the problem of being at midnight to the
folks in US Eastern TZ).  Also, I think if there is interest that a
better solution might be to implement something like the Ceilometer
team does and alternate the time each week.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread 赵钦
Hi John,

I think the current meeting schedule, UTC 16:00, basically works for China
TZ (12AM), although it is not perfect. If we need to reschedule, I think
UTC 05:00 is better than UTC 04:00, since UTC 04:00 (China 12PM) is our
lunch time.


On Tue, Dec 17, 2013 at 11:04 AM, John Griffith  wrote:

> Hi All,
>
> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
> some interest in either changing the weekly Cinder meeting time, or
> proposing a second meeting to accomodate folks in other time-zones.
>
> A large number of folks are already in time-zones that are not
> "friendly" to our current meeting time.  I'm wondering if there is
> enough of an interest to move the meeting time from 16:00 UTC on
> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
> willing to look at either moving the meeting for a trial period or
> holding a second meeting to make sure folks in other TZ's had a chance
> to be heard.
>
> Let me know your thoughts, if there are folks out there that feel
> unable to attend due to TZ conflicts and we can see what we might be
> able to do.
>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] weekly meeting

2013-12-16 Thread Huang Zhiteng
04:00 or 05:00 UTC works for me.

On Tue, Dec 17, 2013 at 11:04 AM, John Griffith
 wrote:
> Hi All,
>
> Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
> some interest in either changing the weekly Cinder meeting time, or
> proposing a second meeting to accommodate folks in other time-zones.
>
> A large number of folks are already in time-zones that are not
> "friendly" to our current meeting time.  I'm wondering if there is
> enough of an interest to move the meeting time from 16:00 UTC on
> Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
> willing to look at either moving the meeting for a trial period or
> holding a second meeting to make sure folks in other TZ's had a chance
> to be heard.
>
> Let me know your thoughts, if there are folks out there that feel
> unable to attend due to TZ conflicts and we can see what we might be
> able to do.
>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards
Huang Zhiteng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] weekly meeting

2013-12-16 Thread John Griffith
Hi All,

Prompted by a recent suggestion from Tom Fifield, I thought I'd gauge
some interest in either changing the weekly Cinder meeting time, or
proposing a second meeting to accommodate folks in other time-zones.

A large number of folks are already in time-zones that are not
"friendly" to our current meeting time.  I'm wondering if there is
enough of an interest to move the meeting time from 16:00 UTC on
Wednesdays, to 04:00 or 05:00 UTC?  Depending on the interest I'd be
willing to look at either moving the meeting for a trial period or
holding a second meeting to make sure folks in other TZ's had a chance
to be heard.

Let me know your thoughts, if there are folks out there that feel
unable to attend due to TZ conflicts and we can see what we might be
able to do.

Thanks,
John
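For anyone weighing the options, the trade-off is easy to eyeball by rendering each candidate hour in the affected zones. A quick sketch (the zone list and date are just examples):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

zones = ["Asia/Shanghai", "America/New_York", "America/Denver"]

# The three candidate meeting hours from this thread, on an arbitrary
# Wednesday, rendered in a few of the affected time zones.
for hour in (16, 4, 5):
    utc = datetime(2013, 12, 18, hour, tzinfo=timezone.utc)
    row = "  ".join(
        f"{z} {utc.astimezone(ZoneInfo(z)):%a %H:%M}" for z in zones)
    print(f"{hour:02d}:00 UTC -> {row}")
```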

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Jay Pipes

On 12/16/2013 08:39 PM, Adam Young wrote:


See the endpoint filtering blueprint from the Havana release as a
starting point.  I think the difference between that and what you have
here is that these endpoints should only show up in a subset of the
service catalogs returned?

https://github.com/openstack/keystone/commit/5dc50bbf0fb94506a06ae325d46bcf3ac1c4ad0a


Unfortunately, this functionality was added as an API extension that 
isn't enabled by default in the WSGI pipeline and, by the nature of it 
being an extension, isn't guaranteed to be the same across deployments :(


So, in order to use this functionality, Horizon will need to query for 
the existence of an OS-EP-FILTER extension to the Keystone API, and 
enable functionality based on this.


Yet another reason I hate the idea of API extensions.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Jay Pipes

On 12/16/2013 08:39 PM, Adam Young wrote:

On 12/16/2013 02:57 PM, Gabriel Hurley wrote:

I've run into a use case that doesn't currently seem to have a great
solution:


Let's say my users want to use a "top-of-stack" OpenStack project such
as Heat, Trove, etc. that I don't currently support in my deployment.
There's absolutely no reason these services can't live happily in a VM
talking to Nova, etc. via the normal APIs. However, in order to have a
good experience (Horizon integration, seamless CLI integration) the
service needs to be in the Service Catalog. One user could have their
service added to the catalog by an admin, but then everyone in the
cloud would be using their VM. And if you have multiple users all
doing the same thing in their own projects, you've got collisions!


So, I submit to you all that there is value in having a way to scope
Service Catalog entries to specific projects, and to allow users with
appropriate permissions on their project to add/remove those
project-level service catalog entries.

This could be accomplished in a number of ways:

   * Adding a new field to the model to store a Project ID.
   * Adding it in a standardized manner to "service metadata" as with
https://blueprints.launchpad.net/keystone/+spec/service-metadata
   * Adding it as an "additional requirement" as proposed by
https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-services

   * Use the existing Region field to track project scope as a hack.
   * Something else...

I see this as analogous to Nova's concept of per-project flavors, or
Glance's private/public/shared image capabilities. Allowing explicit
"sharing" would even be an interesting option for service endpoints.
It all depends how far we would want to go with it.

Feel free to offer feedback or other suggestions.

Thanks!

  - Gabriel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


See the endpoint filtering blueprint from the Havana release as a
starting point.  I think the difference between that and what you have
here is that these endpoints should only show up in a subset of the
service catalogs returned?

https://github.com/openstack/keystone/commit/5dc50bbf0fb94506a06ae325d46bcf3ac1c4ad0a


Also note the above is an admin API extension...

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multidomain User Ids

2013-12-16 Thread Adam Young

On 12/04/2013 12:35 PM, Henry Nash wrote:


On 4 Dec 2013, at 13:28, Dolph Mathews wrote:




On Sun, Nov 24, 2013 at 9:39 PM, Adam Young wrote:


The #1 pain point I hear from people in the field is that they
need to consume read-only LDAP but have service users in
something Keystone-specific.  We are close to having this, but we
have not closed the loop.  This was something that was Henry's to
drive home to completion.  Do we have a plan?  Federation depends
on this, I think, but this problem stands alone.


I'm still thinking through the idea of having keystone natively 
federate to itself out of the box, where keystone presents itself as 
an IdP (primarily for service users). It sounds like a simpler 
architectural solution than having to shuffle around code paths for 
both federated identities and local identities.



Two solutions:
1. Always require the domain ID along with the user id for role
assignments.


From an API perspective, how? (while still allowing for cross-domain 
role assignments)


2. Provide some way to parse from the user ID what domain it is.


I think you meant this one the other way around: Determine the domain 
given the user ID.



I was thinking that we could do something along the lines of 2,
where we provide a "domain specific user_id prefix". For example,
if there is just one LDAP service, and they wanted to prefix
anything out of LDAP with "ldap@", then an id would be "prefix" +
"field from LDAP". This would be configured on a per-domain
basis, and it would be optional.

The weakness is that it would be O(log N) to determine which
domain a user_id came from.  A better approach would be to use a
divider, like '@', so that the prefix would be the key for a
hashtable lookup.  Since it is optional, domains could still be
stored in SQL and user_ids could be UUIDs.

One problem is that if someone comes by later and "must" use an
email address as the userid, the @ would mess them up.  So the
default divider should be something URL-safe but not likely to be
part of a userid. I realize that it might be impossible to match
this criterion.


I know this sounds a bit like "back to the future', but how about we 
make a user_id passed via the API a structured binary field, 
containing a concatenation of domain_id and (the actual) user_id, but 
rather than have a separator, encode the start positions in the first 
few digits, e.g. something like:

This might be the most insane idea I have heard all day.  I love it.



Digit #   Meaning
0-1       Start position of domain_id (e.g. this will usually be 4)
2-3       Start position of user_id
4-N       domain_id
M-end     user_id


I suspect it is more of a brainstorming attempt than an actual 
proposal.  It can't be binary for many reasons, and string parsing gets 
wonky, especially if you assume UTF-8 is in there (how many bytes per 
character?)


The interesting idea is appending the domain id instead of prepending 
it.  It may be an irrelevant change, but worth mulling.


An interesting approach would be to do domain-prepended user ids using 
"/", so that user/domain is the ID; the URL would then be automagically 
segmented.  If they leave off the domain, then the userid by itself 
would still be valid.





We would run a migration that would convert all existing mappings. 
 Further, we would ensure (with padding if necessary) that this "new" 
user_id is ALWAYS longer than 64 chars - hence we could easily detect 
which type of ID we had.
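As a rough illustration of the length-prefixed layout sketched above, here is a string variant (sidestepping the binary/UTF-8 concern raised below; the function names and two-digit offsets are invented for illustration, not a proposed Keystone API):

```python
# Rough string variant of the length-prefixed composite user id.
def pack_user_id(domain_id, user_id):
    # Digits 0-1: start of domain_id (always 4 here); digits 2-3:
    # start of user_id. Assumes the offsets fit in two digits, i.e.
    # domain_id is shorter than 96 characters.
    user_start = 4 + len(domain_id)
    assert user_start < 100, "domain_id too long for two-digit offset"
    return f"04{user_start:02d}{domain_id}{user_id}"

def unpack_user_id(packed):
    domain_start = int(packed[0:2])
    user_start = int(packed[2:4])
    return packed[domain_start:user_start], packed[user_start:]

packed = pack_user_id("default", "ayoung")
assert packed == "0411defaultayoung"
assert unpack_user_id(packed) == ("default", "ayoung")
```

Because Python strings index by code point, this also round-trips non-ASCII ids and userids containing "@" without a reserved divider character.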


For usernames, sure... but I don't know why anyone would care to use 
email addresses as IDs.



Actually, there might be other reasons to forbid @ signs from
IDs, as they look like phishing attempts in URLs.


Phishing attempts?? They need to be encoded anyway...




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Adam Young

On 12/16/2013 02:57 PM, Gabriel Hurley wrote:

I've run into a use case that doesn't currently seem to have a great solution:


Let's say my users want to use a "top-of-stack" OpenStack project such as Heat, 
Trove, etc. that I don't currently support in my deployment. There's absolutely no reason 
these services can't live happily in a VM talking to Nova, etc. via the normal APIs. 
However, in order to have a good experience (Horizon integration, seamless CLI 
integration) the service needs to be in the Service Catalog. One user could have their 
service added to the catalog by an admin, but then everyone in the cloud would be using 
their VM. And if you have multiple users all doing the same thing in their own projects, 
you've got collisions!


So, I submit to you all that there is value in having a way to scope Service 
Catalog entries to specific projects, and to allow users with appropriate 
permissions on their project to add/remove those project-level service catalog 
entries.

This could be accomplished in a number of ways:

   * Adding a new field to the model to store a Project ID.
   * Adding it in a standardized manner to "service metadata" as with 
https://blueprints.launchpad.net/keystone/+spec/service-metadata
   * Adding it as an "additional requirement" as proposed by 
https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-services
   * Use the existing Region field to track project scope as a hack.
   * Something else...

I see this as analogous to Nova's concept of per-project flavors, or Glance's 
private/public/shared image capabilities. Allowing explicit "sharing" would 
even be an interesting option for service endpoints. It all depends how far we would want 
to go with it.
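As a straw man for the first option (a Project ID field on the endpoint model), catalog construction could look something like this; the names are illustrative, not Keystone code:

```python
# Hypothetical sketch of building a per-project service catalog when an
# endpoint may carry an optional project_id. None means a global endpoint.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Endpoint:
    service_type: str
    url: str
    project_id: Optional[str] = None   # None -> visible to every project

def catalog_for(project_id, endpoints):
    # Project-scoped entries shadow a global entry for the same service type.
    scoped = {e.service_type: e for e in endpoints if e.project_id == project_id}
    catalog = [e for e in endpoints
               if e.project_id is None and e.service_type not in scoped]
    return catalog + list(scoped.values())

endpoints = [
    Endpoint("compute", "https://nova.example.com/v2"),
    Endpoint("orchestration", "https://10.0.0.5:8004/v1", project_id="proj-a"),
]
assert [e.service_type for e in catalog_for("proj-a", endpoints)] == [
    "compute", "orchestration"]
assert [e.service_type for e in catalog_for("proj-b", endpoints)] == ["compute"]
```

The shadowing rule (a project-scoped Heat endpoint hiding a global one) is one of the design choices that would need debating.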

Feel free to offer feedback or other suggestions.

Thanks!

  - Gabriel



See the endpoint filtering blueprint from the Havana release as a 
starting point.  I think the difference between that and what you have 
here is that these endpoints should only show up in a subset of the 
service catalogs returned?


https://github.com/openstack/keystone/commit/5dc50bbf0fb94506a06ae325d46bcf3ac1c4ad0a



Re: [openstack-dev] [Nova][libvirt]when deleting instance which is in migrating state, instance files can be stay in destination node forever

2013-12-16 Thread haruka tanizawa
Hi!

Actually, I have already filed a blueprint for cancelling LiveMigration using taskflow
[0].
But my approach to the blueprint was not so good at that time.
So I rewrote the blueprint, also taking russelb's points into account.
I want to repush it and would like to get it approved.

If you have any suggestions, ideas etc...
I appreciate it :)

Sincerely, Haruka Tanizawa


[0] https://blueprints.launchpad.net/nova/+spec/feature-of-cancel


2013/12/17 Yaguang Tang 

> could we use Taskflow to
> manage task state and resources for this kind of task in Nova? Cinder has
> been a pilot in using Taskflow for volume backup tasks. Is anyone interested in
> this suggestion, or has anyone done some research to improve the live migration
> workflow?
>
>
> 2013/12/17 Vladik Romanovsky 
>
>> I would block it in the API or have the API cancelling the migration
>> first.
>> I don't see a reason why to start an operation that is meant to fail,
>> which also has a complex chain of event, following it failure.
>>
>> Regardless of the above, I think that the suggested exception handling is
>> needed in any case.
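A minimal sketch of that API-level guard; TASK_MIGRATING and HTTPConflict stand in for nova's real task states and HTTP error machinery, so this is illustrative, not nova code:

```python
# Sketch of rejecting delete while a live migration is in flight.

TASK_MIGRATING = "migrating"

class HTTPConflict(Exception):
    """Stand-in for the 409 response the API would return."""

def delete_server(instance):
    if instance.get("task_state") == TASK_MIGRATING:
        # Alternative discussed above: cancel the migration first, then delete.
        raise HTTPConflict("Cannot delete while instance is live-migrating")
    return "deleted"

assert delete_server({"task_state": None}) == "deleted"
try:
    delete_server({"task_state": TASK_MIGRATING})
except HTTPConflict:
    pass
else:
    raise AssertionError("expected HTTPConflict")
```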
>>
>>
>> Vladik
>>
>> - Original Message -
>> > From: "Loganathan Parthipan" 
>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> > Sent: Monday, 16 December, 2013 8:25:09 AM
>> > Subject: Re: [openstack-dev] [Nova][libvirt]when deleting instance
>> which is in migrating state, instance files can be
>> > stay in destination node forever
>> >
>> >
>> >
>> > Isn’t just handling the exception instance_not_found enough? By this
>> time
>> > source would’ve been cleaned up. Destination VM resources will get
>> cleaned
>> > up by the periodic task since the VM is not associated with this host.
>> Am I
>> > missing something here?
>> >
>> >
>> >
>> >
>> >
>> >
>> > From: 王宏 [mailto:w.wangho...@gmail.com]
>> > Sent: 16 December 2013 11:32
>> > To: openstack-dev@lists.openstack.org
>> > Subject: [openstack-dev] [Nova][libvirt]when deleting instance which is
>> in
>> > migrating state, instance files can be stay in destination node forever
>> >
>> >
>> > Hi all.
>> >
>> > When I try to fix a bug: https://bugs.launchpad.net/nova/+bug/1242961 ,
>> > I run into trouble.
>> >
>> > To reproduce the bug is very easy. Live migrate a vm in block_migration mode,
>> > and then delete the vm immediately.
>> >
>> > The reason for this bug is as follows:
>> > 1. Because live migration takes more time, the vm will be deleted successfully
>> > before the live migration completes. And then, we will get an exception while
>> > live migrating.
>> > 2. After the live migration fails, we start to roll back. But in the rollback
>> > method we will get or modify the info of the vm from the db. Because the vm
>> > has been deleted already, we will get an instance_not_found exception and the
>> > rollback will fail too.
>> >
>> > I have two ways to fix the bug:
>> > i) Add a check in nova-api. When trying to delete a vm, we return an error
>> > message if the vm_state is LIVE_MIGRATING. This way is very simple, but needs
>> > careful consideration. I have found a related discussion:
>> > http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html,
>> > but it reached no conclusion.
>> > ii) Before live migration we get all the data needed by the rollback method,
>> > and add a new rollback method. The new method will clean up resources at the
>> > destination based on the above data (the resources at the source have already
>> > been cleaned up by the delete).
>> >
>> > I have no idea which one I should choose. Or, any other ideas? :)
>> >
>> > Regards,
>> > wanghong
>> >
>>
>
>
>
> --
> Tang Yaguang
>
> Canonical Ltd. | www.ubuntu.com | www.canonical.com
> Mobile:  +86 152 1094 6968
> gpg key: 0x187F664F
>
>
>
>


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-16 Thread Maru Newby

On Dec 13, 2013, at 8:06 PM, Isaku Yamahata  wrote:

> On Fri, Dec 06, 2013 at 04:30:17PM +0900,
> Maru Newby  wrote:
> 
>> 
>> On Dec 5, 2013, at 5:21 PM, Isaku Yamahata  wrote:
>> 
>>> On Wed, Dec 04, 2013 at 12:37:19PM +0900,
>>> Maru Newby  wrote:
>>> 
 In the current architecture, the Neutron service handles RPC and WSGI with 
 a single process and is prone to being overloaded such that agent 
 heartbeats can be delayed beyond the limit for the agent being declared 
 'down'.  Even if we increased the agent timeout as Yongsheg suggests, 
 there is no guarantee that we can accurately detect whether an agent is 
 'live' with the current architecture.  Given that amqp can ensure eventual 
 delivery - it is a queue - is sending a notification blind such a bad 
 idea?  In the best case the agent isn't really down and can process the 
 notification.  In the worst case, the agent really is down but will be 
 brought up eventually by a deployment's monitoring solution and process 
 the notification when it returns.  What am I missing? 
 
>>> 
>>> Do you mean overload of the neutron server? Not the neutron agent.
>>> So even though the agent sends periodic 'live' reports, the reports pile up
>>> unprocessed by the server.
>>> When the server sends a notification, it wrongly considers the agent dead,
>>> not because the agent didn't send live reports due to overload of the agent.
>>> Is this understanding correct?
>> 
>> Your interpretation is likely correct.  The demands on the service are going 
>> to be much higher by virtue of having to field RPC requests from all the 
>> agents to interact with the database on their behalf.
> 
> Is this strongly indicating thread starvation, i.e. too much unfair
> thread scheduling?
> Given that eventlet is cooperative threading, should we add sleep(0) to the
> hogging thread?

I'm afraid that's a question for a profiler: 
https://github.com/colinhowe/eventlet_profiler
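As an editorial aside, the starvation mechanism being discussed is easy to model without eventlet. In this stdlib sketch, generators stand in for green threads and a bare `yield` stands in for `eventlet.sleep(0)`: control only moves at a yield, just as eventlet only switches at blocking calls or an explicit sleep(0).

```python
# Round-robin scheduler over generators: a crude stand-in for eventlet's
# cooperative green threads. Not eventlet code, only the scheduling idea.

from collections import deque

clock = 0  # elapsed work units

def run(tasks):
    queue = deque(tasks)
    while queue:
        gen = queue.popleft()
        try:
            next(gen)
        except StopIteration:
            continue
        queue.append(gen)

def rpc_hog(cooperative, work_units=6):
    global clock
    for _ in range(work_units):
        clock += 1            # one unit of CPU-bound RPC/db work on the server
        if cooperative:
            yield             # the eventlet.sleep(0) equivalent
    yield

def heartbeat(log):
    for _ in range(3):
        log.append(clock)     # when (in work units) this agent report ran
        yield

greedy, polite = [], []
run([rpc_hog(False), heartbeat(greedy)])
clock = 0
run([rpc_hog(True), heartbeat(polite)])
assert greedy == [6, 6, 6]    # every report waited for the whole hog
assert polite == [1, 2, 3]    # reports handled promptly between work units
```

Whether the real server actually spends long unyielding stretches like `rpc_hog(False)` is exactly what a profiler would have to confirm.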


m.


Re: [openstack-dev] [qa][ceilometer] Who can be a asked to help review tempest tests?

2013-12-16 Thread David Kranz

On 12/16/2013 05:50 PM, Doug Hellmann wrote:
It might be good, to start out, if we all look at them. That way we 
can all learn a bit about tempest, too. If you add "ceilometer-core" 
as a reviewer gerrit will expand the group name.


Doug
Thanks, Doug. That makes sense.  The only reason I asked this was 
because I didn't know it was possible to ask all of you like that!


 -David



On Mon, Dec 16, 2013 at 5:33 PM, David Kranz wrote:


Ceilometer team, we are reviewing tempest tests and hope to see
more. The tempest review team is hoping to identify some
ceilometer devs who could help answer questions or provide a
review if needed for ceilometer patches. Since ceilometer is new
we are not all familiar with many of the details. Can any one on
the ceilometer team volunteer?

 -David









[openstack-dev] [Neutron] Enable to set DHCP port attributes

2013-12-16 Thread Itsuro ODA
Hi Neutron developers,

I submitted the following blue print.
https://blueprints.launchpad.net/neutron/+spec/enable-to-set-dhcp-port-attributes

It is a proposal to enable a user to control DHCP port attributes
(especially the IP address).

This is based on a real requirement from our customer.
I don't know whether there is a consensus that DHCP port attributes should
not be settable by a user. Comments are welcome.

Thanks.
-- 
Itsuro ODA 




Re: [openstack-dev] [oslo][glance] Oslo.cfg resets not really resetting the CONF

2013-12-16 Thread Mark McLoughlin
Hi

On Fri, 2013-12-13 at 14:14 +0530, Amala Basha Alungal wrote:
> Hi,
> 
> 
> 
> I stumbled into a situation today wherein I had to write a few tests that
> modify oslo.config.cfg and in turn reset the values in a tearDown. According
> to the docs, oslo.cfg reset() "*Clears the object state and
> unsets overrides and defaults.*" But that doesn't seem to be happening, as
> the subsequent tests retain these modified values and
> behave abnormally. The patch has been submitted for review
> here.
> Am I missing something obvious?

From https://bugs.launchpad.net/oslo/+bug/1261376 :

  reset() will clear any values read from the command line or config
  files and it will also remove any values set with set_default() or 
  set_override()

  However, it will not undo register_opt() - there is unregister_opt()
  for that purpose

Maybe if you pushed a version of https://review.openstack.org/60188
which uses reset() and explain how it's not working as you expected?
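The semantics quoted from the bug can be modelled with a tiny stand-in class; this is not oslo.config code, only an illustration of the documented behaviour:

```python
# Minimal model of the behaviour above: reset() clears overrides (and, in
# real oslo.config, values loaded from files/CLI) but leaves registrations
# alone; unregister_opt() is the separate call that undoes register_opt().

class MiniConf:
    def __init__(self):
        self._registered = {}     # opt name -> default value
        self._overrides = {}

    def register_opt(self, name, default=None):
        self._registered[name] = default

    def unregister_opt(self, name):
        self._registered.pop(name, None)

    def set_override(self, name, value):
        self._overrides[name] = value

    def reset(self):
        self._overrides.clear()   # registrations survive on purpose

    def __getattr__(self, name):
        if name in self.__dict__.get("_overrides", {}):
            return self._overrides[name]
        if name in self.__dict__.get("_registered", {}):
            return self._registered[name]
        raise AttributeError(name)

conf = MiniConf()
conf.register_opt("workers", default=1)
conf.set_override("workers", 8)
assert conf.workers == 8
conf.reset()
assert conf.workers == 1      # override gone; registration and default kept
conf.unregister_opt("workers")
```

In a test's tearDown, then, reset() alone restores defaults, while unregister_opt() is needed only if the test itself registered the option.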

Thanks,
Mark.




Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-16 Thread Mark McLoughlin
Hi Thierry,

On Fri, 2013-12-13 at 15:53 +0100, Thierry Carrez wrote:
> Hi everyone,
> 
> TL;DR:
> Incubation is getting harder, why not ask efforts to apply for a new
> program first to get the visibility they need to grow.
> 
> Long version:
> 
> Last cycle we introduced the concept of "Programs" to replace the
> concept of "Official projects" which was no longer working that well for
> us. This was recognizing the work of existing teams, organized around a
> common mission, as an integral part of "delivering OpenStack".
> Contributors to programs become ATCs, so they get to vote in Technical
> Committee (TC) elections. In return, those teams place themselves under
> the authority of the TC.
> 
> This created an interesting corner case. Projects applying for
> incubation would actually request two concurrent things: be considered a
> new "Program", and give "incubated" status to a code repository under
> that program.
> 
> Over the last months we significantly raised the bar for accepting new
> projects in incubation, learning from past integration and QA mistakes.
> The end result is that a number of promising projects applied for
> incubation but got rejected on maturity, team size, team diversity, or
> current integration level grounds.
> 
> At that point I called for some specific label, like "Emerging
> Technology" that the TC could grant to promising projects that just need
> more visibility, more collaboration, more crystallization before they
> can make good candidates to be made part of our integrated releases.
> 
> However, at the last TC meeting it became apparent we could leverage
> "Programs" to achieve the same result. Promising efforts would first get
> their mission, scope and existing results blessed and recognized as
> something we'd really like to see in OpenStack one day. Then when they
> are ready, they could have one of their deliveries apply for incubation
> if that makes sense.
> 
> The consequences would be that the effort would place itself under the
> authority of the TC. Their contributors would be ATCs and would vote in
> TC elections, even if their deliveries never make it to incubation. They
> would get (some) space at Design Summits. So it's not "free", we still
> need to be pretty conservative about accepting them, but it's probably
> manageable.
> 
> I'm still weighing the consequences, but I think it's globally nicer
> than introducing another status. As long as the TC feels free to revoke
> Programs that do not deliver the expected results (or that no longer
> make sense in the new world order) I think this approach would be fine.
> 
> Comments, thoughts ?

Thanks for writing this up; a few thoughts ...


I'm not totally convinced we need such formality around the TC
expressing its support for an early-stage program/project/effort/team.

How about if we had an RFC process (hmm, but not in the IETF sense)
whereby an individual or team can submit a document expressing a
position and ask the TC to give its feedback? We would record that
feedback in the governance repo, and it would be a short piece of prose
(perhaps even recording a diversity of views amongst the TC members)
rather than a yes/no status vote.

In the case of a fledgling project, they'd write up something like a
first draft of an incubation application and we'd give our feedback,
encouragement, whatever.


Setting a very low bar for the officialness of becoming a Program seems
wrong to me - I wouldn't like to see Programs being added and then later
removed with any sort of regularity. Part of what people are looking for
is an indication of what's coming down the track and the endorsement
implicit in becoming a Program - before a long-term viable team has been
established - seems too strong for me.


Even though this doesn't grant ATC status to the people working on those
projects, I'm struggling to see that as a burning issue for anyone -
honestly, if you're working on an early-stage, keen-to-be-incubated
project then I'd be surprised if you didn't find some small way to
contribute to one of our many ATC-granting projects.


One thing I'm noticing that's missing from these new docs:

  
http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements
  
http://git.openstack.org/cgit/openstack/governance/tree/reference/new-programs-requirements

is any caution around increasing the scope of OpenStack. I think we are
cautious about this, but we haven't mentioned it beyond e.g.

  ** Project must have a clear and defined scope
  ** Project should not inadvertently duplicate functionality present in other
 OpenStack projects. If they do, they should have a clear plan and timeframe
 to prevent long-term scope duplication.
  ** Project should leverage existing functionality in other OpenStack projects
 as much as possible

How would something like:

  ** Project must have a clear and defined scope which, in turn, represents
 a measured and obvious progression f

Re: [openstack-dev] a time-based resource management system

2013-12-16 Thread devdatta kulkarni
Hi Alan,

Looks like an interesting project.

Some questions/comments:

1) For resources, are you targeting only VMs, or is the scope going to include 
other resources
   as well (swift, load balancers, etc.)?

2) Is the scope of reservation limited to time/duration or do you envision
   the scope to also include other kinds of contextual information (e.g. 
reserved capacity
   threshold), if any?

3) How are you currently specifying resource reservation policies?
   Are you using any specific policy specification framework for this purpose?

4) The details about mechanisms for session tracking and access revocation
   would be interesting to understand. The wiki page mentions there is a 
prototype 
   implementation, but I did not find a link. Is there anything that you can 
share?

5) In the implementation section, you mention that you are using roles and user 
access-list
   maintained by nova to control access. I was wondering if you considered 
Keystone
   to enforce time-based authorization policies of Cafe.

Good luck.

Thanks,
- Devdatta
 

-Original Message-
From: "Alan Tan" 
Sent: Monday, December 16, 2013 4:41pm
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] a time-based resource management system

Hi everyone,

 

My name is Alan and I am from the Cyber Security Lab in University of
Waikato. 

 

We have recently started deploying and using Openstack in our experimental
private cloud testbed. The cloud testbed is mainly used for our
research and teaching purposes. However, we notice that current Openstack
lacks the ability to control and manage users' access to resources in a
time-based manner. 

 

I.e. the current model for private clouds requires either that the user release
their resources (VMs) voluntarily or that the administrators manually
remove the resources (VMs).

 

This makes capacity management a laborious effort in private clouds that
have a large user base. Hence, we have come up with the idea of an automatic
time-based resource management system that manages user access to resources
in a time slot booking style. We have detailed our plans and design in the
following wiki page. We would love to hear feedback from the community and
hopefully gather some interest in our project.

 

  https://wiki.openstack.org/wiki/Cafe

 

We look forward to hearing from you. We can be contacted via email. Our
addresses are listed on the wiki page. 

 

Thanks and have a good day.

 

Cheers,

Alan

 






Re: [openstack-dev] Re-initializing or dynamically configuring cinder driver

2013-12-16 Thread Mark McLoughlin
Hi,

On Sat, 2013-12-14 at 10:23 +0530, iKhan wrote:
> Hi All,
> 
> At present the cinder driver can only be configured by adding entries in the conf
> file. Once these driver-related entries are modified or added in the conf file,
> we need to restart the cinder-volume service to validate the conf entries and
> create a child process that runs in the background.
> 
> I am thinking of a way to re-initialize or dynamically configure the cinder
> driver, so that I can accept the configuration from the user on the fly and perform
> operations. I think the solution lies somewhere around "oslo.config.cfg", but I
> am still unclear about how re-initializing can be achieved.
> 
> Let me know if anyone here is aware of an approach to re-initialize or
> dynamically configure a driver.

Some work on this was done in Oslo during Havana, see:

  https://blueprints.launchpad.net/oslo/+spec/service-restart
  https://blueprints.launchpad.net/oslo/+spec/cfg-reload-config-files

Thanks,
Mark.
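The reload approach in those blueprints can be sketched with stdlib pieces: keep config access behind a holder that can re-read its file, and trigger the re-read (typically from a SIGHUP handler) instead of restarting the service. This is a hedged illustration, not the oslo implementation:

```python
# Sketch of on-the-fly reconfiguration: a small holder re-reads its config
# file on demand. In a real service the trigger would usually be
# signal.signal(signal.SIGHUP, conf.reload); names here are illustrative.

import configparser
import os
import tempfile

class ReloadableConf:
    def __init__(self, path):
        self.path = path
        self.values = {}
        self.reload()

    def reload(self, *_signal_args):
        # Extra args let this double as a signal handler (signum, frame).
        parser = configparser.ConfigParser()
        parser.read(self.path)
        self.values = dict(parser["DEFAULT"])

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\nvolume_backend = lvm\n")
conf = ReloadableConf(f.name)
assert conf.values["volume_backend"] == "lvm"

with open(f.name, "w") as f2:                  # operator edits the file...
    f2.write("[DEFAULT]\nvolume_backend = ceph\n")
conf.reload()                                  # ...and signals the service
assert conf.values["volume_backend"] == "ceph"
os.unlink(f.name)
```

The hard part in cinder would not be the re-read itself but re-validating the new values and safely re-initializing the running driver, which is what the blueprints above address.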




Re: [openstack-dev] [Openstack][Heat] Creating an Openstack project using Heat

2013-12-16 Thread Steve Baker
On 12/17/2013 09:57 AM, Sayaji Patil wrote:
> Hi,
> I have installed Openstack with heat using packstack. One thing I
> noticed
> is that the "Orchestration Heat" option is only available inside a
> project view.
> Is this by design ? 
>
Yes, heat stacks are scoped to a project/tenant.
> My use case is to create a project with images, networks, routers and
> firewall rules
> in a single workflow. I looked at the documentation and at this point
> there is no
> resource available to create a project or upload an image.
>
>
It wouldn't be hard to write a resource which creates a tenant/project,
however there will be more changes required before the other resources
in your stack can be created in the context of your new project. For now
you need to create your project and user outside of heat.

As for image upload, a glance resource could be written which registers
an image from a URL. Feel free to file a blueprint for that describing
your use case.



[openstack-dev] Global Load Balancing

2013-12-16 Thread Tom Creighton
All:

I am trying to get a clear understanding of where or if Global Load Balancing 
as a Service fits within the OpenStack ecosystem.  I have included a list of 
basic use cases that I think a Global Load Balancing service should provide:

• Route traffic (HTTP/FTP/Database/etc...) across multiple data centers 
without having a single point of failure (i.e. LBaaS Virtual IP address)
• Monitor health of defined endpoints (load balancers or servers) with 
protocol-specific health checks
• Route traffic to another data center in the event of a health check 
failure
• Route traffic based on a requestor’s geographical location (GeoIP)
• Route traffic based on latency between requestor and defined endpoints
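A toy selection routine combining the health-check and latency use cases above; all names and numbers are illustrative, not a proposed API:

```python
# Drop unhealthy datacenters (the failover use case), then pick the
# lowest-latency survivor (the latency-routing use case).

def pick_endpoint(endpoints):
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoint; stop answering for this name")
    return min(healthy, key=lambda e: e["latency_ms"])

endpoints = [
    {"name": "dfw", "healthy": True,  "latency_ms": 40},
    {"name": "ord", "healthy": False, "latency_ms": 10},  # failed health check
    {"name": "lon", "healthy": True,  "latency_ms": 95},
]
assert pick_endpoint(endpoints)["name"] == "dfw"
```

A GeoIP policy would slot in the same way, with the requestor's location replacing measured latency as the sort key.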

Please let me know if you have any opinions/recommendations on this matter or 
are interested in solving this problem.

Kind regards,

Tom Creighton



Re: [openstack-dev] [qa][ceilometer] Who can be a asked to help review tempest tests?

2013-12-16 Thread Doug Hellmann
It might be good, to start out, if we all look at them. That way we can all
learn a bit about tempest, too. If you add "ceilometer-core" as a reviewer
gerrit will expand the group name.

Doug


On Mon, Dec 16, 2013 at 5:33 PM, David Kranz  wrote:

> Ceilometer team, we are reviewing tempest tests and hope to see more. The
> tempest review team is hoping to identify some ceilometer devs who could
> help answer questions or provide a review if needed for ceilometer patches.
> Since ceilometer is new we are not all familiar with many of the details.
> Can any one on the ceilometer team volunteer?
>
>  -David
>


[openstack-dev] a time-based resource management system

2013-12-16 Thread Alan Tan
Hi everyone,

 

My name is Alan and I am from the Cyber Security Lab in University of
Waikato. 

 

We have recently started deploying and using Openstack in our experimental
private cloud testbed. The cloud testbed is mainly used for our
research and teaching purposes. However, we notice that current Openstack
lacks the ability to control and manage users' access to resources in a
time-based manner. 

 

I.e. the current model for private clouds requires either that the user release
their resources (VMs) voluntarily or that the administrators manually
remove the resources (VMs).

 

This makes capacity management a laborious effort in private clouds that
have a large user base. Hence, we have come up with the idea of an automatic
time-based resource management system that manages user access to resources
in a time slot booking style. We have detailed our plans and design in the
following wiki page. We would love to hear feedback from the community and
hopefully gather some interest in our project.

 

  https://wiki.openstack.org/wiki/Cafe
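To make the time-slot booking style concrete, here is a toy reservation check using half-open intervals; it is purely illustrative and not the Cafe design:

```python
# A request is granted only if it does not overlap an existing reservation
# for the same resource. Intervals are half-open [start, end), so
# back-to-back slots do not clash.

def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

class SlotBook:
    def __init__(self):
        self.reservations = {}   # resource -> list of (start, end, user)

    def reserve(self, resource, start, end, user):
        taken = self.reservations.setdefault(resource, [])
        if any(overlaps(start, end, s, e) for s, e, _ in taken):
            return False         # slot clash: resource stays with its holder
        taken.append((start, end, user))
        return True

book = SlotBook()
assert book.reserve("vm-1", 9, 12, "alice")
assert not book.reserve("vm-1", 11, 13, "bob")   # overlaps alice's slot
assert book.reserve("vm-1", 12, 14, "bob")       # back-to-back is fine
```

The interesting systems work (revoking access when a slot expires, tracking live sessions) sits on top of a check like this.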

 

We look forward to hearing from you. We can be contacted via email. Our
addresses are listed on the wiki page. 

 

Thanks and have a good day.

 

Cheers,

Alan

 



[openstack-dev] [qa][ceilometer] Who can be a asked to help review tempest tests?

2013-12-16 Thread David Kranz
Ceilometer team, we are reviewing tempest tests and hope to see more. 
The tempest review team is hoping to identify some ceilometer devs who 
could help answer questions or provide a review if needed for ceilometer 
patches. Since ceilometer is new we are not all familiar with many of 
the details. Can any one on the ceilometer team volunteer?


 -David



[openstack-dev] [savanna][qa] How will tempest tests run?

2013-12-16 Thread David Kranz
So it's great to see a submission of savanna tests for tempest. We would 
like to see these tests run before reviewing them. Is the intent that 
savanna will be enabled by default in devstack? If not, then I guess 
there will need to be separate savanna jobs. I see that right now there 
are savanna-enabled devstack jobs on the tempest experimental queue. In 
the long run, what is the intent for how these jobs should run? This is 
similar to the issue with ironic. It doesn't seem very scalable to set 
up separate complete tempest jobs for every project that is not turned 
on by default in devstack. Thoughts?


 -David



Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Jay Pipes

On 12/16/2013 04:17 PM, Georgy Okrokvertskhov wrote:

By the way, there is an initiative to create a generic metadata
repository based on the Glance project. As service endpoints are just URLs,
they could also be stored in this Glance metadata repository and have all
the features related to visibility and access control provided by this
repository.


Well, hold on there :) The proposed enhancements to Glance for a generic 
application metadata repository are a bit different from a service 
catalog, for one main reason: the service catalog is returned by 
Keystone in the Keystone token, and the service catalog will be [1] 
scoped to the owner of the token that was authenticated by Keystone.


I think because of the current relationship between the service catalog 
actually being within the construct of a returned Keystone token, 
Keystone, at least for the foreseeable future, is the appropriate place 
to do this kind of project-scoped service catalog.


I'm actually pretty familiar with the code in Keystone, and I was 
planning on implementing the proper region resource in the 3.2 API [2]. 
I would not mind doing the coding on Gabe's proposed project-scoped 
catalog once [1] has been implemented (API spec: [3]).


Best,
-jay

[1] https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens
[2] https://review.openstack.org/#/c/54215/
[3] https://review.openstack.org/#/c/61869/


On Mon, Dec 16, 2013 at 12:21 PM, Tim Bell <tim.b...@cern.ch> wrote:


+1

There is also the use case where a new service is being introduced
for everyone eventually but you wish to start with a few friends. In
the event of problems, the effort to tidy up is much less.
Documentation can be updated with the production environment.

Tim

 > -Original Message-
 > From: Gabriel Hurley [mailto:gabriel.hur...@nebula.com]
 > Sent: 16 December 2013 20:58
 > To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
 > Subject: [openstack-dev] Project-Scoped Service Catalog Entries
 >
 > I've run into a use case that doesn't currently seem to have a
great solution:
 >
 >
 > Let's say my users want to use a "top-of-stack" OpenStack project
such as Heat, Trove, etc. that I don't currently support in my
 > deployment. There's absolutely no reason these services can't
live happily in a VM talking to Nova, etc. via the normal APIs.
However, in
 > order to have a good experience (Horizon integration, seamless
CLI integration) the service needs to be in the Service Catalog. One
user
 > could have their service added to the catalog by an admin, but
then everyone in the cloud would be using their VM. And if you have
 > multiple users all doing the same thing in their own projects,
you've got collisions!
 >
 >
 > So, I submit to you all that there is value in having a way to
scope Service Catalog entries to specific projects, and to allow
users with
 > appropriate permissions on their project to add/remove those
project-level service catalog entries.
 >
 > This could be accomplished in a number of ways:
 >
 >   * Adding a new field to the model to store a Project ID.
 >   * Adding it in a standardized manner to "service metadata" as
with https://blueprints.launchpad.net/keystone/+spec/service-metadata
 >   * Adding it as an "additional requirement" as proposed by
https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-
 > services
 >   * Use the existing Region field to track project scope as a hack.
 >   * Something else...
 >
 > I see this as analogous to Nova's concept of per-project flavors,
or Glance's private/public/shared image capabilities. Allowing explicit
 > "sharing" would even be an interesting option for service
endpoints. It all depends how far we would want to go with it.
 >
 > Feel free to offer feedback or other suggestions.
 >
 > Thanks!
 >
 >  - Gabriel
 >
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org

 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com 
Tel. +1 650 963 9828
Mob. +1 650 996 3284


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mai

Re: [openstack-dev] [neutron][policy] Policy-Rules discussions based on Dec.12 network policy meeting

2013-12-16 Thread Prasad Vellanki
Hi
Please see inline 


On Sun, Dec 15, 2013 at 8:49 AM, Stephen Wong  wrote:

> Hi,
>
> During Thursday's  group-policy meeting[1], there are several
> policy-rules related issues which we agreed should be posted on the
> mailing list to gather community comments / consensus. They are:
>
> (1) Conflict resolution between policy-rules
> --- a priority field was added to the policy-rules attributes
> list[2]. Is this enough to resolve conflict across policy-rules (or
> even across policies)? Please state cases where a cross policy-rules
> conflict can occur.
> --- conflict resolution was a major discussion point during
> Thursday's meeting - and there was even suggestion on setting priority
> on endpoint groups; but I would like to have this email thread focused
> on conflict resolution across policy-rules in a single policy first.
>
> (2) Default policy-rule actions
> --- there seems to be consensus from the community that we need to
> establish some basic set of policy-rule actions upon which all
> plugins/drivers would have to support
> --- just to get the discussion going, I am proposing:
>
>
Or should this instead be a query to the plugin for its supported actions,
so that the user knows what functionality the plugin can support? Hence
there would be no default supported list.

> a.) action_type: 'security'  action: 'allow' | 'drop'
> b.) action_type: 'qos'action: {'qos_class': {'critical' |
> 'low-priority' | 'high-priority' |
>
>'low-immediate' | 'high-immediate' |
>
>'expedite-forwarding'}
>  (a subset of DSCP values - hopefully in language that can
> be well understood by those performing application deployments)
> c.) action_type: 'redirect'   action: {UUID, [UUID]...}
>  (a list of Neutron objects to redirect to, and the list
> should contain at least one element)
>
>
I am not sure making the UUIDs a list of neutron objects or endpoints will
work well. It seems that it should be at a higher level, such as a list of
services that form a chain. Let's say one forms a chain of services:
firewall, IPS, LB. It would be tough to expect the user to derive the
neutron ports and create a chain of them. It could be a VM UUID.
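To make the proposed action types (a)-(c) above concrete, here is a small
validation sketch. This is illustrative only: the field names follow the
draft document, but the helper function itself is hypothetical and not part
of any agreed API.

```python
# Sketch of the proposed policy-rule action types; the exact schema is
# still under discussion, so treat everything here as illustrative.

QOS_CLASSES = {'critical', 'low-priority', 'high-priority',
               'low-immediate', 'high-immediate', 'expedite-forwarding'}

def validate_action(action_type, action):
    """Return True if (action_type, action) matches the proposed rules."""
    if action_type == 'security':
        return action in ('allow', 'drop')
    if action_type == 'qos':
        return action.get('qos_class') in QOS_CLASSES
    if action_type == 'redirect':
        # 'redirect' takes a list of Neutron object UUIDs with at least
        # one element, per the proposal above.
        return isinstance(action, list) and len(action) >= 1
    return False

print(validate_action('security', 'drop'))                # True
print(validate_action('qos', {'qos_class': 'critical'}))  # True
print(validate_action('redirect', []))                    # False
```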

> Please discuss. In the document, there is also 'rate-limit' and
> 'policing' for 'qos' type, but those can be optional instead of
> required for now
>
> (3) Prasad asked for clarification on 'redirect' action, I propose to
> add the following text to document regarding 'redirect' action:
>
> "'redirect' action is used to mirror traffic to other destinations
> - destination can be another endpoint group, a service chain, a port,
> or a network. Note that 'redirect' action type can be used with other
> forwarding related action type such as 'security'; therefore, it is
> entirely possible that one can specify {'security':'deny'} and still
> do {'redirect':{'uuid-1', 'uuid-2'...}. Note that the destination
> specified on the list CANNOT be the endpoint-group who provides this
> policy. Also, in case of destination being another endpoint-group, the
> policy of this new destination endpoint-group will still be applied"
>
>
As I said above, one needs clarity on what these UUIDs mean. Also, do we
need a call to manage the ordered list, i.e. adding, deleting, and listing
the elements in it?
One other issue that comes up is whether the classifier holds up along the
chain. The classifier that goes into the chain might not be the same on the
reverse path.
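The constraints quoted above for 'redirect' (at least one destination, and
never the endpoint-group that provides the policy) could be checked along
these lines. The helper name and structure are assumptions for illustration,
not an agreed interface:

```python
def validate_redirect(providing_group_uuid, destinations):
    """Check a 'redirect' destination list against the proposed
    constraints: at least one destination, and never the endpoint-group
    that provides this policy.
    """
    if not destinations:
        raise ValueError("'redirect' needs at least one destination UUID")
    if providing_group_uuid in destinations:
        raise ValueError("cannot redirect to the endpoint-group "
                         "that provides this policy")
    # Preserve order: the list may describe a service chain.
    return list(destinations)

print(validate_redirect('epg-1', ['svc-fw', 'svc-ips', 'svc-lb']))
```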

> Please discuss.
>
> (4)  We didn't get a chance to discuss this during last Thursday's
> meeting, but there has been discussion on the document regarding
> adding IP address fields in the classifier of a policy-rule. Email may
> be a better forum to state the use cases. Please discuss here.
>
> I will gather all the feedback by Wednesday and update the
> document before this coming Thursday's meeting.
>
>
We do need to support the various use cases mentioned in the document where
the classifier is required to match on various fields in the packet header,
such as IP address, MAC address, ports, etc. The use cases include an L2
firewall, and monitoring devices where the traffic being sent to them is
not dependent on where it comes from and thus cannot be derived from src
and dst groups alone.


> Thanks,
> - Stephen
>
> [1]
> http://eavesdrop.openstack.org/meetings/networking_policy/2013/networking_policy.2013-12-12-16.01.log.html
> [2]
> https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#heading=h.x1h06xqhlo1n
>


[openstack-dev] [cinder][qa] release notes for cinder v1 to v2?

2013-12-16 Thread David Kranz

Sorry for lost subject in last message.

Is there a document that describes the api changes from v1 to v2, 
similar to the one documenting nova v2 to v3?


 -David



[openstack-dev] [cinder][qa]

2013-12-16 Thread David Kranz
Is there a document that describes the api changes from v1 to v2, 
similar to the one documenting nova v2 to v3?


 -David



Re: [openstack-dev] [Openstack][Heat] Creating an Openstack project using Heat

2013-12-16 Thread Zane Bitter

On 16/12/13 15:57, Sayaji Patil wrote:

Hi,
 I have installed Openstack with heat using packstack. One thing I
noticed
is that the "Orchestration Heat" option is only available inside a
project view.
Is this by design ?

My use case is to create a project with images, networks, routers and
firewall rules
in a single workflow. I looked at the documentation and at this point
there is no
resource available to create a project or upload an image.

Regards,
Sayaji


This is the development mailing list; the appropriate forum for these 
kinds of questions would be the general OpenStack mailing list:


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

cheers,
Zane.



Re: [openstack-dev] Announcing Fuel

2013-12-16 Thread Robert Collins
On 17 December 2013 09:59, Mike Scherbakov  wrote:
> Thanks for support,
> as a starting point we have chosen Ironic - we really want to see it as a
> replacement of Fuel's existing provisioning layer, with an intention of
> participation and delivery of many features which we already know from the
> real-world installations with Fuel.
>
> We are trying to be more test-driven, so starting our contributions as proof
> of concept for functional testing framework for Ironic:
> https://review.openstack.org/#/c/62410/2, see the description in README
> file: https://review.openstack.org/#/c/62410/2/irci/README.md
> Distributed envs testing, especially PXE booting is the area of our
> interest. Next, we plan to collaboratively work on torrent-based
> provisioning driver.

Cool.

Rather than torrents you may prefer a multicast driver; it should give
about 50% network utilisation and linear IO for the target disk - much
better.

Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Georgy Okrokvertskhov
By the way, there is an initiative to create a generic metadata repository
based on the Glance project. As service endpoints are just URLs, they can
also be stored in this Glance metadata repository and get all the features
related to visibility and access control provided by this repository.

Thanks
Georgy


On Mon, Dec 16, 2013 at 12:21 PM, Tim Bell  wrote:

>
> +1
>
> There is also the use case where a new service is being introduced for
> everyone eventually but you wish to start with a few friends. In the event
> of problems, the effort to tidy up is much less. Documentation can be
> updated with the production environment.
>
> Tim
>
> > -Original Message-
> > From: Gabriel Hurley [mailto:gabriel.hur...@nebula.com]
> > Sent: 16 December 2013 20:58
> > To: OpenStack Development Mailing List (
> openstack-dev@lists.openstack.org)
> > Subject: [openstack-dev] Project-Scoped Service Catalog Entries
> >
> > I've run into a use case that doesn't currently seem to have a great
> solution:
> >
> >
> > Let's say my users want to use a "top-of-stack" OpenStack project such
> as Heat, Trove, etc. that I don't currently support in my
> > deployment. There's absolutely no reason these services can't live
> happily in a VM talking to Nova, etc. via the normal APIs. However, in
> > order to have a good experience (Horizon integration, seamless CLI
> integration) the service needs to be in the Service Catalog. One user
> > could have their service added to the catalog by an admin, but then
> everyone in the cloud would be using their VM. And if you have
> > multiple users all doing the same thing in their own projects, you've
> got collisions!
> >
> >
> > So, I submit to you all that there is value in having a way to scope
> Service Catalog entries to specific projects, and to allow users with
> > appropriate permissions on their project to add/remove those
> project-level service catalog entries.
> >
> > This could be accomplished in a number of ways:
> >
> >   * Adding a new field to the model to store a Project ID.
> >   * Adding it in a standardized manner to "service metadata" as with
> https://blueprints.launchpad.net/keystone/+spec/service-metadata
> >   * Adding it as an "additional requirement" as proposed by
> https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-
> > services
> >   * Use the existing Region field to track project scope as a hack.
> >   * Something else...
> >
> > I see this as analogous to Nova's concept of per-project flavors, or
> Glance's private/public/shared image capabilities. Allowing explicit
> > "sharing" would even be an interesting option for service endpoints. It
> all depends how far we would want to go with it.
> >
> > Feel free to offer feedback or other suggestions.
> >
> > Thanks!
> >
> >  - Gabriel
> >



-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Dan Smith
> eg use a 'env_' prefix for glance image attributes
> 
> We've got a couple of cases now where we want to overrides these
> same things on a per-instance basis. Kernel command line args
> is one other example. Other hardware overrides like disk/net device
> types are another possibility
> 
> Rather than invent new extensions for each, I think we should
> have a way to pass arbitrary attributes along with the boot
> API call, that a driver would handle in much the same way as
> they do for glance image properties. Basically think of it as
> a way to customize any image property per instance created.
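As a rough illustration of the 'env_' prefix scheme quoted above (the
prefix comes from the quoted proposal; the helper itself is hypothetical
and not Nova or Glance code), turning image properties into container
environment variables could look like:

```python
ENV_PREFIX = 'env_'

def extract_env(image_properties):
    """Turn glance image properties like {'env_FOO': 'bar'} into
    container environment variables {'FOO': 'bar'}; properties without
    the prefix are ignored.
    """
    return {key[len(ENV_PREFIX):]: value
            for key, value in image_properties.items()
            if key.startswith(ENV_PREFIX)}

print(extract_env({'env_DB_HOST': '10.0.0.5', 'os_type': 'linux'}))
# {'DB_HOST': '10.0.0.5'}
```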

Personally, I think having a bunch of special case magic namespaces
(even if documented) is less desirable than a proper API to do something
like this. Especially a namespace that someone else could potentially
use legitimately that would conflict.

To me, this feels a lot like what I'm worried this effort will turn
into, which is making containers support in Nova look like a bolt-on
thing with a bunch of specialness required to make it behave.

Anyone remember this bolt-on gem?

nova boot --block-device-mapping
vda=965453c9-02b5-4d5b-8ec0-3164a89bf6f4:::0 --flavor=m1.tiny
--image=6415797a-7c03-45fe-b490-f9af99d2bae0 BFV

I found that one amidst hundreds of forum threads of people confused
about what incantation of magic they were supposed to do to make it
actually boot from volume.

Just MHO.

--Dan




Re: [openstack-dev] Announcing Fuel

2013-12-16 Thread Mike Scherbakov
Thanks for support,
as a starting point we have chosen Ironic - we really want to see it as a
replacement of Fuel's existing provisioning layer, with an intention of
participation and delivery of many features which we already know from the
real-world installations with Fuel.

We are trying to be more test-driven, so starting our contributions as
proof of concept for functional testing framework for Ironic:
https://review.openstack.org/#/c/62410/2, see the description in README
file: https://review.openstack.org/#/c/62410/2/irci/README.md
Distributed envs testing, especially PXE booting is the area of our
interest. Next, we plan to collaboratively work on torrent-based
provisioning driver.

Fuel UI & Tuskar UI are also a very interesting topic. Mark - thanks for
the links you provided. Let us get a few cycles to research what you have
so far and get back to you.


On Fri, Dec 13, 2013 at 6:11 PM, Liz Blanchard  wrote:

>
> On Dec 13, 2013, at 8:04 AM, Jaromir Coufal  wrote:
>
> > On 2013/12/12 15:31, Mike Scherbakov wrote:
> >> Folks,
> >>
> >>
> >> Most of you by now have heard of Fuel, which we’ve been working on as a
> >> related OpenStack project for a period of time
> >> -see
> >> https://launchpad.net/fueland https://wiki.openstack.org/wiki/Fuel. The
> >> aim of the project is to provide a distribution agnostic and plug-in
> >> agnostic engine for preparing, configuring and ultimately deploying
> >> various “flavors” of OpenStack in production. We’ve also used Fuel in
> >> most of our customer engagements to stand up an OpenStack cloud.
> > ...
> >> We’d love to open discussion on this and hear everybody’s thoughts on
> >> this direction.
> >
> > Hey Mike,
> >
> > it sounds all great. I'll be very happy to discuss all the UX efforts
> going on in TripleO/Tuskar UI together with intentions and future steps of
> Fuel.
> >
> +1. The Fuel wizard has some great UX ideas to bring to our thoughts
> around deployment in the Tuskar UI!
>
> Great to hear these will be brought together,
> Liz
>
> > Cheers
> > -- Jarda
> >
> > --- Jaromir Coufal (jcoufal)
> > --- OpenStack User Experience
> > --- IRC: #openstack-ux (at FreeNode)
> > --- Forum: http://ask-openstackux.rhcloud.com
> > --- Wiki: https://wiki.openstack.org/wiki/UX
> >
>



-- 
Mike Scherbakov
#mihgen


[openstack-dev] [Openstack][Heat] Creating an Openstack project using Heat

2013-12-16 Thread Sayaji Patil
Hi,
I have installed Openstack with heat using packstack. One thing I
noticed
is that the "Orchestration Heat" option is only available inside a project
view.
Is this by design ?

My use case is to create a project with images, networks, routers and
firewall rules
in a single workflow. I looked at the documentation and at this point there
is no
resource available to create a project or upload an image.

Regards,
Sayaji


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-16 Thread Doug Hellmann
On Mon, Dec 16, 2013 at 9:49 AM, Thierry Carrez wrote:

> Flavio Percoco wrote:
> > What I'm arguing here is:
> >
> > 1. Programs that are not part of OpenStack's release cycle shouldn't
> > be considered official nor they should have the rights that integrated
> > projects have.
> >
> > 2. I think requesting Programs to exist at the early stages of the
> > project is not necessary. I don't even think incubated projects should
> > have programs. I do agree the project's mission and goals have to be
> > clear but the program should be officially created *after* the project
> > graduates from incubation.
> >
> > The reasoning here is that anything could happen during incubation.
> > For example, a program created for project A - which is incubated -
> > may change to cover a broader mission that will allow a newborn
> > project B to fall under its umbrella, hence my previous proposal of
> > having a incubation stage for programs as well.
>
> I think your concerns can be covered if we consider that programs
> covering incubated or "promising" projects should also somehow incubate.
> To avoid confusion I'd use a different term, let's say "incoming"
> programs for the sake of the discussion.
>
> Incoming programs would automatically graduate when one of their
> deliveries graduate to "integrated" status (for projects with such
> deliveries), or when the TC decides so (think: for "horizontal" programs
> like Documentation or Deployment).
>
> That doesn't change most of this proposal, which is that we'd encourage
> teams to ask to become an (incoming) program before they consider filing
> one of their projects for incubation.
>

It seems like the implications of the "incoming" designation are the same
as those of the "emerging" designation you suggested previously. :-)

I like the idea of some sort of acknowledgement that there is a group
working on a solution to a problem and that the solution hasn't reached
sufficient maturity to be an incubated project. I prefer the name
"emerging" over "incoming" but not strongly.

The status of a fledgeling program in this state should be re-evaluated
periodically, as we do with incubated projects, so I don't see a problem
with creating such "working groups" (maybe that's a better name?) when
there is sufficient interest and participation early on. I do like the idea
of asking them to produce *something* -- a design doc, requirements list,
some sort of detailed plan for doing whatever the program's mission would
be -- before being granted this new official designation, to show that the
people involved are prepared to spend time and effort, more than just
saying "yes, I'm interested, too".



>
> FWIW we already distinguish (on
> https://wiki.openstack.org/wiki/Programs) programs that are born out of
> an incubated project from other programs, so adding this "incoming"
> status would not change much.
>
> > My proposal is to either not requesting any program to be created for
> > incubated projects / emerging technologies or to have a program called
> > 'Emerging Technologies' were all these projects could fit in.
>
> I don't think an "Emerging Technologies" program would make sense, since
> that would just be a weird assemblage of separate teams (how would that
> program elect a PTL ?). I prefer that they act as separate teams (which
> they are) and use the "incoming Program" concept described above.
>

+1


>
> > The only
> > difference is that, IMHO, projects under this program should not have
> > all the rights that integrated projects and other programs have,
> > although the program will definitely fall under the TCs authority. For
> > example, projects under this program shouldn't be able to vote on the
> > TCs elections.
>
> So *that* would be a change from where we stand today, which is that
> incubated project contributors get ATC status and vote on TC elections.
> We can go either way, consider "incoming programs" to be "OpenStack
> programs" in the sense of the TC charter, or not.
>
> I'm not convinced there is so much value in restricting TC voting access
> (or ATC status) to "OpenStack programs". Incoming programs would all be
> placed under the authority of the TC so it's only fair that they have a
> vote. Also giving them ATC status gets them automatically invited to
> Design Summits, and getting "incoming" programs in Design Summits sounds
> like a good thing to do...
>

Right, bringing them to the summits is a big goal, isn't it?

Doug



>
> --
> Thierry Carrez (ttx)
>
>


[openstack-dev] [Ceilometer] Complex query BP implementation

2013-12-16 Thread Ildikó Váncsa
Hi guys,

The first working version of the Complex filter expressions in API queries
blueprint [1] was pushed for review [2].

We implemented a new query REST resource in order to provide rich query
functionality for samples, alarms and alarm history. The future plan (in
separate blueprints) is to extend this new functionality to support
statistics and stored queries. The new feature is documented on the
Launchpad wiki [3], with an example of how to use the new query on the API.

What is your opinion about this solution?
I would appreciate some review comments and/or feedback on the
implementation. :)
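For readers who have not followed the blueprint: the queries are filter
expression trees combining boolean and comparison operators. A toy
evaluator shows the idea; note that the operator names and JSON layout
below are assumptions for illustration, not necessarily the reviewed API:

```python
def matches(sample, expr):
    """Evaluate a nested filter expression such as
    {"and": [{"=": {"meter": "cpu_util"}}, {">": {"volume": 50}}]}
    against a sample dict. Illustrative only.
    """
    (op, arg), = expr.items()
    if op == 'and':
        return all(matches(sample, sub) for sub in arg)
    if op == 'or':
        return any(matches(sample, sub) for sub in arg)
    if op == 'not':
        return not matches(sample, arg)
    # Leaf node: a single {field: value} comparison.
    (field, value), = arg.items()
    ops = {'=': lambda a, b: a == b,
           '>': lambda a, b: a > b,
           '<': lambda a, b: a < b}
    return ops[op](sample[field], value)

sample = {'meter': 'cpu_util', 'volume': 97.5}
expr = {'and': [{'=': {'meter': 'cpu_util'}}, {'>': {'volume': 50}}]}
print(matches(sample, expr))  # True
```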

[1]  
https://blueprints.launchpad.net/ceilometer/+spec/complex-filter-expressions-in-api-queries
[2]  
https://review.openstack.org/#/q/status:open+project:openstack/ceilometer+branch:master+topic:bp/complex-filter-expressions-in-api-queries,n,z
[3]  
https://wiki.openstack.org/wiki/Ceilometer/ComplexFilterExpressionsInAPIQueries

Thanks and Best Regards,
Ildiko


Re: [openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Tim Bell

+1

There is also the use case where a new service is being introduced for everyone 
eventually but you wish to start with a few friends. In the event of problems, 
the effort to tidy up is much less. Documentation can be updated with the 
production environment.

Tim

> -Original Message-
> From: Gabriel Hurley [mailto:gabriel.hur...@nebula.com]
> Sent: 16 December 2013 20:58
> To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
> Subject: [openstack-dev] Project-Scoped Service Catalog Entries
> 
> I've run into a use case that doesn't currently seem to have a great solution:
> 
> 
> Let's say my users want to use a "top-of-stack" OpenStack project such as 
> Heat, Trove, etc. that I don't currently support in my
> deployment. There's absolutely no reason these services can't live happily in 
> a VM talking to Nova, etc. via the normal APIs. However, in
> order to have a good experience (Horizon integration, seamless CLI 
> integration) the service needs to be in the Service Catalog. One user
> could have their service added to the catalog by an admin, but then everyone 
> in the cloud would be using their VM. And if you have
> multiple users all doing the same thing in their own projects, you've got 
> collisions!
> 
> 
> So, I submit to you all that there is value in having a way to scope Service 
> Catalog entries to specific projects, and to allow users with
> appropriate permissions on their project to add/remove those project-level 
> service catalog entries.
> 
> This could be accomplished in a number of ways:
> 
>   * Adding a new field to the model to store a Project ID.
>   * Adding it in a standardized manner to "service metadata" as with 
> https://blueprints.launchpad.net/keystone/+spec/service-metadata
>   * Adding it as an "additional requirement" as proposed by 
> https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-
> services
>   * Use the existing Region field to track project scope as a hack.
>   * Something else...
> 
> I see this as analogous to Nova's concept of per-project flavors, or Glance's 
> private/public/shared image capabilities. Allowing explicit
> "sharing" would even be an interesting option for service endpoints. It all 
> depends how far we would want to go with it.
> 
> Feel free to offer feedback or other suggestions.
> 
> Thanks!
> 
>  - Gabriel
> 


[openstack-dev] Project-Scoped Service Catalog Entries

2013-12-16 Thread Gabriel Hurley
I've run into a use case that doesn't currently seem to have a great solution:


Let's say my users want to use a "top-of-stack" OpenStack project such as Heat, 
Trove, etc. that I don't currently support in my deployment. There's absolutely 
no reason these services can't live happily in a VM talking to Nova, etc. via 
the normal APIs. However, in order to have a good experience (Horizon 
integration, seamless CLI integration) the service needs to be in the Service 
Catalog. One user could have their service added to the catalog by an admin, 
but then everyone in the cloud would be using their VM. And if you have 
multiple users all doing the same thing in their own projects, you've got 
collisions!


So, I submit to you all that there is value in having a way to scope Service 
Catalog entries to specific projects, and to allow users with appropriate 
permissions on their project to add/remove those project-level service catalog 
entries.

This could be accomplished in a number of ways:

  * Adding a new field to the model to store a Project ID.
  * Adding it in a standardized manner to "service metadata" as with 
https://blueprints.launchpad.net/keystone/+spec/service-metadata
  * Adding it as an "additional requirement" as proposed by 
https://blueprints.launchpad.net/keystone/+spec/auth-mechanisms-for-services
  * Use the existing Region field to track project scope as a hack.
  * Something else...

I see this as analogous to Nova's concept of per-project flavors, or Glance's 
private/public/shared image capabilities. Allowing explicit "sharing" would 
even be an interesting option for service endpoints. It all depends how far we 
would want to go with it.
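As a rough sketch of the first option above (a Project ID field on the
catalog entry model), lookup would filter entries by scope. The dict layout
and helper here are hypothetical, not Keystone's actual schema:

```python
def visible_endpoints(catalog, project_id):
    """Return catalog entries visible to a project: global entries
    (project_id is None) plus entries scoped to that project.
    """
    return [ep for ep in catalog
            if ep.get('project_id') in (None, project_id)]

catalog = [
    {'type': 'compute', 'url': 'http://nova:8774', 'project_id': None},
    {'type': 'orchestration', 'url': 'http://10.0.0.9:8004',
     'project_id': 'proj-a'},  # Heat running in a project-owned VM
]
print([ep['type'] for ep in visible_endpoints(catalog, 'proj-b')])
# ['compute']
```

Only users of 'proj-a' would see the project-scoped orchestration endpoint;
everyone else keeps the global catalog, so there are no collisions.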

Feel free to offer feedback or other suggestions.

Thanks!

 - Gabriel



Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-16 Thread Jay Dobies



On 12/13/2013 01:53 PM, Tzu-Mainn Chen wrote:

On 2013/13/12 11:20, Tzu-Mainn Chen wrote:

These look good!  Quick question - can you explain the purpose of Node
Tags?  Are they
an additional way to filter nodes through nova-scheduler (is that even
possible?), or
are they there solely for display in the UI?

Mainn


We start easy, so that's solely for UI needs of filtering and monitoring
(grouping of nodes). It is already in Ironic, so there is no reason not
to take advantage of it.
-- Jarda


Okay, great.  Just for further clarification, are you expecting this UI 
filtering
to be present in release 0?  I don't think Ironic natively supports filtering
by node tag, so that would be further work that would have to be done.

Mainn


I might be getting ahead of things, but will the tags be free-form
(entered by the user), pre-entered in a separate settings screen and
selectable at node register/update time, or locked into a select few that
we specify?
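Since Ironic doesn't natively support filtering by node tag (as noted
above), a first cut would presumably filter client-side in the UI. A
minimal sketch, assuming a simple node structure rather than Ironic's
actual schema:

```python
def filter_by_tags(nodes, wanted):
    """Keep nodes carrying every tag in `wanted` (client-side filter)."""
    wanted = set(wanted)
    return [n for n in nodes if wanted <= set(n.get('tags', []))]

nodes = [
    {'uuid': 'n1', 'tags': ['rack-1', 'ssd']},
    {'uuid': 'n2', 'tags': ['rack-2']},
]
print([n['uuid'] for n in filter_by_tags(nodes, ['rack-1'])])  # ['n1']
```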




Re: [openstack-dev] [Heat] [Murano] [Solum] [Glance]Metadata repository initiative discussion for Glance

2013-12-16 Thread Georgy Okrokvertskhov
Hi,

Doodle shows that the most suitable time is 10AM PST on Tuesday.
Let's keep this time for the Metadata Repository\Catalog meeting in the
#openstack-glance IRC channel.

See you tomorrow!

Thanks
Georgy

On Fri, Dec 13, 2013 at 12:09 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi,
>
> It looks like I forgot to add Glance. Fixing this now. I am sorry for
> duplicating the thread.
>
> Thanks
> Georgy
>
>
>
> On Fri, Dec 13, 2013 at 12:02 PM, Georgy Okrokvertskhov <
> gokrokvertsk...@mirantis.com> wrote:
>
>> Yes. It is a Pacific Standard Time.
>>
>> Thanks
>> Georgy
>>
>>
>> On Fri, Dec 13, 2013 at 12:01 PM, Keith Bray wrote:
>>
>>>  PT as in Pacific Standard Time?
>>>
>>> -Keith
>>> On Dec 13, 2013 1:56 PM, Georgy Okrokvertskhov <
>>> gokrokvertsk...@mirantis.com> wrote:
>>>  Hi,
>>>
>>>  It is PT. I will add this info to the doodle pool.
>>>
>>>  Thanks
>>> Georgy
>>>
>>>
>>> On Fri, Dec 13, 2013 at 11:50 AM, Keith Bray 
>>> wrote:
>>>
  What timezone is the poll in?   It doesn't say on the Doodle page.

  Thanks,
 -Keith

   From: Georgy Okrokvertskhov 
 Reply-To: "OpenStack Development Mailing List (not for usage
 questions)" 
 Date: Friday, December 13, 2013 12:21 PM
 To: OpenStack Development Mailing List <
 openstack-dev@lists.openstack.org>
 Subject: [openstack-dev] [Heat] [Murano] [Solum] Metadata repository
 initiative discussion for Glance

   Hi,

 Recently a Heater proposal was announced on the openstack-dev mailing
 list. This discussion led to a decision to add unified metadata service \
 catalog capabilities into Glance.

  On the Glance weekly meeting this initiative was discussed and Glance
 team agreed to take a look onto BPs and API documents for metadata
 repository\catalog, in order to understand what can be done during Icehouse
 release and how to organize this work in general.

  There will be a separate meeting devoted to this initiative on
 Tuesday 12/17 in #openstack-glance channel. Exact time is not defined yet
 and I need time preferences from all parties. Here is a link to a doodle
 poll http://doodle.com/9f2vxrftizda9pun . Please select time slot
 which will be suitable for you.

  The agenda for this meeting is the following:
 1. Define project goals in general
 2. Discuss API for this service and find out what can be implemented
 during IceHouse release.
 3. Define organizational stuff like how this initiative should be
 developed (branch of Glance or separate project within Glance program)

  Here is an etherpad
 https://etherpad.openstack.org/p/MetadataRepository-API for initial
 API version for this service.

  All project which are interested in metadata repository are welcome
 to discuss API and service itself.

  Currently there are several possible use cases for this service:
 1. Heat template catalog
 2. HOT Software orchestration scripts\recipes storage
 3. Murano Application Catalog object storage
 4. Solum assets storage

  Thanks
 Georgy




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-16 Thread Matthias Runge
On 12/16/2013 04:22 PM, Jiri Tomasek wrote:

> 
> Thanks for pointing this out, In Horizon you can easily decide which
> dashboards to show, so the Infrastructure management Horizon instance
> can have Project and Admin dashboards disabled.
> 
> I think there has been discussed that some panels of Admin dashboard
> should be required for infrastructure management. We can solve this by
> adding those selected Admin panels also into Infrastructure dashboard.
> 
> Jirka
Oh, I would expect a new role for an infrastructure admin; that role
shouldn't necessarily see running instances or tenants etc. at all.

Matthias



Re: [openstack-dev] [Neutron] Does any plugin require hairpinning to be enabled?

2013-12-16 Thread Collins, Sean
Hi,

I have registered two blueprints, one in Nova and one in Neutron to make
it a VIF attribute that the libvirt driver in Nova will honor.

https://blueprints.launchpad.net/neutron/+spec/vif-attribute-for-hairpinning

https://blueprints.launchpad.net/nova/+spec/nova-hairpin-vif-attribute

-- 
Sean M. Collins


Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2013-12-16 Thread Jay Pipes

On 12/16/2013 11:59 AM, Russell Bryant wrote:

On 12/16/2013 11:45 AM, Matt Riedemann wrote:

1. Add a migration to change instances.uuid to non-nullable. Besides the
obvious con of having yet another migration script, this seems the most
straight-forward. The instance object class already defines the uuid
field as non-nullable, so it's constrained at the objects layer, just
not in the DB model.  Plus I don't think we'd ever have a case where
instance.uuid is null, right?  Seems like a lot of things would break
down if that happened.  With this option I can build on top of it for
the DB2 migration support to add the same FKs as the other engines.


Yeah, having instance.uuid nullable doesn't seem valuable to me, so this
seems OK.


+1

-jay




Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Steven Dake

On 12/16/2013 10:29 AM, Fox, Kevin M wrote:

Yeah, this is similar to what I am proposing. I think we just about have 
everything we need already.

The thread started out discussing a slightly different use case than the one 
below. The use case is processing events like:
User performs "backup database B" in the Trove UI, Trove sends event "backup-database" with params B to 
the VM, the VM responds sometime later with "done" "backup database B", and the Trove UI updates.

The idea is we need a unified agent to receive the messages, perform the action 
and respond back to the event.

The main issues are, as I see it:
  * The VM might be on a private neutron network only. This is desirable for 
increased security.
  * We want the agent to be minimal so as not to have to maintain much in the 
VMs. It's hard to keep all those ducks in a row.
  * There is a desire not to have the agent allow arbitrary commands to execute 
in the VM for security reasons.
If security is a concern of the unified agent, the best way to reduce 
the attack surface is to limit the number of interactions the agent can 
actually do.  Special purpose code for each operation could easily be 
implemented.
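
For illustration only, here is a minimal sketch of that "special-purpose code 
per operation" idea: the agent exposes a fixed dispatch table, so it cannot be 
used as a general-purpose backdoor. The operation names and payload shape are 
invented for this example and are not any project's actual protocol.

```python
# Whitelist of operations the agent is allowed to perform.
OPERATIONS = {}


def operation(name):
    """Decorator registering a special-purpose handler under a fixed name."""
    def register(func):
        OPERATIONS[name] = func
        return func
    return register


@operation('backup')
def backup(db_id):
    # Real code would invoke the service-specific backup logic here.
    return {'status': 'done', 'op': 'backup', 'db-id': db_id}


def handle(request):
    """Dispatch one request; anything outside the whitelist is rejected."""
    op = request.get('op')
    if op not in OPERATIONS:
        return {'status': 'rejected', 'op': op}
    return OPERATIONS[op](**request.get('args', {}))
```

Because only registered handlers are reachable, the attack surface is exactly 
the set of handlers, not arbitrary command execution.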


I know salt was mentioned as a possible solution to this problem, but it 
brings a whole host of new problems to contend with.


Having a unified agent doesn't mean we can't put special-purpose code 
for each service (eg trove) for each operation (eg backup) in said 
unified agent.  We could even do this using cloud-init using the 
part-handler logic.


We really need someone from the community to step up and drive this 
effort, as opposed to beating this thread into too much complexity, as 
mentioned previously by Clint.


Regards
-steve


Thanks,
Kevin

From: Robert Collins [robe...@robertcollins.net]
Sent: Sunday, December 15, 2013 6:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 15 December 2013 21:17, Clint Byrum  wrote:

Excerpts from Steven Dake's message of 2013-12-14 09:00:53 -0800:

On 12/13/2013 01:13 PM, Clint Byrum wrote:

Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:

Still, what about one more server process users will have to run? I see
the unified agent as a library which can be easily adopted by both existing and
new OpenStack projects. The need to configure and maintain a Salt server
process is a big burden for end users. That idea will definitely scare off
adoption of the agent. And at the same time what are the gains of having
that server process? I don't really see too many of them.


I tend to agree, I don't see a big advantage to using something like
salt, when the current extremely simplistic cfn-init + friends do the job.

What specific problem does salt solve?  I guess I missed that context in
this long thread.


Yes you missed the crux of the thread. There is a need to have agents that
are _not_ general purpose like cfn-init and friends. They specifically
need to be narrow in focus and not give the higher level service operator
backdoor access to everything via SSH-like control.

So, just spitballing, but:

We have a metadata service.

We want low-latency updates there (e.g. occ listening on long-poll).
Ignore implementation for now.

I assert that agent restrictiveness is really up to the agent. For
instance, an agent that accepts one command 'do something' with args
'something', is clearly not restricted.

So - mainly to tease requirements out:

How would salt be different to:

- heat-metadata with push notification of updates
- an ORC script that looks for a list of requests in post-configure.d
and executes them.

trove-agent:
  - 'backup':
   db-id: '52'
  - 'backup':
   db-id: '43'
  - 'create':
   db-id: '93'
   initial-schema: [.]

etc.

?


--
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template

2013-12-16 Thread Shawn Hartsock
IIRC someone who shows up at
https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Meetings is planning on
working on that again for Icehouse-3 but there's some new debate on the
best way to implement the desired effect. The goal of that change would be
to avoid streaming the disk image out of vCenter for the purpose of then
streaming the same image back into the same vCenter. That's really
inefficient.

So there's a Nova level change that could happen (that's the patch you saw)
and there's a Glance level change that could happen, and there's a
combination of both approaches that could happen.

If you want to discuss it informally with the group that's looking into the
problem I could probably make sure you end up talking to the right people
on #openstack-vmware or if you pop into the weekly team meeting on IRC you
could mention it during open discussion time.


On Mon, Dec 16, 2013 at 3:27 AM, Qing Xin Meng  wrote:

> I saw a commit for Deploying from VMware vCenter template and found it's
> abandoned.
> https://review.openstack.org/#/c/34903
>
>
> Anyone knows the plan to support the deployment from VMware vCenter
> template?
>
>
> Thanks!
>
>
>
> Best Regards
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock


[openstack-dev] [marconi] Meeting agenda for tomorrow at 1500 UTC

2013-12-16 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt on 
Tuesdays at 1500 UTC.

The next meeting is Tomorrow, Dec. 10. Everyone is welcome, but please take a 
minute to review the wiki before attending for the first time:

http://wiki.openstack.org/marconi

This week we will discuss progress made toward graduation, and should have some 
time to triage new bugs/bps.


  |""-..._
  '-._"'`|
  \  ``` ``"---... _ |
  |  /  /#\
  }--..__..-{   ###
 } _   _ {
   6   6  
{^}
   {{\  -=-  /}}
   {{{;.___.;}}}
{{{)   (}}}'
 `""'"':   :'"'"'`
 after/jgs  `@`



Proposed Agenda:

  *   Review actions from last time
  *   Review Graduation BPs/Bugs
  *   Updates on bugs
  *   Updates on blueprints
  *   SQLAlchemy storage driver strategy
  *   Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note your 
IRC name so we can call on you during the meeting:

http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Fox, Kevin M
Yeah, this is similar to what I am proposing. I think we just about have 
everything we need already.

The thread started out discussing a slightly different use case than the one 
below. The use case is processing events like:
User performs "backup database B" in the Trove UI, Trove sends event 
"backup-database" with params B to the VM, the VM responds sometime later with 
"done" "backup database B", and the Trove UI updates.

The idea is we need a unified agent to receive the messages, perform the action 
and respond back to the event.

The main issues are, as I see it:
 * The VM might be on a private neutron network only. This is desirable for 
increased security.
 * We want the agent to be minimal so as not to have to maintain much in the 
VMs. It's hard to keep all those ducks in a row.
 * There is a desire not to have the agent allow arbitrary commands to execute 
in the VM for security reasons.

Thanks,
Kevin

From: Robert Collins [robe...@robertcollins.net]
Sent: Sunday, December 15, 2013 6:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 15 December 2013 21:17, Clint Byrum  wrote:
> Excerpts from Steven Dake's message of 2013-12-14 09:00:53 -0800:
>> On 12/13/2013 01:13 PM, Clint Byrum wrote:
>> > Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:
>> >> Still, what about one more server process users will have to run? I see
>> >> the unified agent as a library which can be easily adopted by both existing and
>> >> new OpenStack projects. The need to configure and maintain a Salt server
>> >> process is a big burden for end users. That idea will definitely scare off
>> >> adoption of the agent. And at the same time what are the gains of having
>> >> that server process? I don't really see too many of them.
>> >>
>>
>> I tend to agree, I don't see a big advantage to using something like
>> salt, when the current extremely simplistic cfn-init + friends do the job.
>>
>> What specific problem does salt solve?  I guess I missed that context in
>> this long thread.
>>
>
> Yes you missed the crux of the thread. There is a need to have agents that
> are _not_ general purpose like cfn-init and friends. They specifically
> need to be narrow in focus and not give the higher level service operator
> backdoor access to everything via SSH-like control.

So, just spitballing, but:

We have a metadata service.

We want low-latency updates there (e.g. occ listening on long-poll).
Ignore implementation for now.

I assert that agent restrictiveness is really up to the agent. For
instance, an agent that accepts one command 'do something' with args
'something', is clearly not restricted.

So - mainly to tease requirements out:

How would salt be different to:

- heat-metadata with push notification of updates
- an ORC script that looks for a list of requests in post-configure.d
and executes them.

trove-agent:
 - 'backup':
  db-id: '52'
 - 'backup':
  db-id: '43'
 - 'create':
  db-id: '93'
  initial-schema: [.]

etc.

?


--
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Fox, Kevin M
The idea being discussed is using 169.254.169.254 for long-term messaging 
between a VM and some other process. For example, Trove -> TroveVM.

I guess this thread is getting too long. The details are getting lost.

Thanks,
Kevin



From: Lars Kellogg-Stedman [l...@redhat.com]
Sent: Monday, December 16, 2013 8:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On Fri, Dec 13, 2013 at 11:32:01AM -0800, Fox, Kevin M wrote:
> I hadn't thought about that use case, but that does sound like it
> would be a problem.

That, at least, is not much of a problem, because you can block access
to the metadata via a blackhole route or similar after you complete
your initial configuration:

  ip route add blackhole 169.254.169.254

This prevents access to the metadata unless someone already has root
access on the instance.

--
Lars Kellogg-Stedman  | larsks @ irc
Cloud Engineering / OpenStack  | "   "  @ twitter




Re: [openstack-dev] [oslo]: implementing olso.messaging over amqp 1.0

2013-12-16 Thread Gordon Sim

On 12/16/2013 10:37 AM, Gordon Sim wrote:

On 12/12/2013 02:14 PM, Flavio Percoco wrote:

I've a draft in my head of how the amqp 1.0 driver could be
implemented and how to map the current expectations of the messaging
layer to the new protocol.

I think a separate thread to discuss this mapping is worth it. There
are some critical areas that definitely need more discussion


I have also been looking at this, and trying to write up some simple design
notes. Some of the questions that occurred to me while doing so are:

* Use one link for all sends, with 'to' field set, or use a link for
each target?

* How to handle calls to one of a group of servers?

* Use a distinct response address per request, or allow an address to be
shared by multiple requests in conjunction with correlation id on
responses?

* Support both intermediated and direct communication? For all patterns?

The aim in my view should be to have the driver support as many
alternatives in deployment as possible without overcomplicating things,
distorting the mapping or introducing server specific extensions.

I have some notes to share if anyone is interested. I can send them to
this list or put them up on the wiki or an etherpad or something.


I've pasted these into an etherpad[1] for anyone interested. Please feel 
free to edit/augment etc, or even to query anything on this list. It's 
really just an initial draft to get the ball rolling.


--Gordon

[1] https://etherpad.openstack.org/p/olso.messaging_amqp_1.0




[openstack-dev] [Mistral] Community meeting minutes - 12/16/2013

2013-12-16 Thread Renat Akhmerov
Hello,

Thanks for joining us today in IRC, here are the links to meeting minutes and 
logs:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-16-16.00.html
Logs: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-16-16.00.log.html

Join us next time.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2013-12-16 Thread Russell Bryant
On 12/16/2013 11:45 AM, Matt Riedemann wrote:
> 1. Add a migration to change instances.uuid to non-nullable. Besides the
> obvious con of having yet another migration script, this seems the most
> straight-forward. The instance object class already defines the uuid
> field as non-nullable, so it's constrained at the objects layer, just
> not in the DB model.  Plus I don't think we'd ever have a case where
> instance.uuid is null, right?  Seems like a lot of things would break
> down if that happened.  With this option I can build on top of it for
> the DB2 migration support to add the same FKs as the other engines.

Yeah, having instance.uuid nullable doesn't seem valuable to me, so this
seems OK.

-- 
Russell Bryant



Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2013-12-16 Thread Shawn Hartsock
+1 on a migration to make uuid a non-nullable column. I advocated a few
patches back in Havana that make assumptions based on the UUID being
present and unique per instance. If it gets nulled the VMware drivers will
have breakage and I have no idea how to avoid that reasonably without
the UUID.


On Mon, Dec 16, 2013 at 11:59 AM, Russell Bryant  wrote:

> On 12/16/2013 11:45 AM, Matt Riedemann wrote:
> > 1. Add a migration to change instances.uuid to non-nullable. Besides the
> > obvious con of having yet another migration script, this seems the most
> > straight-forward. The instance object class already defines the uuid
> > field as non-nullable, so it's constrained at the objects layer, just
> > not in the DB model.  Plus I don't think we'd ever have a case where
> > instance.uuid is null, right?  Seems like a lot of things would break
> > down if that happened.  With this option I can build on top of it for
> > the DB2 migration support to add the same FKs as the other engines.
>
> Yeah, having instance.uuid nullable doesn't seem valuable to me, so this
> seems OK.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock


Re: [openstack-dev] Re-initializing or dynamically configuring cinder driver

2013-12-16 Thread Joshua Harlow
Ah, u might be able to do what u said. Try it out and see how far u can get :)

I would be interested to know how u plan on waiting for all existing operations 
to finish. Maybe it's not so hard, not really sure...

Sent from my really tiny device...

On Dec 15, 2013, at 9:43 PM, "iKhan" 
mailto:ik.ibadk...@gmail.com>> wrote:

Ok, I thought we could make cinder-volume aware of the SIGTERM call and make sure 
it terminates after cleaning up all the existing operations. If that's not possible 
then probably SIGHUP is the only solution. :(


On Mon, Dec 16, 2013 at 10:25 AM, Joshua Harlow 
mailto:harlo...@yahoo-inc.com>> wrote:
It depends on the "corruption" that u are willing to tolerate. Sigterm means 
the process just terminates, what if said process was 3/4 through some 
operation (create_volume for example)??

Personally I am willing to tolerate zero corruption, reliability and 
consistency are foundational things for me. Others may be more tolerant though, 
seems worth further discussion IMHO.


Sent from my really tiny device...

On Dec 15, 2013, at 8:39 PM, "iKhan" 
mailto:ik.ibadk...@gmail.com>> wrote:

How about sending SIGTERM to the child processes and then restarting them? I know 
this is the hard way of achieving the objective and the SIGHUP approach will handle 
it more gracefully. As you mentioned it is a major change; tentatively, can we 
use SIGTERM to achieve the objective?


On Mon, Dec 16, 2013 at 9:50 AM, Joshua Harlow 
mailto:harlo...@yahoo-inc.com>> wrote:
In your proposal does it mean that the child process will be restarted (that 
means kill -9 or sigint??). If so, without taskflow to help (or other solution) 
that means operations in progress will be corrupted/lost. That seems bad...

A SIGHUP approach could be handled more gracefully (but it does require some 
changes in the underlying codebase to do this "refresh").


Sent from my really tiny device...

On Dec 15, 2013, at 3:11 AM, "iKhan" 
mailto:ik.ibadk...@gmail.com>> wrote:

I don't know if this is being planned for Icehouse; if not, proposing an approach 
will probably help. We have seen the cinder-volume service initialization part. 
Similarly, if we can get our hands on the child processes that are running under 
the cinder-volume service, we could terminate those processes and restart them 
along with the newly added backends. That might help us achieve the target.


On Sun, Dec 15, 2013 at 12:49 PM, Joshua Harlow 
mailto:harlo...@yahoo-inc.com>> wrote:
I don't currently know of a one size fits all solution here. There was talk at 
the summit of having the cinder app respond to a SIGHUP signal and attempting 
to reload config on this signal. Dynamic reloading is tricky business 
(basically u need to unravel anything holding references to the old config 
values/affected by the old config values).

I would start with a simple trial of this if u want to do it, part of the issue 
will likely be oslo.config (can that library understand dynamic reloading?) and 
then cinder drivers themselves (perhaps u need to create a registry of drivers 
that can dynamically reload on config reloads?). Start out with something 
simple, isolate the reloading as much as u can to a single area (something like 
the mentioned registry of objects that can be reloaded when a SIGHUP arrives) 
and see how it goes.

It does seem like a nice feature if u can get it right :-)

Sent from my really tiny device...

On Dec 13, 2013, at 8:57 PM, "iKhan" 
mailto:ik.ibadk...@gmail.com>> wrote:

Hi All,

At present a cinder driver can only be configured by adding entries in the conf 
file. Once these driver-related entries are modified or added in the conf file, we 
need to restart the cinder-volume service to validate the conf entries and create a 
child process that runs in the background.

I am thinking of a way to re-initialize or dynamically configure a cinder driver, 
so that I can accept the configuration from the user on the fly and perform 
operations. I think the solution lies somewhere around "oslo.config.cfg", but I am 
still unclear about how re-initializing can be achieved.

Let me know if anyone here is aware of an approach to re-initialize or 
dynamically configure a driver.

--
Thanks,
IK




--
Thanks,
Ibad Khan
9686594607

[openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2013-12-16 Thread Matt Riedemann
I've got a blueprint [1] scheduled for icehouse-3 to add DB2 support to 
Nova. That's blocked by a patch working its way through 
sqlalchemy-migrate to add DB2 support [2] there.


I've held off pushing any nova patches up until the sqlalchemy-migrate 
DB2 support is merged (which is also blocked by 3rd party CI, which is a 
WIP of its own).


Thinking ahead though for nova, one of the main issues with DB2 in the 
migration scripts is DB2 10.5 doesn't support unique constraints over 
nullable columns.  The sqlalchemy-migrate code will instead create a 
unique index, since that's DB2's alternative.  However, since a unique 
index is not a unique constraint, the FK creation fails if the UC 
doesn't exist.


There are a lot of foreign keys in nova based on the instances.uuid 
column [3].  I need to figure out how I'm going to solve the UC problem 
for DB2 in that case.  Here are the options as I see them, looking for 
input on the best way to go.


1. Add a migration to change instances.uuid to non-nullable. Besides the 
obvious con of having yet another migration script, this seems the most 
straight-forward. The instance object class already defines the uuid 
field as non-nullable, so it's constrained at the objects layer, just 
not in the DB model.  Plus I don't think we'd ever have a case where 
instance.uuid is null, right?  Seems like a lot of things would break 
down if that happened.  With this option I can build on top of it for 
the DB2 migration support to add the same FKs as the other engines.


2. When I push up the migration script changes for DB2, I make the 
instances.uuid (and any other similar cases) work in the DB2 case only, 
i.e. if the engine is 'ibm_db_sa', then instances.uuid is non-nullable. 
 This could be done in the 160_havana migration script since moving to 
DB2 with nova is going to require a fresh migration anyway (there are 
some other older scripts that I'll have to change to work with migrating 
to DB2).  I don't particularly care for this option since it makes the 
model inconsistent between backends, but the upside is it doesn't 
require a new migration for any other backend, only DB2 - and you'd have 
to run the migrations for DB2 support anyway.


I'm trying to flesh this out early since I could start working on option 
1 at any time if it's the agreed upon solution, but looking for input 
first because I don't want to make assumptions about what everyone 
thinks here.
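
To make option 1 concrete, here is a sqlite3 stand-in for the idea (the real 
change would be a new sqlalchemy-migrate script in nova; the table below is 
trimmed to two columns for the demo). It shows that once the column is NOT NULL, 
NULL uuids are rejected, so a real unique constraint, and the FKs that depend on 
it, can be layered on top.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (id INTEGER PRIMARY KEY, uuid TEXT)')
conn.execute("INSERT INTO instances (uuid) VALUES ('abc-123')")

# SQLite cannot ALTER a column in place, so this "migration" rebuilds the
# table with the stricter definition and copies the rows across.
conn.executescript("""
    CREATE TABLE instances_new (
        id INTEGER PRIMARY KEY,
        uuid TEXT NOT NULL UNIQUE
    );
    INSERT INTO instances_new SELECT id, uuid FROM instances;
    DROP TABLE instances;
    ALTER TABLE instances_new RENAME TO instances;
""")

# NULL uuids are now rejected at the schema layer, matching the object layer.
try:
    conn.execute('INSERT INTO instances (uuid) VALUES (NULL)')
    migrated_ok = False
except sqlite3.IntegrityError:
    migrated_ok = True
```

With sqlalchemy-migrate the same tightening would be roughly 
`instances.c.uuid.alter(nullable=False)` in an upgrade() function, engines that 
support in-place ALTER would not need the rebuild.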


[1] https://blueprints.launchpad.net/nova/+spec/db2-database
[2] https://review.openstack.org/#/c/55572/
[3] 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/migrate_repo/versions/160_havana.py#L1335


--

Thanks,

Matt Riedemann




[openstack-dev] [infra] Meeting Tuesday December 17th at 19:00 UTC

2013-12-16 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday December 17th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Note: As discussed at our last meeting, with our next two meeting
dates landing on Christmas Eve and New Year's Eve, this is likely to be
the last formal team meeting of the year.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-16 Thread Lars Kellogg-Stedman
On Fri, Dec 13, 2013 at 11:32:01AM -0800, Fox, Kevin M wrote:
> I hadn't thought about that use case, but that does sound like it
> would be a problem.

That, at least, is not much of a problem, because you can block access
to the metadata via a blackhole route or similar after you complete
your initial configuration:

  ip route add blackhole 169.254.169.254 

This prevents access to the metadata unless someone already has root
access on the instance.
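
As a small illustration, an instance's configuration agent could apply that 
route once initial setup is done. This sketch only constructs and runs the same 
`ip route` command shown above; actually applying it requires root on the 
instance, and the helper names are invented for this example.

```python
import subprocess

METADATA_IP = '169.254.169.254'


def build_blackhole_cmd(ip=METADATA_IP):
    # The exact command suggested above, as an argv list.
    return ['ip', 'route', 'add', 'blackhole', ip]


def block_metadata():
    """Cut off metadata access; requires root, raises if the route exists."""
    subprocess.check_call(build_blackhole_cmd())
```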

-- 
Lars Kellogg-Stedman  | larsks @ irc
Cloud Engineering / OpenStack  | "   "  @ twitter





Re: [openstack-dev] [Nova][libvirt]when deleting instance which is in migrating state, instance files can be stay in destination node forever

2013-12-16 Thread Yaguang Tang
Could we use Taskflow to manage task state and resources for this kind of
task in Nova? Cinder has been a pilot for using Taskflow for volume backup
tasks. Is anyone interested in this suggestion, or has anyone done some
research to improve the live migration workflow?


2013/12/17 Vladik Romanovsky 

> I would block it in the API or have the API cancel the migration first.
> I don't see a reason to start an operation that is meant to fail,
> which also has a complex chain of events following its failure.
>
> Regardless of the above, I think that the suggested exception handling is
> needed in any case.
>
>
> Vladik
>
> - Original Message -
> > From: "Loganathan Parthipan" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Monday, 16 December, 2013 8:25:09 AM
> > Subject: Re: [openstack-dev] [Nova][libvirt]when deleting instance which
> is in migrating state, instance files can be
> > stay in destination node forever
> >
> >
> >
> > Isn’t just handling the exception instance_not_found enough? By this time
> > source would’ve been cleaned up. Destination VM resources will get
> cleaned
> > up by the periodic task since the VM is not associated with this host.
> Am I
> > missing something here?
> >
> >
> >
> >
> >
> >
> > From: 王宏 [mailto:w.wangho...@gmail.com]
> > Sent: 16 December 2013 11:32
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [Nova][libvirt]when deleting instance which is
> in
> > migrating state, instance files can be stay in destination node forever
> >
> >
> >
> >
> >
> > Hi all.
> >
> >
> > When I try to fix a bug: https://bugs.launchpad.net/nova/+bug/1242961 ,
> >
> >
> > I get a trouble.
> >
> >
> >
> >
> >
> > To reproduce the bug is very easy. Live migrate a vm in block_migration
> mode,
> >
> >
> > and then delelte the vm immediately.
> >
> >
> >
> >
> >
> > The reason of this bug is as follow:
> >
> >
> > 1. Because live migrate costs more time, so the vm will be deleted
> > sucessfully
> >
> >
> > before live migrate complete. And then, we will get an exception while
> live
> >
> >
> > migrating.
> >
> >
> > 2. After live migrate failed, we start to rollback. But, in the rollback
> > method
> >
> >
> > we will get or modify the info of vm from db. Because the vm has been
> deleted
> >
> >
> > already, so we will get instance_not_found exception and rollback will be
> >
> >
> > failed too.
> >
> >
> >
> >
> >
> > I have two ways to fix the bug:
> >
> >
> > i)Add check in nova-api. When try to delete a vm, we return an error
> message
> > if
> >
> >
> > the vm_state is LIVE_MIGRATING. This way is very simple, but needs to be
> > carefully
> >
> >
> > considered. I have found a related discussion:
> >
> >
> >
> http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html,
> > but
> >
> >
> > it has no result in the discussion.
> >
> >
> > ii)Before live migrate we get all the data needed by rollback method,
> and add
> > a
> >
> >
> > new rollback method. The new method will clean up resources at
> destination
> > based
> >
> >
> > on the above data (the resources at the source have already been cleaned up by
> >
> >
> > deleting).
> >
> >
> >
> >
> >
> > I have no idea which one I should choose. Or, any other ideas? :)
> >
> >
> >
> >
> >
> > Regards,
> >
> >
> > wanghong
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-16 Thread Neal, Phil
+1. Sandy has been both helpful and insightful, plus he happens to have a good 
handle on the many moving parts that make up this project. :-)

-Phil

> -Original Message-
> From: Lu, Lianhao [mailto:lianhao...@intel.com]
> Sent: Sunday, December 15, 2013 5:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to
> core team
> +1 in support.
> 
> -Lianhao
> 
> Nicolas Barcet wrote on 2013-12-13:
>> +1 in support of Sandy.  He is a proven contributor and reviewer and he
>> brings a great business vision and experience to the team.
>> 
>> Cheers,
>> Nick
>> 
>> 
>> On Wed, Dec 11, 2013 at 8:18 PM, Gordon Chung  wrote:
>> 
>> 
>>  > To that end, I would like to nominate Sandy Walsh from Rackspace to
>>  > ceilometer-core. Sandy is one of the original authors of StackTach,
>> and  > spearheaded the original stacktach-ceilometer integration. He
>> has been > instrumental in many of my codes reviews, and has
>> contributed much of the  > existing event storage and querying code.
>> 
>> 
>>  +1 in support of Sandy.  the Event work he's led in Ceilometer has
>> been an important feature and i think he has some valuable ideas.
>> 
>>  cheers, gordon chung, openstack, ibm software standards
>>  ___ OpenStack-dev 
>> mailing
>> list OpenStack-dev@lists.openstack.org
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][libvirt]when deleting instance which is in migrating state, instance files can be stay in destination node forever

2013-12-16 Thread Vladik Romanovsky
I would block it in the API or have the API cancelling the migration first. 
I don't see a reason to start an operation that is meant to fail, which 
also has a complex chain of events following its failure.

Regardless of the above, I think that the suggested exception handling is 
needed in any case.
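A minimal sketch of the exception-handling approach discussed in this thread — tolerating an already-deleted instance during rollback and still cleaning up the destination. All names here (`InstanceNotFound`, `rollback_live_migration`, the in-memory "db") are hypothetical stand-ins for illustration, not nova's actual internals:

```python
# Sketch: rollback of a failed live migration whose instance was deleted
# mid-flight. The real nova code paths differ; this only illustrates the
# control flow being proposed.

class InstanceNotFound(Exception):
    """Raised when the instance row has already been deleted from the DB."""

def load_instance(db, uuid):
    # Stand-in for a DB lookup that raises when the row is gone.
    try:
        return db[uuid]
    except KeyError:
        raise InstanceNotFound(uuid)

def rollback_live_migration(db, uuid, dest_files):
    """Roll back normally; if the instance is gone, only clean the dest."""
    try:
        instance = load_instance(db, uuid)
        instance["task_state"] = None  # normal rollback path
    except InstanceNotFound:
        # The VM was deleted during migration: nothing to restore in the
        # DB, but files left on the destination must still be removed.
        dest_files.clear()
        return "cleaned_dest_only"
    return "rolled_back"
```

The point of the sketch is that the destination cleanup no longer depends on the instance record still existing, which is exactly the failure mode in the bug report.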


Vladik

- Original Message -
> From: "Loganathan Parthipan" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, 16 December, 2013 8:25:09 AM
> Subject: Re: [openstack-dev] [Nova][libvirt]when deleting instance which is 
> in migrating state, instance files can be
> stay in destination node forever
> 
> 
> 
> Isn’t just handling the exception instance_not_found enough? By this time
> source would’ve been cleaned up. Destination VM resources will get cleaned
> up by the periodic task since the VM is not associated with this host. Am I
> missing something here?
> 
> 
> 
> 
> 
> 
> From: 王宏 [mailto:w.wangho...@gmail.com]
> Sent: 16 December 2013 11:32
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Nova][libvirt]when deleting instance which is in
> migrating state, instance files can be stay in destination node forever
> 
> 
> 
> 
> 
> Hi all.
> 
> 
> When I try to fix a bug: https://bugs.launchpad.net/nova/+bug/1242961 ,
> 
> 
> I ran into trouble.
> 
> 
> 
> 
> 
> To reproduce the bug is very easy. Live migrate a vm in block_migration mode,
> 
> 
> and then delete the vm immediately.
> 
> 
> 
> 
> 
> The reason for this bug is as follows:
> 
> 
> 1. Because live migration takes more time, the vm will be deleted
> successfully
> 
> 
> before the live migration completes. And then, we will get an exception while live
> 
> 
> migrating.
> 
> 
> 2. After the live migration fails, we start to roll back. But, in the rollback
> method
> 
> 
> we will get or modify the info of vm from db. Because the vm has been deleted
> 
> 
> already, so we will get instance_not_found exception and rollback will be
> 
> 
> failed too.
> 
> 
> 
> 
> 
> I have two ways to fix the bug:
> 
> 
> i)Add check in nova-api. When try to delete a vm, we return an error message
> if
> 
> 
> the vm_state is LIVE_MIGRATING. This way is very simple, but needs to be
> carefully
> 
> 
> considered. I have found a related discussion:
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html ,
> but
> 
> 
> it has no result in the discussion.
> 
> 
> ii)Before live migrate we get all the data needed by rollback method, and add
> a
> 
> 
> new rollback method. The new method will clean up resources at destination
> based
> 
> 
> on the above data (the resources at the source have already been cleaned up by
> 
> 
> deleting).
> 
> 
> 
> 
> 
> I have no idea which one I should choose. Or, any other ideas? :)
> 
> 
> 
> 
> 
> Regards,
> 
> 
> wanghong
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-16 Thread Gary Kotton


On 12/16/13 5:25 PM, "Daniel P. Berrange"  wrote:

>On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
>> Hi,
>> At the moment the administrator is able to retrieve diagnostics for a
>>running VM. Currently the implementation is very loosely defined, that
>>is, each driver returns whatever they have to return. This is
>>problematic in a number of respects:
>> 
>>  1.  The tempest tests were written specifically for one driver and
>>break with all other drivers (the test was removed to prevent this ­ bug
>>1240043)
>>  2.  An admin is unable to write tools that may work with a hybrid cloud
>>  3.  Adding support for get_diagnostics for drivers that do not support
>>is painful
>
>Technically 3 is currently easy, since currently you don't need to care
>about what the other drivers have done - you can return any old info
>for your new driver's get_diagnostics API impl ;-)

To be honest it was not easy at all.

>
>Seriously though, I agree the current API is a big trainwreck.
>
>> I'd like to propose the following for the V3 API (we will not touch V2
>> in case operators have applications that are written against this ­ this
>> may be the case for libvirt or xen. The VMware API support was added
>> in I1):
>> 
>>  1.  We formalize the data that is returned by the API [1]
>
>Before we debate what standard data should be returned we need
>detail of exactly what info the current 3 virt drivers return.
>IMHO it would be better if we did this all in the existing wiki
>page associated with the blueprint, rather than etherpad, so it
>serves as a permanent historical record for the blueprint design.

I will add this to the wiki. Not sure what this will achieve, other than
crystallizing the fact that we need to have common data returned.

>
>While we're doing this I think we should also consider whether
>the 'get_diagnostics' API is fit for purpose more generally.
>eg currently it is restricted to administrators. Some, if
>not all, of the data libvirt returns is relevant to the owner
>of the VM but they can not get at it.

This is configurable. The default is for an admin user. This is in the
policy.json file - 
https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L202
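For reference, the Havana-era policy rule gating this API looks roughly like the fragment below; the exact key name is quoted from memory and should be checked against the policy.json linked above:

```json
{
    "compute_extension:server_diagnostics": "rule:admin_api"
}
```

Operators can relax the `rule:admin_api` value to expose the data beyond administrators.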

>
>For a cloud administrator it might be argued that the current
>API is too inefficient to be useful in many troubleshooting
>scenarios since it requires you to invoke it once per instance
>if you're collecting info on a set of guests, eg all VMs on
>one host. It could be that cloud admins would be better
>served by an API which returned info for all VMs on a host
>at once, if they're monitoring say, I/O stats across VM
>disks to identify one that is causing I/O trouble ? IOW, I
>think we could do with better identifying the usage scenarios
>for this API if we're to improve its design / impl.

Host diagnostics would be a nice feature to have. I do not think that it
is part of the scope of what we want to achieve here but I will certainly
be happy to work on this afterwards.

>
>
>>  2.  We enable the driver to add extra information that will assist the
>>administrators in troubleshooting problems for VM's
>> 
>> I have proposed a BP for this -
>>https://blueprints.launchpad.net/nova/+spec/diagnostics-namespace (I'd like
>>to change the name to v3-api-diagnostics – which is more apt)
>
>The bp rename would be a good idea.
>
>> [1] https://etherpad.openstack.org/p/vm-diagnostics
>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o- http://virt-manager.org :|
>|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-16 Thread Daniel P. Berrange
On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:
> On 16 December 2013 15:25, Daniel P. Berrange  wrote:
> > On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
> >> I'd like to propose the following for the V3 API (we will not touch V2
> >> in case operators have applications that are written against this – this
> >> may be the case for libvirt or xen. The VMware API support was added
> >> in I1):
> >>
> >>  1.  We formalize the data that is returned by the API [1]
> >
> > Before we debate what standard data should be returned we need
> > detail of exactly what info the current 3 virt drivers return.
> > IMHO it would be better if we did this all in the existing wiki
> > page associated with the blueprint, rather than etherpad, so it
> > serves as a permanent historical record for the blueprint design.
> 
> +1
> 
> > While we're doing this I think we should also consider whether
> > the 'get_diagnostics' API is fit for purpose more generally.
> > eg currently it is restricted to administrators. Some, if
> > not all, of the data libvirt returns is relevant to the owner
> > of the VM but they can not get at it.
> 
> Ceilometer covers that ground, we should ask them about this API.

If we consider what is potentially in scope for ceilometer and
subtract that from what the libvirt get_diagnostics impl currently
returns, you pretty much end up with the empty set. This might cause
us to question if 'get_diagnostics' should exist at all from the
POV of the libvirt driver's impl. Perhaps vmware/xen return data
that is out of scope for ceilometer ?

> > For a cloud administrator it might be argued that the current
> > API is too inefficient to be useful in many troubleshooting
> > scenarios since it requires you to invoke it once per instance
> > if you're collecting info on a set of guests, eg all VMs on
> > one host. It could be that cloud admins would be better
> > served by an API which returned info for all VMs on a host
> > at once, if they're monitoring say, I/O stats across VM
> > disks to identify one that is causing I/O trouble ? IOW, I
> > think we could do with better identifying the usage scenarios
> > for this API if we're to improve its design / impl.
> 
> I like the API that helps you dig into info for a specific host that
> other system highlight as problematic.
> You can do things that could be expensive to compute, but useful for
> troubleshooting.

If things get expensive to compute, then it may well be preferrable to
have separate APIs for distinct pieces of "interesting" diagnostic
data. eg If they only care about one particular thing, they don't want
to trigger expensive computations of things they don't care about seeing.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-16 Thread Will Foster

On 13/12/13 19:06 +1300, Robert Collins wrote:

On 13 December 2013 06:24, Will Foster  wrote:


I just wanted to add a few thoughts:


Thank you!


For some comparative information here "from the field" I work
extensively on deployments of large OpenStack implementations,
most recently with a ~220 node / 9 rack deployment (scaling up to 42 racks / 1024
nodes soon).  My primary role is of a DevOps/sysadmin nature, and not a
specific development area so rapid provisioning/tooling/automation is an
area I almost exclusively work within (mostly using API-driven
using Foreman/Puppet).  The infrastructure our small team designs/builds
supports our development and business.

I am the target user base you'd probably want to cater to.


Absolutely!


I can tell you the philosophy and mechanics of Tuskar/OOO are great,
something I'd love to start using extensively but there are some needed
aspects in the areas of control that I feel should be added (though arguably
less for me and more for my ilk who are looking to expand their OpenStack
footprint).

* ability to 'preview' changes going to the scheduler


What does this give you? How detailed a preview do you need? What
information is critical there? Have you seen the proposed designs for
a heat template preview feature - would that be sufficient?


Thanks for the reply.  Preview-wise it'd be useful to see node
allocation prior to deployment - nothing too in-depth.
I have not seen the heat template preview features, are you referring
to the YAML templating[1] or something else[2]?  I'd like to learn
more.

[1] -
http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
[2] - https://github.com/openstack/heat-templates




* ability to override/change some aspects within node assignment


What would this be used to do? How often do those situations turn up?
Whats the impact if you can't do that?


One scenario might be that autodiscovery does not pick up an available
node in your pool of resources, or detects incorrectly - you could
manually change things as you like it.  Another (more common)
scenario is that you don't have an isolated, flat network with which
to deploy and nodes are picked that you do not want included in the
provisioning - you could remove those from the set of resources prior
to launching overcloud creation.  The impact would be that the tooling
would seem inflexible to those lacking a thoughtfully prepared 
network/infrastructure, or more commonly in cases where the existing

network design is too inflexible the usefulness and quick/seamless
provisioning benefits would fall short.




* ability to view at least minimal logging from within Tuskar UI


Logging of what - the deployment engine? The heat event-log? Nova
undercloud logs? Logs from the deployed instances? If it's not there
in V1, but you can get, or already have credentials for the [instances
that hold the logs that you wanted] would that be a big adoption
blocker, or just a nuisance?



Logging of the deployment engine status during the bootstrapping
process initially, and some rudimentary node success/failure
indication.  It should be simplistic enough to not rival existing monitoring/log
systems but at least provide deployment logs as the overcloud is being
built and a general node/health 'check-in' that it's complete.

Afterwards as you mentioned the logs are available on the deployed
systems.  Think of it as providing some basic written navigational signs 
for people crossing a small bridge before they get to the highway,

there's continuity from start -> finish and a clear sense of what's
occurring.  From my perspective, absence of this type of verbosity may
impede adoption of new users (who are used to this type of
information with deployment tooling).




Here's the main reason - most new adopters of OpenStack/IaaS are going to be
running legacy/mixed hardware and while they might have an initiative to
explore and invest and even a decent budget most of them are not going to
have
completely identical hardware, isolated/flat networks and things set
aside in such a way that blind auto-discovery/deployment will just work all
the time.


Thats great information (and something I reasonably well expected, to
a degree). We have a hard dependency on no wildcard DHCP servers in
the environment (or we can't deploy). Autodiscovery is something we
don't have yet, but certainly debugging deployment failures is a very
important use case and one we need to improve both at the plumbing
layer and in the stories around it in the UI.


There will be a need to sometimes adjust, and those coming from a more
vertically-scaling infrastructure (most large orgs.) will not have
100% matching standards in place of vendor, machine spec and network design
which may make Tuskar/OOO seem inflexible and 'one-way'.  This may just be a
carry-over or fear of the old ways of deployment but nonetheless it
is present.


I'm not sure what you mean by matching standards here :). Ironic is
designed to support

Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Russell Bryant
On 12/16/2013 10:39 AM, Daniel P. Berrange wrote:
> On Mon, Dec 16, 2013 at 04:18:52PM +0100, Daniel Kuffner wrote:
>> Hi Russell,
>>
>> You actually propose to extend the whole nova stack to support
>> environment variables. Would any other driver benefit from this API
>> extension?
>>
>> Is that what you imagine?
>> nova --env SQL_URL=postgres://user:password --image 
>>
>> Regarding the discussion you mentioned. Are there any public resources
>> to read. I kind of missed it. Most likely it was before I was part of
>> this community :)
> 
> With glance images we have a way to associate arbitrary metadata
> attributes with the image. I could see using this mechanism to
> associate some default set of environment variables.
> 
> e.g. use an 'env_' prefix for glance image attributes.
> 
> We've got a couple of cases now where we want to override these
> same things on a per-instance basis. Kernel command line args
> are one other example. Other hardware overrides like disk/net device
> types are another possibility.
> 
> Rather than invent new extensions for each, I think we should
> have a way to pass arbitrary attributes along with the boot
> API call, that a driver would handle in much the same way as
> they do for glance image properties. Basically think of it as
> a way to customize any image property per instance created.

That's a pretty nice idea.  I like it.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Daniel P. Berrange
On Mon, Dec 16, 2013 at 04:18:52PM +0100, Daniel Kuffner wrote:
> Hi Russell,
> 
> You actually propose to extend the whole nova stack to support
> environment variables. Would any other driver benefit from this API
> extension?
> 
> Is that what you imagine?
> nova --env SQL_URL=postgres://user:password --image 
> 
> Regarding the discussion you mentioned. Are there any public resources
> to read. I kind of missed it. Most likely it was before I was part of
> this community :)

With glance images we have a way to associate arbitrary metadata
attributes with the image. I could see using this mechanism to
associate some default set of environment variables.

e.g. use an 'env_' prefix for glance image attributes.

We've got a couple of cases now where we want to override these
same things on a per-instance basis. Kernel command line args
are one other example. Other hardware overrides like disk/net device
types are another possibility.

Rather than invent new extensions for each, I think we should
have a way to pass arbitrary attributes along with the boot
API call, that a driver would handle in much the same way as
they do for glance image properties. Basically think of it as
a way to customize any image property per instance created.
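A rough sketch of how a driver could consume such 'env_'-prefixed image properties, with per-instance boot attributes taking precedence over the image-level defaults. The prefix, function name, and data shapes are illustrative assumptions only — this is not nova or glance code:

```python
# Sketch: merge 'env_'-prefixed glance image properties (defaults) with
# per-instance attributes passed along with the boot call (overrides).
# ENV_PREFIX and build_environment are hypothetical names.

ENV_PREFIX = "env_"

def build_environment(image_properties, instance_overrides=None):
    """Return the environment dict a container driver would inject."""
    # Image-level defaults: strip the prefix, keep the value.
    env = {
        key[len(ENV_PREFIX):]: value
        for key, value in image_properties.items()
        if key.startswith(ENV_PREFIX)
    }
    # Per-instance attributes win over the defaults baked into the image.
    env.update(instance_overrides or {})
    return env
```

This matches the design intent above: one generic attribute-passing mechanism, with each driver interpreting the keys it cares about.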

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Russell Bryant
On 12/16/2013 10:18 AM, Daniel Kuffner wrote:
> Hi Russell,
> 
> You actually propose to extend the whole nova stack to support
> environment variables. Would any other driver benefit from this API
> extension?
> 
> Is that what you imagine?
> nova --env SQL_URL=postgres://user:password --image 

Yes.

> Regarding the discussion you mentioned. Are there any public resources
> to read. I kind of missed it. Most likely it was before I was part of
> this community :)

It started here back in November:


http://lists.openstack.org/pipermail/openstack-dev/2013-November/019637.html

and then there have been a few messages on that thread this month, too.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-16 Thread John Garbutt
On 16 December 2013 15:25, Daniel P. Berrange  wrote:
> On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
>> Hi,
>> At the moment the administrator is able to retrieve diagnostics for a 
>> running VM. Currently the implementation is very loosely defined, that is, 
>> each driver returns whatever they have to return. This is problematic in a 
>> number of respects:
>>
>>  1.  The tempest tests were written specifically for one driver and break 
>> with all other drivers (the test was removed to prevent this – bug 1240043)
>>  2.  An admin is unable to write tools that may work with a hybrid cloud
>>  3.  Adding support for get_diagnostics for drivers that do not support is 
>> painful
>
> Technically 3 is currently easy, since currently you don't need to care
> about what the other drivers have done - you can return any old info
> for your new driver's get_diagnostics API impl ;-)
>
> Seriously though, I agree the current API is a big trainwreck.

+1

>> I'd like to propose the following for the V3 API (we will not touch V2
>> in case operators have applications that are written against this – this
>> may be the case for libvirt or xen. The VMware API support was added
>> in I1):
>>
>>  1.  We formalize the data that is returned by the API [1]
>
> Before we debate what standard data should be returned we need
> detail of exactly what info the current 3 virt drivers return.
> IMHO it would be better if we did this all in the existing wiki
> page associated with the blueprint, rather than etherpad, so it
> serves as a permanent historical record for the blueprint design.

+1

> While we're doing this I think we should also consider whether
> the 'get_diagnostics' API is fit for purpose more generally.
> eg currently it is restricted to administrators. Some, if
> not all, of the data libvirt returns is relevant to the owner
> of the VM but they can not get at it.

Ceilometer covers that ground, we should ask them about this API.

> For a cloud administrator it might be argued that the current
> API is too inefficient to be useful in many troubleshooting
> scenarios since it requires you to invoke it once per instance
> if you're collecting info on a set of guests, eg all VMs on
> one host. It could be that cloud admins would be better
> served by an API which returned info for all VMs on a host
> at once, if they're monitoring say, I/O stats across VM
> disks to identify one that is causing I/O trouble ? IOW, I
> think we could do with better identifying the usage scenarios
> for this API if we're to improve its design / impl.

I like the API that helps you dig into info for a specific host that
other system highlight as problematic.
You can do things that could be expensive to compute, but useful for
troubleshooting.

But you are right, we should think about it first.

>
>>  2.  We enable the driver to add extra information that will assist the 
>> administrators in troubleshooting problems for VM's
>>

I think we need to version this information, if possible. I don't like
the idea of the driver just changing the public API as it wishes.

>> I have proposed a BP for this - 
>> https://blueprints.launchpad.net/nova/+spec/diagnostics-namespace (I'd like 
>> to change the name to v3-api-diagnostics – which is more apt)
>
> The bp rename would be a good idea.
+1

>> [1] https://etherpad.openstack.org/p/vm-diagnostics

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Daniel Kuffner
That would be great. I also have a couple of change requests waiting
for approval. It would be good to know if they have any relevance in
the future.

https://review.openstack.org/#/c/59824/
https://review.openstack.org/#/c/62182/
https://review.openstack.org/#/c/62183/
https://review.openstack.org/#/c/62220/

Daniel

On Mon, Dec 16, 2013 at 4:17 PM, Russell Bryant  wrote:
> On 12/16/2013 10:12 AM, Chuck Short wrote:
>> I have something that is pushing for it to stay in nova (at least the
>> compute drivers). I should have a gerrit branch for people to review soon.
>
> OK.  Do you have any design notes for whatever you're proposing?  That
> would probably be easier to review and discuss.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-16 Thread Daniel P. Berrange
On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
> Hi,
> At the moment the administrator is able to retrieve diagnostics for a running 
> VM. Currently the implementation is very loosely defined, that is, each 
> driver returns whatever they have to return. This is problematic in a number 
> of respects:
> 
>  1.  The tempest tests were written specifically for one driver and break 
> with all other drivers (the test was removed to prevent this – bug 1240043)
>  2.  An admin is unable to write tools that may work with a hybrid cloud
>  3.  Adding support for get_diagnostics for drivers that do not support is 
> painful

Technically 3 is currently easy, since currently you don't need to care
about what the other drivers have done - you can return any old info
for your new driver's get_diagnostics API impl ;-)

Seriously though, I agree the current API is a big trainwreck.

> I'd like to propose the following for the V3 API (we will not touch V2
> in case operators have applications that are written against this – this
> may be the case for libvirt or xen. The VMware API support was added
> in I1):
> 
>  1.  We formalize the data that is returned by the API [1]

Before we debate what standard data should be returned we need
detail of exactly what info the current 3 virt drivers return.
IMHO it would be better if we did this all in the existing wiki
page associated with the blueprint, rather than etherpad, so it
serves as a permanent historical record for the blueprint design.
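To make the "formalize the data" idea concrete, a standardized payload could look something like the sketch below, with a common core every driver must fill in plus a free-form section for driver extras. The field names are illustrative assumptions, not the blueprint's agreed schema:

```python
# Illustrative only: one possible shape for a formalized get_diagnostics
# response. The field names are assumptions for discussion, not the
# schema the blueprint will settle on.

from dataclasses import dataclass, field, asdict

@dataclass
class Diagnostics:
    state: str                # e.g. 'running'
    cpu_time_ns: int          # cumulative guest CPU time
    memory_used_kb: int
    disk_read_bytes: int
    disk_write_bytes: int
    net_rx_bytes: int
    net_tx_bytes: int
    # Escape hatch for per-driver extras (point 2 of the proposal).
    driver_specific: dict = field(default_factory=dict)

def to_api_response(diag):
    """Serialize to the dict a REST layer would return as JSON."""
    return asdict(diag)
```

A fixed core like this is what would let tempest assert on the same keys across libvirt, xen, and vmware, while `driver_specific` keeps room for troubleshooting extras.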

While we're doing this I think we should also consider whether
the 'get_diagnostics' API is fit for purpose more generally. 
eg currently it is restricted to administrators. Some, if
not all, of the data libvirt returns is relevant to the owner
of the VM but they can not get at it.

For a cloud administrator it might be argued that the current
API is too inefficient to be useful in many troubleshooting
scenarios since it requires you to invoke it once per instance
if you're collecting info on a set of guests, eg all VMs on
one host. It could be that cloud admins would be better
served by an API which returned info for all VMs on a host
at once, if they're monitoring say, I/O stats across VM
disks to identify one that is causing I/O trouble ? IOW, I
think we could do with better identifying the usage scenarios
for this API if we're to improve its design / impl.


>  2.  We enable the driver to add extra information that will assist the 
> administrators in troubleshooting problems for VM's
> 
> I have proposed a BP for this - 
> https://blueprints.launchpad.net/nova/+spec/diagnostics-namespace (I'd like 
> to change the name to v3-api-diagnostics – which is more apt)

The bp rename would be a good idea.

> [1] https://etherpad.openstack.org/p/vm-diagnostics

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Third working group meeting on language packs today

2013-12-16 Thread Clayton Coleman
Hi,

We will hold our third Git Integration working group meeting on IRC in #solum 
on Monday, December 16, 2013 1700 UTC / 0900 PST.  Previous meeting notes are 
here [4]

Agenda for today's meeting:
* Administrative:
* Topics:
* Discuss lang-pack-examples spec for inclusion into M1 [1]
* Discuss specify-lang-pack proposal [2] 
  * Feedback on etherpad items created last week [3]
* Discuss alternative names for language-pack and set up a poll
* General discussion

[1] https://blueprints.launchpad.net/solum/+spec/lang-pack-examples
[2] https://blueprints.launchpad.net/solum/+spec/specify-lang-pack
[3] https://etherpad.openstack.org/p/Solum-Language-pack-json-format
[4] http://irclogs.solum.io/2013/solum.2013-12-09-17.01.html



Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-16 Thread Jiri Tomasek

On 12/16/2013 03:32 PM, Jaromir Coufal wrote:

On 2013/16/12 14:03, Matthias Runge wrote:

On 12/13/2013 03:08 PM, Ladislav Smola wrote:

Horizoners,

As discussed in TripleO and Horizon meetings, we are proposing to move
Tuskar UI under the Horizon umbrella. Since we are building our UI
solution on top of Horizon, we think this is a good fit. It will allow
us to get feedback and reviews from the appropriate group of 
developers.



I don't think we really disagree here.

My main concern would be more: what do we get, if we make up another
project under the umbrella of horizon? I mean, what does that mean at 
all?


My proposal would be to send patches directly to horizon. As discussed
in last week's horizon meeting, tuskar UI would become integrated in
Horizon, but disabled by default. This would enable a faster integration
in Horizon and would reduce the overhead of creating a separate
repository, installation instructions, packaging etc. etc.

 From the horizon side: we would get some new contributors (and hopefully
reviewers), which is very much appreciated.

Matthias


This is an important note. From an information architecture and user 
interaction point of view, I don't think it makes sense to keep all 
three tabs visible together (Project, Admin, Infrastructure). 
There are a lot of reasons, but the main points:


* Infrastructure itself is an undercloud concept running in a different 
instance of Horizon.


* Users dealing with deployment and infrastructure management are not 
the users of the OpenStack UI / Dashboard. It is a different set of users. 
So it doesn't make sense to have a giant application which provides 
each and every possible feature. I think we need to stay focused.


So by default, I would say that either the Project + Admin tabs should 
exist together, or Infrastructure alone, but never all three together. 
So when Matthias says 'disabled by default', I would mean completely 
hidden from the user; if a user wants to use Infrastructure management, 
he can enable it in a different horizon instance, but it will be the 
only visible tab for him. So it will be a sort of separate application, 
but still running on top of Horizon.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thanks for pointing this out. In Horizon you can easily decide which 
dashboards to show, so the Infrastructure management Horizon instance 
can have the Project and Admin dashboards disabled.


I think it has been discussed that some panels of the Admin dashboard 
will be required for infrastructure management. We can solve this by 
also adding those selected Admin panels to the Infrastructure dashboard.
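For illustration, Horizon's settings already let a deployer choose which dashboards a given instance shows; the snippet below is a sketch of that mechanism in a local_settings.py override (the dashboard slugs and exact keys are assumptions, not a tested Tuskar configuration):

```python
# Sketch: the undercloud Horizon instance shows only the Infrastructure
# dashboard, leaving Project/Admin to the overcloud instance.
HORIZON_CONFIG = {
    "dashboards": ("infrastructure",),        # 'project'/'admin' omitted
    "default_dashboard": "infrastructure",
}
```

The overcloud instance would keep the usual `("project", "admin", "settings")` tuple instead.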


Jirka



Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Daniel Kuffner
Hi Chuck,

yes please, I'm eager to see what you have. :)

Daniel

On Mon, Dec 16, 2013 at 4:12 PM, Chuck Short  wrote:
> Hi Russell,
>
> I have something that is pushing for it to stay in nova (at least the
> compute drivers). I should have a gerrit branch for people to review soon.
>
> Regards
> chuck
>
>
> On Mon, Dec 16, 2013 at 10:07 AM, Russell Bryant  wrote:
>>
>> On 12/16/2013 09:27 AM, Daniel Kuffner wrote:
>> > Hi All,
>> >
> > I have submitted a new blueprint which addresses a common pattern
>> > in the docker world. A usual pattern in the docker world is to use
>> > environment variables to configure a container.
>> >
>> > docker run -e "SQL_URL=postgres://user:password@/db" my-app
>> >
>> > The nova docker driver doesn't support to set any environment
>> > variables. To work around this issue I used cloud-init which works
>> > fine. But this approach has of course the drawback that a) I have to
>> > install the cloud init service. and b) my docker container doesn't
>> > work outside of openstack.
>> >
>> > I propose to allow a user to set docker environment variables via nova
>> > instance metadata. The metadata key should have a prefix like ENV_
>> > which can be used to determine all environment variables. The prefix
> > should be removed and the remaining key and value will be injected.
>> >
>> > The metadata can unfortunately not be set in horizon but can be used
>> > from the nova command line tool and from heat. Example heat:
>> >
>> > myapp:
>> > Type: OS::Nova::Server
>> > Properties:
>> >   flavor: m1.small
>> >   image: my-app:latest
>> >   meta-data:
>> > - ENV_SQL_URL: postgres://user:password@/db
>> > - ENV_SOMETHING_ELSE: Value
>> >
>> >
>> > Let me know what you think about that.
>> >
>> > Blueprint:
>> > https://blueprints.launchpad.net/nova/+spec/docker-env-via-meta-data
>>
>> Thanks for starting the discussion.  More people should do this for
>> their blueprints.  :-)
>>
>> One of the things we should be striving for is to provide as consistent
>> of an experience as we can across drivers.  Right now, we have the
>> metadata service and config drive, and neither of those are driver
>> specific.  In the case of config drive, whether it's used or not is
>> exposed through the API.  As you point out, the meta-data service does
>> technically work with the docker driver.
>>
>> I don't think we should support environment variables like this
>> automatically.  Instead, I think it would be more appropriate to add an
>> API extension for specifying env vars.  That way the behavior is more
>> explicit and communicated through the API.  The env vars would be passed
>> through all of the appropriate plumbing and down to drivers that are
>> able to support it.
>>
>> This is all also assuming that containers support is staying in Nova and
>> not a new service.  That discussion seems to have stalled.  Is anyone
>> still pushing on that?  Any updates?
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Daniel Kuffner
Hi Russell,

You actually propose to extend the whole nova stack to support
environment variables. Would any other driver benefit from this API
extension?

Is that what you imagine?
nova --env SQL_URL=postgres://user:password --image 

Regarding the discussion you mentioned: are there any public resources
to read? I kind of missed it. Most likely it was before I was part of
this community :)

thanks,
Daniel

On Mon, Dec 16, 2013 at 4:07 PM, Russell Bryant  wrote:
> On 12/16/2013 09:27 AM, Daniel Kuffner wrote:
>> Hi All,
>>
>> I have submitted a new blueprint which addresses a common pattern
>> in the docker world. A usual pattern in the docker world is to use
>> environment variables to configure a container.
>>
>> docker run -e "SQL_URL=postgres://user:password@/db" my-app
>>
>> The nova docker driver doesn't support to set any environment
>> variables. To work around this issue I used cloud-init which works
>> fine. But this approach has of course the drawback that a) I have to
>> install the cloud init service. and b) my docker container doesn't
>> work outside of openstack.
>>
>> I propose to allow a user to set docker environment variables via nova
>> instance metadata. The metadata key should have a prefix like ENV_
>> which can be used to determine all environment variables. The prefix
>> should be removed and the remaining key and value will be injected.
>>
>> The metadata can unfortunately not be set in horizon but can be used
>> from the nova command line tool and from heat. Example heat:
>>
>> myapp:
>> Type: OS::Nova::Server
>> Properties:
>>   flavor: m1.small
>>   image: my-app:latest
>>   meta-data:
>> - ENV_SQL_URL: postgres://user:password@/db
>> - ENV_SOMETHING_ELSE: Value
>>
>>
>> Let me know what you think about that.
>>
>> Blueprint: 
>> https://blueprints.launchpad.net/nova/+spec/docker-env-via-meta-data
>
> Thanks for starting the discussion.  More people should do this for
> their blueprints.  :-)
>
> One of the things we should be striving for is to provide as consistent
> of an experience as we can across drivers.  Right now, we have the
> metadata service and config drive, and neither of those are driver
> specific.  In the case of config drive, whether it's used or not is
> exposed through the API.  As you point out, the meta-data service does
> technically work with the docker driver.
>
> I don't think we should support environment variables like this
> automatically.  Instead, I think it would be more appropriate to add an
> API extension for specifying env vars.  That way the behavior is more
> explicit and communicated through the API.  The env vars would be passed
> through all of the appropriate plumbing and down to drivers that are
> able to support it.
>
> This is all also assuming that containers support is staying in Nova and
> not a new service.  That discussion seems to have stalled.  Is anyone
> still pushing on that?  Any updates?
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Russell Bryant
On 12/16/2013 10:12 AM, Chuck Short wrote:
> I have something that is pushing for it to stay in nova (at least the
> compute drivers). I should have a gerrit branch for people to review soon.

OK.  Do you have any design notes for whatever you're proposing?  That
would probably be easier to review and discuss.

-- 
Russell Bryant



Re: [openstack-dev] [nova] time for a new major network rpc api version?

2013-12-16 Thread Russell Bryant
On 12/15/2013 05:12 PM, Robert Collins wrote:
> That said, doing anything to the network RPC API seems premature until
> the Neutron question is resolved.

This.

I've been pretty much ignoring this API since it has been frozen and
"almost deprecated" for a long time.  My plan was to revisit the status
of nova-network after the release of icehouse-2.

-- 
Russell Bryant



Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Chuck Short
Hi Russell,

I have something that is pushing for it to stay in nova (at least the
compute drivers). I should have a gerrit branch for people to review soon.

Regards
chuck


On Mon, Dec 16, 2013 at 10:07 AM, Russell Bryant  wrote:

> On 12/16/2013 09:27 AM, Daniel Kuffner wrote:
> > Hi All,
> >
> > I have submitted a new blueprint which addresses a common pattern
> > in the docker world. A usual pattern in the docker world is to use
> > environment variables to configure a container.
> >
> > docker run -e "SQL_URL=postgres://user:password@/db" my-app
> >
> > The nova docker driver doesn't support to set any environment
> > variables. To work around this issue I used cloud-init which works
> > fine. But this approach has of course the drawback that a) I have to
> > install the cloud init service. and b) my docker container doesn't
> > work outside of openstack.
> >
> > I propose to allow a user to set docker environment variables via nova
> > instance metadata. The metadata key should have a prefix like ENV_
> > which can be used to determine all environment variables. The prefix
> > should be removed and the remaining key and value will be injected.
> >
> > The metadata can unfortunately not be set in horizon but can be used
> > from the nova command line tool and from heat. Example heat:
> >
> > myapp:
> > Type: OS::Nova::Server
> > Properties:
> >   flavor: m1.small
> >   image: my-app:latest
> >   meta-data:
> > - ENV_SQL_URL: postgres://user:password@/db
> > - ENV_SOMETHING_ELSE: Value
> >
> >
> > Let me know what you think about that.
> >
> > Blueprint:
> https://blueprints.launchpad.net/nova/+spec/docker-env-via-meta-data
>
> Thanks for starting the discussion.  More people should do this for
> their blueprints.  :-)
>
> One of the things we should be striving for is to provide as consistent
> of an experience as we can across drivers.  Right now, we have the
> metadata service and config drive, and neither of those are driver
> specific.  In the case of config drive, whether it's used or not is
> exposed through the API.  As you point out, the meta-data service does
> technically work with the docker driver.
>
> I don't think we should support environment variables like this
> automatically.  Instead, I think it would be more appropriate to add an
> API extension for specifying env vars.  That way the behavior is more
> explicit and communicated through the API.  The env vars would be passed
> through all of the appropriate plumbing and down to drivers that are
> able to support it.
>
> This is all also assuming that containers support is staying in Nova and
> not a new service.  That discussion seems to have stalled.  Is anyone
> still pushing on that?  Any updates?
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Russell Bryant
On 12/16/2013 09:27 AM, Daniel Kuffner wrote:
> Hi All,
> 
> I have submitted a new blueprint which addresses a common pattern
> in the docker world. A usual pattern in the docker world is to use
> environment variables to configure a container.
> 
> docker run -e "SQL_URL=postgres://user:password@/db" my-app
> 
> The nova docker driver doesn't support to set any environment
> variables. To work around this issue I used cloud-init which works
> fine. But this approach has of course the drawback that a) I have to
> install the cloud init service. and b) my docker container doesn't
> work outside of openstack.
> 
> I propose to allow a user to set docker environment variables via nova
> instance metadata. The metadata key should have a prefix like ENV_
> which can be used to determine all environment variables. The prefix
> should be removed and the remaining key and value will be injected.
> 
> The metadata can unfortunately not be set in horizon but can be used
> from the nova command line tool and from heat. Example heat:
> 
> myapp:
> Type: OS::Nova::Server
> Properties:
>   flavor: m1.small
>   image: my-app:latest
>   meta-data:
> - ENV_SQL_URL: postgres://user:password@/db
> - ENV_SOMETHING_ELSE: Value
> 
> 
> Let me know what you think about that.
> 
> Blueprint: 
> https://blueprints.launchpad.net/nova/+spec/docker-env-via-meta-data

Thanks for starting the discussion.  More people should do this for
their blueprints.  :-)

One of the things we should be striving for is to provide as consistent
of an experience as we can across drivers.  Right now, we have the
metadata service and config drive, and neither of those are driver
specific.  In the case of config drive, whether it's used or not is
exposed through the API.  As you point out, the meta-data service does
technically work with the docker driver.

I don't think we should support environment variables like this
automatically.  Instead, I think it would be more appropriate to add an
API extension for specifying env vars.  That way the behavior is more
explicit and communicated through the API.  The env vars would be passed
through all of the appropriate plumbing and down to drivers that are
able to support it.
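To make the suggestion concrete, a server-create request under such an extension might look roughly like this (the extension alias and attribute name are invented for illustration; no such extension existed at the time of writing):

```python
import json

# Hypothetical boot request carrying env vars as a first-class,
# API-visible attribute instead of ENV_-prefixed instance metadata.
server_create = {
    "server": {
        "name": "my-app",
        "imageRef": "my-app:latest",
        "flavorRef": "m1.small",
        "os-environment-variables:env": {
            "SQL_URL": "postgres://user:password@/db",
            "SOMETHING_ELSE": "Value",
        },
    }
}
print(json.dumps(server_create, indent=2))
```

Drivers that cannot inject environment variables could then reject the attribute explicitly rather than silently ignoring metadata.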

This is all also assuming that containers support is staying in Nova and
not a new service.  That discussion seems to have stalled.  Is anyone
still pushing on that?  Any updates?

-- 
Russell Bryant



Re: [openstack-dev] [Tempest][Neutron] Network client and API tests refactoring.

2013-12-16 Thread Rossella Sblendido
Hi Eugene,

as you have already noticed, there's some overlap between your work
and the current test development.
We should find a productive way to coordinate the efforts.
Thanks for starting the refactoring, in my opinion it's needed.

cheers,

Rossella


On Sat, Dec 14, 2013 at 4:53 PM, Jay Pipes  wrote:

> On Sat, 2013-12-14 at 19:09 +0400, Eugene Nikanorov wrote:
> > Hi Jay,
> >
> > Sure, that is understood. In fact such refactoring could be a big
> > change so I'd split it to two or more patches.
> > Hope this will not overlap with ongoing neutron API tests development.
>
> Hehe, given the sheer number of new tests that get added to Tempest
> every week, I'd say that any sort of base refactoring like this will
> need to be heavily coordinated with the Tempest core reviewers and other
> contributors pushing code!
>
> Best,
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-16 Thread Gary Kotton
Hi,
At the moment the administrator is able to retrieve diagnostics for a running 
VM. Currently the implementation is very loosely defined, that is, each driver 
returns whatever it has to return. This is problematic in a number of 
respects:

 1.  The tempest tests were written specifically for one driver and break with 
all other drivers (the test was removed to prevent this – bug 1240043)
 2.  An admin is unable to write tools that may work with a hybrid cloud
 3.  Adding support for get_diagnostics for drivers that do not support it is 
painful

I'd like to propose the following for the V3 API (we will not touch V2 in case 
operators have applications that are written against this – this may be the 
case for libvirt or xen. The VMware API support was added in I1):

 1.  We formalize the data that is returned by the API [1]
 2.  We enable the driver to add extra information that will assist the 
administrators in troubleshooting problems for VM's

I have proposed a BP for this - 
https://blueprints.launchpad.net/nova/+spec/diagnostics-namespace (I'd like to 
change the name to v3-api-diagnostics – which is more apt)

And as Nelson Mandela said: “It always seems impossible until it's 
done.”

Moving forward, we can decide whether to provide administrators the option of 
using this for V2 as well (it may be very helpful with debugging issues). But 
that is another discussion.
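As an illustration only, a formalized response could standardize a minimal common core while leaving room for driver-specific extras; the field names below are my assumptions, as the actual field list is being collected on the etherpad:

```python
# Hypothetical formalized diagnostics document: a fixed common core plus
# a namespaced section for driver-specific additions.
diagnostics = {
    "state": "running",
    "uptime_s": 3600,
    "cpu": [{"id": 0, "time_ns": 17300000000}],
    "memory": {"maximum_kb": 524288, "used_kb": 262144},
    "disks": [{"id": "vda", "read_bytes": 262144, "write_bytes": 5778432}],
    "nics": [{"mac": "01:23:45:67:89:ab", "rx_bytes": 2070139,
              "tx_bytes": 140208}],
    # anything non-standard goes under a driver namespace:
    "driver:extra": {"vmware:ballooned_memory_kb": 0},
}

common_keys = {"state", "uptime_s", "cpu", "memory", "disks", "nics"}
assert common_keys <= set(diagnostics)
```

With a fixed core like this, a tempest test can assert on the common keys and ignore the namespaced extras, which addresses point 1 above.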

Thanks
Gary

[1] https://etherpad.openstack.org/p/vm-diagnostics


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-16 Thread Thierry Carrez
Flavio Percoco wrote:
> What I'm arguing here is:
> 
> 1. Programs that are not part of OpenStack's release cycle shouldn't
> be considered official, nor should they have the rights that integrated
> projects have.
> 
> 2. I think requesting Programs to exist at the early stages of the
> project is not necessary. I don't even think incubated projects should
> have programs. I do agree the project's mission and goals have to be
> clear but the program should be officially created *after* the project
> graduates from incubation.
> 
> The reasoning here is that anything could happen during incubation.
> For example, a program created for project A - which is incubated -
> may change to cover a broader mission that will allow a newborn
> project B to fall under its umbrella, hence my previous proposal of
> having a incubation stage for programs as well.

I think your concerns can be covered if we consider that programs
covering incubated or "promising" projects should also somehow incubate.
To avoid confusion I'd use a different term, let's say "incoming"
programs for the sake of the discussion.

Incoming programs would automatically graduate when one of their
deliveries graduates to "integrated" status (for projects with such
deliveries), or when the TC decides so (think: for "horizontal" programs
like Documentation or Deployment).

That doesn't change most of this proposal, which is that we'd encourage
teams to ask to become an (incoming) program before they consider filing
one of their projects for incubation.

FWIW we already distinguish (on
https://wiki.openstack.org/wiki/Programs) programs that are born out of
an incubated project from other programs, so adding this "incoming"
status would not change much.

> My proposal is to either not requesting any program to be created for
> incubated projects / emerging technologies or to have a program called
> 'Emerging Technologies' were all these projects could fit in.

I don't think an "Emerging Technologies" program would make sense, since
that would just be a weird assemblage of separate teams (how would that
program elect a PTL ?). I prefer that they act as separate teams (which
they are) and use the "incoming Program" concept described above.

> The only
> difference is that, IMHO, projects under this program should not have
> all the rights that integrated projects and other programs have,
> although the program will definitely fall under the TCs authority. For
> example, projects under this program shouldn't be able to vote on the
> TCs elections.

So *that* would be a change from where we stand today, which is that
incubated project contributors get ATC status and vote on TC elections.
We can go either way, consider "incoming programs" to be "OpenStack
programs" in the sense of the TC charter, or not.

I'm not convinced there is so much value in restricting TC voting access
(or ATC status) to "OpenStack programs". Incoming programs would all be
placed under the authority of the TC so it's only fair that they have a
vote. Also giving them ATC status gets them automatically invited to
Design Summits, and getting "incoming" programs in Design Summits sounds
like a good thing to do...

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-16 Thread Jaromir Coufal

On 2013/16/12 14:03, Matthias Runge wrote:

On 12/13/2013 03:08 PM, Ladislav Smola wrote:

Horizoners,

As discussed in TripleO and Horizon meetings, we are proposing to move
Tuskar UI under the Horizon umbrella. Since we are building our UI
solution on top of Horizon, we think this is a good fit. It will allow
us to get feedback and reviews from the appropriate group of developers.


I don't think we really disagree here.

My main concern would be more: what do we get, if we make up another
project under the umbrella of horizon? I mean, what does that mean at all?

My proposal would be to send patches directly to horizon. As discussed
in last week's horizon meeting, tuskar UI would become integrated in
Horizon, but disabled by default. This would enable a faster integration
in Horizon and would reduce the overhead of creating a separate
repository, installation instructions, packaging etc. etc.

 From the horizon side: we would get some new contributors (and hopefully
reviewers), which is very much appreciated.

Matthias


This is an important note. From an information architecture and user 
interaction point of view, I don't think it makes sense to keep all three 
tabs visible together (Project, Admin, Infrastructure). There are 
a lot of reasons, but the main points:


* Infrastructure itself is an undercloud concept running in a different 
instance of Horizon.


* Users dealing with deployment and infrastructure management are not 
the users of the OpenStack UI / Dashboard. It is a different set of users. So 
it doesn't make sense to have a giant application which provides each and 
every possible feature. I think we need to stay focused.


So by default, I would say that either the Project + Admin tabs should 
exist together, or Infrastructure alone, but never all three together. So when 
Matthias says 'disabled by default', I would mean completely hidden from the 
user; if a user wants to use Infrastructure management, he can enable 
it in a different horizon instance, but it will be the only visible tab 
for him. So it will be a sort of separate application, but still running 
on top of Horizon.


-- Jarda



Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-16 Thread Nathani, Sreedhar (APS)
Hello Salvatore,

I agree with you that we need both items to improve the scaling and performance 
of the neutron server.
I am not a developer so I can't implement the changes myself. If somebody is 
going to implement them, I am more than happy to do the tests.

Thanks & Regards,
Sreedhar Nathani


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Monday, December 16, 2013 6:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Multiple RPC servers is something we should definitely look at.
I don't see a show-stopper reason for which this would not work, although I 
recall we found a few caveats one should be aware of when doing multiple 
RPC servers while reviewing the patch for multiple API servers (I wrote them 
up in some other ML thread, I will dig them up later). If you are thinking of 
implementing this support, you might want to sync up with Mark McClain who's 
working on splitting API and RPC servers.

While horizontal scaling is surely desirable, evidence we gathered from 
analysis like the one you did showed that probably we can make the interactions 
between the neutron server and the agents a lot more efficient and reliable. I 
reckon both items are needed and can be implemented independently.

Regards,
Salvatore


On 16 December 2013 12:42, Nathani, Sreedhar (APS) 
mailto:sreedhar.nath...@hp.com>> wrote:
Hello Salvatore,

Thanks for the updates.  All the changes you talked about are on the agent 
side.

From my tests, with multiple L2 agents running and sending/requesting messages 
at the same time, the single neutron rpc server process is not able to handle 
all the load fast enough, which causes the bottleneck.

With Carl's patch (https://review.openstack.org/#/c/60082) we now support 
multiple neutron API processes.
My question is: why can't we support multiple neutron rpc server processes as 
well?

Horizontal scaling with multiple neutron-server hosts would be one option, but 
having support for multiple neutron rpc server processes on the same 
system would be really helpful for scaling the neutron server, especially 
during concurrent instance deployments.

Thanks & Regards,
Sreedhar Nathani


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Monday, December 16, 2013 4:55 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Hello Sreedhar,

I am focusing only on the OVS agent at the moment.
Armando fixed a few issues recently with the DHCP agent; those issues were 
triggering a perennial resync; with his fixes I reckon DHCP agent response 
times should be better.

I reckon Maru is also working on architectural improvements for the DHCP agent 
(see thread on DHCP agent reliability).

Regards,
Salvatore

On 13 December 2013 20:26, Nathani, Sreedhar (APS) 
mailto:sreedhar.nath...@hp.com>> wrote:
Hello All,

Update with my testing.

I have installed one more VM as neutron-server host and configured under the 
Load Balancer.
Currently I have 2 VMs running neutron-server process (one is Controller and 
other is dedicated neutron-server VM)

With this configuration, during batch instance deployment with a batch size 
of 30 and a sleep time of 20 min,
180 instances could get an IP during the first boot. During creation of 
instances 181-210 some instances could not get an IP.

This is much better than when running with single neutron server where only 120 
instances could get an IP during the first boot in Havana.

While the instances are being created, the parent neutron-server process spends 
close to 90% of the CPU time on both servers,
while the rest of the neutron-server processes (APIs) show very low CPU 
utilization.

I think it's a good idea to expand the current multiple neutron-server API 
processes to support rpc messages as well.

Even with the current setup (multiple neutron-server hosts), we still see rpc 
timeouts in the DHCP and L2 agents,
and the dnsmasq process is getting restarted due to SIGKILL though.

Thanks & Regards,
Sreedhar Nathani

From: Nathani, Sreedhar (APS)
Sent: Friday, December 13, 2013 12:08 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Hello Salvatore,

Thanks for your feedback. Will the patch 
https://review.openstack.org/#/c/57420/ (which you are working on for bug 
https://bugs.launchpad.net/neutron/+bug/1253993)
help to correct the OVS agent loop slowdown issue?
Does this patch address the DHCP agent updating the host file once a minute 
and finally sending SIGKILL to the dnsmasq process?

I have tested with Marun's patch https://review.openstack.org/#/c/61168/ 
regarding 'Send DHCP notifications regardless of agent status', but with this 
patch I also observed the same behavior.


Thanks & Regards

[openstack-dev] [Nova][Docker] Environment variables

2013-12-16 Thread Daniel Kuffner
Hi All,

I have submitted a new blueprint which addresses a common pattern
in the docker world. A usual pattern in the docker world is to use
environment variables to configure a container.

docker run -e "SQL_URL=postgres://user:password@/db" my-app

The nova docker driver doesn't support setting environment variables. To
work around this issue I used cloud-init, which works fine. But this
approach has of course the drawbacks that a) I have to install the
cloud-init service, and b) my docker container doesn't work outside of
openstack.

I propose to allow a user to set docker environment variables via nova
instance metadata. The metadata key should have a prefix like ENV_,
which can be used to identify all environment variables. The prefix
would be removed and the remaining key and value injected.

The metadata can unfortunately not be set in horizon but can be used
from the nova command line tool and from heat. Example heat:

myapp:
  Type: OS::Nova::Server
  Properties:
    flavor: m1.small
    image: my-app:latest
    metadata:
      ENV_SQL_URL: postgres://user:password@/db
      ENV_SOMETHING_ELSE: Value
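For illustration, the ENV_ prefix handling described above might look roughly
like this (a hypothetical helper sketch, not the actual nova-docker driver
code):

```python
ENV_PREFIX = "ENV_"

def extract_env_vars(metadata):
    """Return the docker environment variables encoded in instance metadata.

    Keys starting with ENV_ have the prefix stripped; all other keys are
    ignored. (Illustrative only -- not the real driver implementation.)
    """
    return {key[len(ENV_PREFIX):]: value
            for key, value in metadata.items()
            if key.startswith(ENV_PREFIX)}

meta = {
    "ENV_SQL_URL": "postgres://user:password@/db",
    "ENV_SOMETHING_ELSE": "Value",
    "unrelated_key": "ignored",
}
print(extract_env_vars(meta))
# {'SQL_URL': 'postgres://user:password@/db', 'SOMETHING_ELSE': 'Value'}
```

The driver would then pass the resulting dict to the container runtime the
same way `docker run -e` does.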


Let me know what you think about that.

Blueprint: https://blueprints.launchpad.net/nova/+spec/docker-env-via-meta-data

regards,
Daniel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-16 Thread Flavio Percoco

On 13/12/13 16:37 +0100, Flavio Percoco wrote:

On 13/12/13 15:53 +0100, Thierry Carrez wrote:

Hi everyone,

TL;DR:
Incubation is getting harder, why not ask efforts to apply for a new
program first to get the visibility they need to grow.

Long version:

Last cycle we introduced the concept of "Programs" to replace the
concept of "Official projects" which was no longer working that well for
us. This was recognizing the work of existing teams, organized around a
common mission, as an integral part of "delivering OpenStack".
Contributors to programs become ATCs, so they get to vote in Technical
Committee (TC) elections. In return, those teams place themselves under
the authority of the TC.

This created an interesting corner case. Projects applying for
incubation would actually request two concurrent things: be considered a
new "Program", and give "incubated" status to a code repository under
that program.

Over the last months we significantly raised the bar for accepting new
projects in incubation, learning from past integration and QA mistakes.
The end result is that a number of promising projects applied for
incubation but got rejected on maturity, team size, team diversity, or
current integration level grounds.

At that point I called for some specific label, like "Emerging
Technology" that the TC could grant to promising projects that just need
more visibility, more collaboration, more crystallization before they
can make good candidates to be made part of our integrated releases.

However, at the last TC meeting it became apparent we could leverage
"Programs" to achieve the same result. Promising efforts would first get
their mission, scope and existing results blessed and recognized as
something we'd really like to see in OpenStack one day. Then when they
are ready, they could have one of their deliveries apply for incubation
if that makes sense.

The consequences would be that the effort would place itself under the
authority of the TC. Their contributors would be ATCs and would vote in
TC elections, even if their deliveries never make it to incubation. They
would get (some) space at Design Summits. So it's not "free", we still
need to be pretty conservative about accepting them, but it's probably
manageable.

I'm still weighing the consequences, but I think it's globally nicer
than introducing another status. As long as the TC feels free to revoke
Programs that do not deliver the expected results (or that no longer
make sense in the new world order) I think this approach would be fine.

Comments, thoughts ?




With the above, I'm basically saying that a Queuing ;) program
shouldn't exist until there's an integrated team of folks working on
queuing. Incubation doesn't guarantee integration, and "emerging
technology" doesn't guarantee incubation. Both stages mean there's
interest in that technology and that we're looking forward to seeing
it become part of OpenStack, period. Each stage probably means a bit
more than that but, IMHO, that's the 'community' point of view of
those stages.

What if we have a TC-managed* Program incubation period? The Program
won't be managed by the team working on the emerging technology, nor
by the team working on the incubated project. Until those projects
graduate, the program won't be official nor will it have the 'rights'
of other programs. And if the project fits into another program, it
won't be officially part of it until it graduates.




Since I, most likely, won't make it to tomorrow's TC meeting, I'd like
to extend this argument a bit more and make sure I share my thoughts
about it. Hopefully they'll be of help.

What I'm arguing here is:

1. Programs that are not part of OpenStack's release cycle shouldn't
be considered official, nor should they have the rights that integrated
projects have.

2. I don't think requiring Programs to exist at the early stages of a
project is necessary. I don't even think incubated projects should
have programs. I do agree the project's mission and goals have to be
clear, but the program should be officially created *after* the project
graduates from incubation.

The reasoning here is that anything could happen during incubation.
For example, a program created for project A - which is incubated -
may change to cover a broader mission that will allow a newborn
project B to fall under its umbrella, hence my previous proposal of
having a incubation stage for programs as well.

My proposal is either not to require any program to be created for
incubated projects / emerging technologies, or to have a program called
'Emerging Technologies' where all these projects could fit in. The only
difference is that, IMHO, projects under this program should not have
all the rights that integrated projects and other programs have,
although the program would definitely fall under the TC's authority. For
example, projects under this program shouldn't be able to vote in the
TC elections.

Hope this makes sense and that it is of help during the up

Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-16 Thread Imre Farkas

On 12/13/2013 05:22 PM, James Slagle wrote:

On Fri, Dec 13, 2013 at 03:04:09PM +0100, Imre Farkas wrote:


One note to deploy: It's not done only by Heat and Nova. If we
expect a fully functional OpenStack installation as a result, we are
missing a few steps like creating users, initializing and
registering the service endpoints with Keystone. In TripleO this is
done by the init-keystone and setup-endpoints scripts. Check devtest
for more details: 
http://docs.openstack.org/developer/tripleo-incubator/devtest_undercloud.html


Excellent point Imre, as the deployment isn't really useable until those steps
are done.  The link to the overcloud setup steps is actually:
http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html
Very similar to what is done for the undercloud.


You are right, that's the correct link for the overcloud setup. However, 
I intentionally picked the one for the undercloud because I wanted to 
focus on the keystone configuration part, which is the same for both 
(the init-keystone, setup-endpoints and keystone role-create workflow). 
There is some other stuff going on in the overcloud setup 
(eg. creating a vm for a user) which might distract those who are not 
familiar with devtest from what is really needed to deploy OpenStack. 
But it would have been better of me to note that the link is not for 
the overcloud.




I think most of that logic could be reimplemented via direct calls to the API
using the client libs instead of a CLI.  Not sure about
"keystone-manage pki_setup" though; would need to look into that.



Yeah, we can put a big part of the needed configuration steps into Tuskar, 
as most of it just uses the Keystone CLI client, which can be replaced by 
direct API calls through the same library. The rest might go to 
Heat or os-refresh-config or elsewhere.


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][libvirt]when deleting instance which is in migrating state, instance files can be stay in destination node forever

2013-12-16 Thread Parthipan, Loganathan
Isn't just handling the instance_not_found exception enough? By this time the 
source would've been cleaned up. The destination VM resources will get cleaned 
up by the periodic task, since the VM is not associated with that host. Am I 
missing something here?


From: 王宏 [mailto:w.wangho...@gmail.com]
Sent: 16 December 2013 11:32
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova][libvirt]when deleting instance which is in 
migrating state, instance files can be stay in destination node forever

Hi all.
When I try to fix a bug:https://bugs.launchpad.net/nova/+bug/1242961,
I get a trouble.

Reproducing the bug is very easy: live migrate a vm in block_migration mode,
and then delete the vm immediately.

The reason for this bug is as follows:
1. Because live migration takes more time, the vm can be deleted successfully
   before the live migration completes. And then we will get an exception while
   live migrating.
2. After the live migration fails, we start to roll back. But in the rollback
   method we get or modify the info of the vm from the db. Because the vm has
   already been deleted, we will get an instance_not_found exception and the
   rollback will fail too.

I have two ways to fix the bug:
i) Add a check in nova-api. When trying to delete a vm, we return an error
message if the vm_state is LIVE_MIGRATING. This way is very simple, but needs
careful consideration. I have found a related discussion:
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html,
but it reached no conclusion.
ii) Before the live migration we get all the data needed by the rollback
method, and add a new rollback method. The new method will clean up resources
at the destination based on the above data (the resources at the source have
already been cleaned up by the delete).

I have no idea which one I should choose. Or, any other ideas? :)

Regards,
wanghong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-16 Thread Matthias Runge
On 12/13/2013 03:08 PM, Ladislav Smola wrote:
> Horizoners,
> 
> As discussed in TripleO and Horizon meetings, we are proposing to move
> Tuskar UI under the Horizon umbrella. Since we are building our UI
> solution on top of Horizon, we think this is a good fit. It will allow
> us to get feedback and reviews from the appropriate group of developers.
> 
I don't think, we really disagree here.

My main concern would be more: what do we get if we make up another
project under the umbrella of horizon? I mean, what does that mean at all?

My proposal would be to send patches directly to horizon. As discussed
in last week's horizon meeting, tuskar UI would become integrated in
Horizon, but disabled by default. This would enable a faster integration
in Horizon and would reduce the overhead of creating a separate
repository, installation instructions, packaging etc.

From the horizon side: we would get some new contributors (and hopefully
reviewers), which is very much appreciated.

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-16 Thread Salvatore Orlando
Multiple RPC servers is something we should definitely look at.
I don't see a show-stopper reason why this would not work, although I
recall we found a few caveats one should be aware of when running
multiple RPC servers while reviewing the patch for multiple API servers (I
wrote them up in some other ML thread and will dig them out later). If you are
thinking of implementing this support, you might want to sync up with Mark
McClain, who's working on splitting the API and RPC servers.

While horizontal scaling is surely desirable, evidence we gathered from
analyses like the one you did shows that we can probably make the
interactions between the neutron server and the agents a lot more efficient
and reliable. I reckon both items are needed and can be implemented
independently.
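For illustration, the multiple-RPC-worker pattern being discussed might look
roughly like this (an illustrative multiprocessing sketch, not Neutron's
actual service code; `rpc_worker` and `start_rpc_workers` are hypothetical
names):

```python
import multiprocessing

def rpc_worker(worker_id):
    # In a real deployment each worker would open its own connection to
    # the message bus and consume from the shared topic queue, so the
    # broker round-robins RPC messages across workers. Stubbed here.
    pass

def start_rpc_workers(count):
    """Fork `count` worker processes, mirroring how the API side already
    forks multiple workers behind one listening socket."""
    workers = [multiprocessing.Process(target=rpc_worker, args=(i,))
               for i in range(count)]
    for w in workers:
        w.start()
    return workers

if __name__ == "__main__":
    workers = start_rpc_workers(4)
    for w in workers:
        w.join()
    print([w.exitcode for w in workers])  # [0, 0, 0, 0]
```

The open question in the thread is whether the agents' RPC traffic can be
fanned out this way without breaking message ordering assumptions.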

Regards,
Salvatore



On 16 December 2013 12:42, Nathani, Sreedhar (APS)
wrote:

>  Hello Salvatore,
>
>
>
> Thanks for the updates.  All the changes which you talked is from the
> agent side.
>
>
>
> From my tests,  with multiple L2 agents running and sending/requesting
> messages at the same time from the single neutron rpc server process is not
> able to handle
>
> All the load fast enough and causing the bottleneck.
>
>
>
> With the Carl’s patch (https://review.openstack.org/#/c/60082), we now
> support multiple neutron API process,
>
> My question is why can’t we support multiple neutron rpc server process as
> well?
>
>
>
> Horizontal scaling with multiple neutron-server hosts would be one option,
> but having support of multiple neutron rpc servers process in in the same
>
> System would be really helpful for the scaling of neutron server
> especially during concurrent instance deployments.
>
>
>
> Thanks & Regards,
>
> Sreedhar Nathani
>
>
>
>
>
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
> *Sent:* Monday, December 16, 2013 4:55 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Performance Regression in Neutron/Havana
> compared to Quantum/Grizzly
>
>
>
> Hello Sreedhar,
>
>
>
> I am focusing only on the OVS agent at the moment.
>
> Armando fixed a few issues recently with the DHCP agent; those issues were
> triggering a perennial resync; with his fixes I reckon DHCP agent response
> times should be better.
>
>
>
> I reckon Maru is also working on architectural improvements for the DHCP
> agent (see thread on DHCP agent reliability).
>
>
>
> Regards,
>
> Salvatore
>
>
>
> On 13 December 2013 20:26, Nathani, Sreedhar (APS) <
> sreedhar.nath...@hp.com> wrote:
>
> Hello All,
>
>
>
> Update with my testing.
>
>
>
> I have installed one more VM as neutron-server host and configured under
> the Load Balancer.
>
> Currently I have 2 VMs running neutron-server process (one is Controller
> and other is dedicated neutron-server VM)
>
>
>
> With this configuration during the batch instance deployment with a batch
> size of 30 and sleep time of 20min,
>
> 180 instances could get an IP during the first boot. During 181-210
> instance creation some instances could not get an IP.
>
>
>
> This is much better than when running with single neutron server where
> only 120 instances could get an IP during the first boot in Havana.
>
>
>
> When the instances are getting created, parent neutron-server process
> spending close to 90% of the cpu time on both the servers,
>
> While rest of the neutron-server process (APIs) are spending very low CPU
> utilization.
>
>
>
> I think it’s good idea to expand the current multiple neutron-server api
> process to support rpc messages as well.
>
>
>
> Even with current setup (multiple neutron-server hosts), we still see rpc
> timeouts in DHCP, L2 agents
>
> and dnsmasq process is getting restarted due to SIGKILL though.
>
>
>
> Thanks & Regards,
>
> Sreedhar Nathani
>
>
>
> *From:* Nathani, Sreedhar (APS)
> *Sent:* Friday, December 13, 2013 12:08 AM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* RE: [openstack-dev] Performance Regression in Neutron/Havana
> compared to Quantum/Grizzly
>
>
>
> Hello Salvatore,
>
>
>
> Thanks for your feedback. Does the patch
> https://review.openstack.org/#/c/57420/ which you are working on bug
> https://bugs.launchpad.net/neutron/+bug/1253993
>
> will help to correct the OVS agent loop slowdown issue?
>
> Does this patch address the DHCP agent updating the host file once in a
> minute and finally sending SIGKILL to dnsmasq process?
>
>
>
> I have tested with Marun's patch https://review.openstack.org/#/c/61168/
> regarding 'Send DHCP notifications regardless of agent status' but this patch
>
> Also observed the same behavior.
>
>
>
>
>
> Thanks & Regards,
>
> Sreedhar Nathani
>
>
>
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
>
> *Sent:* Thursday, December 12, 2013 6:21 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Performance Regression in Neutron/Havana
> compared to Quantum/Grizzly
>
>
>
> I b

Re: [openstack-dev] [tuskar] How to install tuskar-ui from packaging point of view

2013-12-16 Thread Imre Farkas

On 12/16/2013 08:52 AM, Stephen Gran wrote:

On 16/12/13 03:47, Thomas Goirand wrote:

Hi,

I've been working over the last 2 months to get Ironic, TripleO and
Tuskar ready for an upload in Debian. However, for tuskar-ui, I'm facing
the fact that there's a lack of documentation.

It was easy to get Tuskar packaged. If I understand correctly, it only needs
2 daemons: tuskar-api and tuskar-manager. Is this right? If not, what
did I miss? Is tuskar-manager really a daemon? (I have to admit that I
haven't yet found the time to try, so I would appreciate some guidance
here.)



I think you are right here.


As for tuskar-ui, the install.rst is quite vague about how to install. I
got the python-tuskar-ui binary package done, with egg-info and all,
that's not the problem. What worries me is this part:

"Go into horizon and create a symlink to the tuskar-ui code:

cd horizon
ln -s ../tuskar-ui/tuskar_ui

Then, install a virtual environment for your setup:


Add this to debian/links or something?  It sounds like it needs a
dependency on horizon to make sure that the directory exists.



Not sure how it translates to Debian packaging but you need to 
copy/symlink the Tuskar-UI source *inside* the Horizon directory.



python tools/install_venv.py"


This means "turn the list of dependencies in the source package into
dependencies in the debian package", I would think.



Yes, that's correct.


3/ The install.rst has:

If everything has gone according to plan, you should be able to run:

tools/with_venv.sh ./manage.py runserver

and have the application start on port 8080. The Tuskar dashboard will
be located at http://localhost:8080/infrastructure

does this mean that on top of Horizon running through Apache, tuskar-ui
needs to run independently? Why is that? Can't we just have tuskar-ui
simply integrated with the rest of Horizon?


Yes, Tuskar-UI runs on top of Horizon. You don't have to create a 
separate Horizon+Tuskar-UI installation; it does not run independently 
of the existing Horizon installation, but you do have to modify it.


When you create the symlink into the Horizon source, the 
Infrastructure dashboard provided by Tuskar-UI is autodiscovered when the 
Horizon application boots up. Tuskar-UI creates an additional tab 
inside the Horizon application, which will be available at 
http://localhost:8080/infrastructure (or on whatever port you set 
Horizon up), and at http://localhost:8080/ you can access the Project and 
Admin dashboards provided by Horizon.


It's not stated in tuskar-ui/install.rst, but this guide is meant to set 
up the development environment. It is also worth mentioning that the 
current solution is only temporary; in the long term Tuskar-UI will be 
part of Horizon (see the Horizon and Tuskar-UI merge thread).


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting agenda - 12/16/2013

2013-12-16 Thread Renat Akhmerov
Hi!

This is a reminder that we will have another community meeting today in IRC 
(#openstack-meeting) at 16.00 UTC.

Here’s the agenda: https://wiki.openstack.org/wiki/Meetings/MistralAgenda

As usual, you're welcome to join!

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-16 Thread Nathani, Sreedhar (APS)
Hello Salvatore,

Thanks for the updates.  All the changes which you talked is from the agent 
side.

From my tests, with multiple L2 agents running and sending/requesting 
messages at the same time, the single neutron rpc server process is not 
able to handle all the load fast enough, which is causing the bottleneck.

With Carl's patch (https://review.openstack.org/#/c/60082), we now support 
multiple neutron API processes.
My question is: why can't we support multiple neutron rpc server processes as well?

Horizontal scaling with multiple neutron-server hosts would be one option, but 
having support for multiple neutron rpc server processes in the same
system would be really helpful for scaling the neutron server, especially 
during concurrent instance deployments.

Thanks & Regards,
Sreedhar Nathani


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Monday, December 16, 2013 4:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Hello Sreedhar,

I am focusing only on the OVS agent at the moment.
Armando fixed a few issues recently with the DHCP agent; those issues were 
triggering a perennial resync; with his fixes I reckon DHCP agent response 
times should be better.

I reckon Maru is also working on architectural improvements for the DHCP agent 
(see thread on DHCP agent reliability).

Regards,
Salvatore

On 13 December 2013 20:26, Nathani, Sreedhar (APS) 
mailto:sreedhar.nath...@hp.com>> wrote:
Hello All,

Update with my testing.

I have installed one more VM as neutron-server host and configured under the 
Load Balancer.
Currently I have 2 VMs running neutron-server process (one is Controller and 
other is dedicated neutron-server VM)

With this configuration during the batch instance deployment with a batch size 
of 30 and sleep time of 20min,
180 instances could get an IP during the first boot. During 181-210 instance 
creation some instances could not get an IP.

This is much better than when running with single neutron server where only 120 
instances could get an IP during the first boot in Havana.

When the instances are getting created, the parent neutron-server process is 
spending close to 90% of the CPU time on both servers,
while the rest of the neutron-server processes (the API workers) show very low 
CPU utilization.

I think it's a good idea to expand the current multiple neutron-server API 
processes to support rpc messages as well.

Even with current setup (multiple neutron-server hosts), we still see rpc 
timeouts in DHCP, L2 agents
and dnsmasq process is getting restarted due to SIGKILL though.

Thanks & Regards,
Sreedhar Nathani

From: Nathani, Sreedhar (APS)
Sent: Friday, December 13, 2013 12:08 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Hello Salvatore,

Thanks for your feedback. Will the patch 
https://review.openstack.org/#/c/57420/, which you are working on for bug 
https://bugs.launchpad.net/neutron/+bug/1253993,
help to correct the OVS agent loop slowdown issue?
Does this patch address the DHCP agent updating the host file once a minute 
and finally sending SIGKILL to the dnsmasq process?

I have tested with Marun's patch https://review.openstack.org/#/c/61168/ 
regarding 'Send DHCP notifications regardless of agent status', but with this 
patch I also observed the same behavior.


Thanks & Regards,
Sreedhar Nathani

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Thursday, December 12, 2013 6:21 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly


I believe your analysis is correct and in line with the findings reported in the 
bug concerning the OVS agent loop slowdown.

The issue has become even more prominent with the ML2 plugin due to an 
increased number of notifications sent.

Another issue which makes delays on the DHCP agent worse is that instances send 
a discover message once a minute.

Salvatore
Il 11/dic/2013 11:50 "Nathani, Sreedhar (APS)" 
mailto:sreedhar.nath...@hp.com>> ha scritto:
Hello Peter,

Here are the tests I have done. I already have 240 instances active across all 
16 compute nodes. To make the tests and data collection easy,
I have done the tests on a single compute node.

First Test -
*   240 instances already active,  16 instances on the compute node where I 
am going to do the tests
*   deploy 10 instances concurrently using nova boot command with 
num-instances option in single compute node
*   All the instances could get IP during the instance boot time.

-   Instances are created at  2013-12-10 13:41:01
-   From the compute host, DHCP requests are sent from 13:41:20 but those 
are not reaching the DHCP server.
    The reply from the DHCP server arrived at 13:43:08 (a delay of 108 

[openstack-dev] [multipath] Could I use the multipath software provided by the SAN vendors instead of dm-multipath in openstack?

2013-12-16 Thread Qixiaozhen
Hi,all

The storage array used by cinder in my experiment is produced by Huawei. The 
vendor ships its own multipath software, named Ultrapath, with the SAN.

Could I use Ultrapath instead of dm-multipath in openstack?

Best wishes,

Qi



Qi Xiaozhen
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
enterprise.huawei.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][libvirt]when deleting instance which is in migrating state, instance files can be stay in destination node forever

2013-12-16 Thread 王宏
Hi all.
When I try to fix a bug:https://bugs.launchpad.net/nova/+bug/1242961,
I get a trouble.

Reproducing the bug is very easy: live migrate a vm in block_migration
mode, and then delete the vm immediately.

The reason for this bug is as follows:
1. Because live migration takes more time, the vm can be deleted successfully
   before the live migration completes. And then we will get an exception
   while live migrating.
2. After the live migration fails, we start to roll back. But in the rollback
   method we get or modify the info of the vm from the db. Because the vm has
   already been deleted, we will get an instance_not_found exception and the
   rollback will fail too.

I have two ways to fix the bug:
i) Add a check in nova-api. When trying to delete a vm, we return an error
message if the vm_state is LIVE_MIGRATING. This way is very simple, but needs
careful consideration. I have found a related discussion:
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017454.html,
but it reached no conclusion.
ii) Before the live migration we get all the data needed by the rollback
method, and add a new rollback method. The new method will clean up resources
at the destination based on the above data (the resources at the source have
already been cleaned up by the delete).

I have no idea which one I should choose. Or, any other ideas? :)
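Option (i) could be sketched roughly like this (hypothetical names, loosely
following nova's state-checking style rather than the real nova code):

```python
# Hypothetical sketch of option (i): reject the delete while a live
# migration is in flight. LIVE_MIGRATING, InstanceInvalidState and
# delete_instance are illustrative names, not actual nova identifiers.
LIVE_MIGRATING = "migrating"

class InstanceInvalidState(Exception):
    pass

def delete_instance(instance):
    # Refuse the delete while a live migration is still running; the API
    # layer would turn this into an HTTP error for the user.
    if instance.get("task_state") == LIVE_MIGRATING:
        raise InstanceInvalidState(
            "Cannot delete instance %s while it is live-migrating"
            % instance["uuid"])
    # ... normal delete path would continue here ...
    instance["vm_state"] = "deleted"
    return instance

vm = {"uuid": "abc-123", "task_state": LIVE_MIGRATING}
try:
    delete_instance(vm)
except InstanceInvalidState as e:
    print(e)  # Cannot delete instance abc-123 while it is live-migrating
```

Option (ii) avoids the user-visible error but requires snapshotting the
rollback data up front, as described above.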

Regards,
wanghong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

