Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-28 Thread Roman Bogorodskiy
On Wed, Nov 27, 2013 at 7:32 PM, Rafał Jaworowski  wrote:

> The maintenance aspect and testing coverage are valid points, on the
> other hand future changes would have to go a longer way for us: first
> upstream to libvirt, then downstream to the FreeBSD ports collection
> (+ perhaps some OpenStack code bits as well), which makes the process
> more complicated.
>

I don't think that there would be a huge problem with that, because libvirt
releases quite often, and Jason Helfman, who maintains the libvirt package in
the FreeBSD ports tree, always updates to new versions promptly.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-28 Thread Roman Bogorodskiy
Hello,

Yes, libvirt's qemu driver works almost fine currently, except for the fact
that it needs a 'real' bridge driver, so that all the networking configuration
(filtering rules, NAT, etc.) can be done automatically, as it is for Linux now,
instead of making the user perform all the configuration manually.

I've been planning to get to the bhyve driver as well, but probably after
finishing the bridge driver (though unfortunately I don't have a full picture
of what would be the best way to implement it).


On Mon, Nov 25, 2013 at 3:50 PM, Daniel P. Berrange wrote:

> On Fri, Nov 22, 2013 at 10:46:19AM -0500, Russell Bryant wrote:
> > On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
> > > Russell,
> > > First, thank you for the whiteboard input regarding the blueprint for
> > > FreeBSD hypervisor nova driver:
> > > https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node
> > >
> > > We were considering libvirt support for bhyve hypervisor as well, only
> > > wouldn't want to do this as the first approach for FreeBSD+OpenStack
> > > integration. We'd rather bring bhyve bindings for libvirt later as
> > > another integration option.
> > >
> > > For FreeBSD host support a native hypervisor driver is important and
> > > desired long-term and we would like to have it anyways. Among things
> > > to consider are the following:
> > > - libvirt package is additional (non-OpenStack), external dependency
> > > (maintained in the 'ports' collection, not included in base system),
> > > while native API (via libvmmapi.so library) is integral part of the
> > > base system.
> > > - libvirt license is LGPL, which might be an important aspect for some
> users.
> >
> > That's perfectly fine if you want to go that route as a first step.
> > However, that doesn't mean it's appropriate for merging into Nova.
> > Unless there are strong technical justifications for why this approach
> > should be taken, I would probably turn down this driver until you were
> > able to go the libvirt route.
>
> The idea of a FreeBSD bhyve driver for libvirt has been mentioned
> a few times. We've already got a FreeBSD port of libvirt being
> actively maintained to support QEMU (and possibly Xen, not 100% sure
> on that one), and we'd be more than happy to see further contributions
> such as a bhyve driver.
>
> I am of course biased, as libvirt project maintainer, but I do agree
> that supporting bhyve via libvirt would make sense, since it opens up
> opportunities beyond just OpenStack. There are a bunch of applications
> built on libvirt that could be used to manage bhyve, and a fair few
> applications which have plugins using libvirt.
>
> Taking on maint work for a new OpenStack driver is a non-trivial amount
> of work in itself. If the burden for OpenStack maintainers can be reduced
> by pushing work out to, and relying on support from, libvirt, that makes
> sense from OpenStack/Nova's POV.
>
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat][[keystone] RFC: introducing "request identification"

2013-11-28 Thread haruka tanizawa
Thank you for your reply.
I completely misunderstood.

>You're correct on request_id and task_id.
>What I'm planning is a string field that a user can pass in with the
>request and it will be part of the task representation.
>That field will have no meaning to Nova, but a client like Heat could use
>it to ensure that they don't send requests twice
>by checking if there's a task with that field set.
I see.
This point in particular is very good:
'Heat could use it to ensure that they don't send requests twice by
checking if there's a task with that field set.'
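For what it's worth, here is a rough sketch of how a client might use such a
field as an idempotency token. The 'instance_tasks' listing call and the
'client_token' field are purely hypothetical names, since the API was still
being designed at the time:

    def reboot_once(nova, server_id):
        # Hypothetical sketch: a client-chosen task field used as an
        # idempotency token; none of these calls exist yet.
        token = 'heat-reboot-%s' % server_id
        tasks = nova.instance_tasks.list(server_id)      # hypothetical call
        if any(t.client_token == token for t in tasks):  # already requested
            return
        nova.servers.reboot(server_id, client_token=token)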

Moreover, I want to ask some questions about instance-tasks-api.
(I'm sorry it's a little bit long...)

* Does instance-tasks-api run outside of Nova? Is it standalone?
* About 'user can pass in with the request':
  When the user specifies a task_id, the task_id would be the one the user
specified.
  And if the user doesn't specify a task_id, is the task_id generated
automatically by Nova?
  (like correlation_id, which is generated automatically by oslo when nobody
specifies it.)
* About the management states of the API:
  Which is correct, 'Queued, Active, Error, Complete' or 'pending, in
progress, and completed'?
  And for example 'live migration' consists of 'pre migration',
'migration (migrateToURI)' and 'post migration'.
  Do you care about each detailed task, or about 'live migrating' as a whole?
  Does 'in progress' (for example) refer to the progress of 'pre migration'
or of 'live migration'?
* About the relation with 'TaskFlow':
  Nova has not been converted to TaskFlow yet.
  However, TaskFlow's persistence of flow state would be a good helper for
cancelling tasks, I think.
  (I think cancelling is not in the scope of i-2.)
  What do you think of this relation and the future?

I would appreciate an update to the etherpad or blueprint if you have more
detail or the data flow of instance-tasks-api.

Sincerely, Haruka Tanizawa


2013/11/28 Andrew Laski 

> On 11/22/13 at 10:14am, haruka tanizawa wrote:
>
>> Thanks for your reply.
>>
>>> I'm working on the implementation of instance-tasks-api[0] in Nova and
>>> this is what I've been moving towards so far.
>>
>> Yes, I know. I think that is a good idea.
>>
>>> The API will accept a string to be a part of the task but it will have
>>> meaning only to the client, not to Nova.  Then if tasks can be searched
>>> or filtered by that field I think that would meet the requirements you
>>> laid out above, or is something missing?
>>
>> Hmmm, as far as I understand, keystone (keystone work plan blueprint)
>> generates a request_id for each request.
>> (I think that is a good idea.)
>> And task_id is generated by instance-tasks-api.
>> Is my understanding of this correct?
>> Or if I missed something, thanks for telling me.
>>
>
> You're correct on request_id and task_id.  What I'm planning is a string
> field that a user can pass in with the request and it will be part of the
> task representation.  That field will have no meaning to Nova, but a client
> like Heat could use it to ensure that they don't send requests twice by
> checking if there's a task with that field set.
>
>
>> Haruka Tanizawa
>>
>
>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] How to handle "simple" janitorial tasks?

2013-11-28 Thread Zhi Yan Liu
https://bugs.launchpad.net/bugs/1256207

On Fri, Nov 29, 2013 at 1:08 PM, Zhi Yan Liu  wrote:
> Hi Koo,
>
> On Fri, Nov 29, 2013 at 9:15 AM, David koo  wrote:
>> Hi All,
>>
>> A quick question about simple "janitorial" tasks ...
>>
>> I noticed that glance.api.v2.image_data.ImageDataController.upload has two
>> identical "except" clauses (circa line 98):
>> except exception.StorageFull as e:
>> msg = _("Image storage media is full: %s") % e
>> LOG.error(msg)
>> raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
>>   request=req)
>>
>> except exception.StorageFull as e:
>> msg = _("Image storage media is full: %s") % e
>> LOG.error(msg)
>> raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
>>   request=req)
>>
>> Obviously one of the "except" clauses can be removed (or am I missing
>> something glaringly obvious?) - I shall be happy to do that but should I first
>> raise some kind of "bug" or should I directly commit a fix or should I bring up
>> such simple janitorial tasks to the mailing list here on a case-by-case basis
>> for discussion first?
>>
>
> Eagle-eyed! I think it's a defect. I'd prefer you file a bug report
> first and then prepare the patch (and put the bug id into the commit
> message).
>
> Reviewers can give you valuable feedback when they look at your patch,
> and you can discuss it with them in the team room on IRC if needed. The
> ML is a good place but it has more delay than IRC; I think simple
> questions can be discussed in Gerrit or on IRC directly, but IMO the ML
> is better for complicated topics or when you want feedback from across
> different projects. And for topics with far-reaching changes you can
> also use an etherpad or the wiki.
>
> zhiyan
>
>> I do realize that the definition of "simple" can vary from person to person
>> and so (ideally) such cases should perhaps be brought to the list for
>> discussion first. But I also worry about introducing noise into the list.
>>
>> --
>> Koo
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-11-28 Thread Jian Wen
I don't think we can implement a stateful firewall[1] now.

Once connection tracking capability[2] is added to the Linux OVS, we
could start to implement the ovs-firewall-driver blueprint.

[1] http://en.wikipedia.org/wiki/Stateful_firewall
[2]
http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS


On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson  wrote:

> Adding Jun to this thread since gmail is failing him.
>
>
> On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi <
> amir.sadou...@rackspace.com> wrote:
>
>>  Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
>> interested to see what Jun Park has. I might have something ready before he
>> is available again, but would like to collaborate regardless.
>>
>>  Amir
>>
>>
>>
>>  On Nov 19, 2013, at 3:31 AM, Kanthi P  wrote:
>>
>>  Hi All,
>>
>>  Thanks for the response!
>> Amir, Mike: Is your implementation being done against the ML2 plugin?
>>
>>  Regards,
>> Kanthi
>>
>>
>> On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson wrote:
>>
>>> Hi Kanthi,
>>>
>>>  Just to reiterate what Kyle said, we do have an internal
>>> implementation using flows that looks very similar to security groups. Jun
>>> Park was the guy that wrote this and is looking to get it upstreamed. I
>>> think he'll be back in the office late next week. I'll point him to this
>>> thread when he's back.
>>>
>>>  -Mike
>>>
>>>
>>> On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery) <
>>> kmest...@cisco.com> wrote:
>>>
 On Nov 18, 2013, at 4:26 PM, Kanthi P 
 wrote:
  > Hi All,
 >
 > We are planning to implement quantum security groups using openflows
 for the ovs plugin instead of iptables, which is the case now.
 >
 > Doing so we can avoid the extra linux bridge which is connected
 between the vnet device and the ovs bridge, which is given as a workaround
 since the ovs bridge is not compatible with iptables.
 >
 > We are planning to create a blueprint and work on it. Could you
 please share your views on this?
 >
  Hi Kanthi:

 Overall, this idea is interesting and removing those extra bridges
 would certainly be nice. Some people at Bluehost gave a talk at the Summit
 [1] in which they explained they have done something similar, you may want
 to reach out to them since they have code for this internally already.

 The OVS plugin is in feature freeze during Icehouse, and will be
 deprecated in favor of ML2 [2] at the end of Icehouse. I would advise you
 to retarget your work at ML2 when running with the OVS agent instead. The
 Neutron team will not accept new features into the OVS plugin anymore.

 Thanks,
 Kyle

 [1]
 http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/towards-truly-open-and-commoditized-software-defined-networks-in-openstack
 [2] https://wiki.openstack.org/wiki/Neutron/ML2

 > Thanks,
 > Kanthi
 > ___
 > OpenStack-dev mailing list
 > OpenStack-dev@lists.openstack.org
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>  ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Cheers,
Jian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] remote debugging

2013-11-28 Thread yatin kumbhare
Hello Tracy,

Some of the problems I faced: to execute different nova CLIs, I had to
restart the nova service every time.

The nova service needs to fully start/initialize before debugging can start.

The pydevd import (with a settrace call) works as the break-point for the
debugger, and needs to be added at all the places of interest.

For example, in case one wants to debug the nova boot flow end-to-end, one
needs to know the exact code location to put a break-point in
each service.
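For illustration, the break-point call looks like the following minimal
sketch; the host and port are placeholders for the developer machine running
the PyDev/PyCharm debug server:

    import pydevd

    # Pause this service and hand control to the remote debugger.
    pydevd.settrace('192.168.0.10', port=5678,
                    stdoutToServer=True, stderrToServer=True)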


Regards,
Yatin



On Mon, Nov 25, 2013 at 10:43 PM, Tracy Jones  wrote:

> Thanks Yatin - that is the change I am proposing in my patch
>
>
> On Nov 25, 2013, at 9:09 AM, yatin kumbhare 
> wrote:
>
> Hi,
>
> http://debugopenstack.blogspot.in/
>
> I have done Openstack remote debugging with eclipse and pydev.
>
> The only change is to exclude the python thread library from monkey
> patching at service start-up.
>
> Regards,
> Yatin
>
>
> On Mon, Nov 25, 2013 at 9:10 PM, Russell Bryant wrote:
>
>> On 11/25/2013 10:28 AM, Tracy Jones wrote:
>> > Hi Folks - I am trying to add a patch to enable remote debugging in the
>> > nova services.  I can make this work very simply, but it requires a
>> > change to monkey_patching - i.e.
>> >
>> > eventlet.monkey_patch(os=False, select=True, socket=True,
>> thread=False,
>> >   time=True, psycopg=True)
>> >
>> > I’m working with some folks from the debugger vendor (pycharm) on why
>> > this is needed.   However - I’ve been using it with nova-compute for a
>> > month or so and do not see any issues with changing the
>> > monkey-patching.  Since this is done only when someone wants to use the
>> > debugger - is making this change so bad?
>> >
>> >  https://review.openstack.org/56287
>>
>> Last I looked at the review, it wasn't known that thread=False was
>> specifically what was needed, IIRC.  That's good progress and is a bit
>> less surprising than the change before.
>>
>> I suppose if the options that enable this come with a giant warning,
>> it'd be OK.  Something like:
>>
>>   WARNING: Using this option changes how Nova uses the eventlet
>>   library to support async IO. This could result in failures that do
>>   not occur under normal operation. Use at your own risk.
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [eventlet] should we use spawn instead of spawn_n?

2013-11-28 Thread Jian Wen
eventlet.spawn_n is the same as eventlet.spawn, but it’s not possible
to know how the function terminated (i.e. no return value or exceptions)[1].
If an exception is raised in the function, spawn_n prints a stack trace.
The stack trace will not be written to the log file. It will be lost if we
restart the daemon.

Maybe we need to replace spawn_n with spawn. If an exception is raised in the
function, we can log it if needed. Any thoughts?
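As a minimal sketch of the difference (the logger setup is illustrative):

    import logging

    import eventlet

    LOG = logging.getLogger(__name__)

    def worker():
        raise RuntimeError('boom')

    def _log_failure(gt):
        try:
            gt.wait()  # re-raises whatever worker() raised
        except Exception:
            LOG.exception('greenthread failed')

    eventlet.spawn_n(worker)     # traceback goes to stderr only and is lost

    gt = eventlet.spawn(worker)  # returns a GreenThread we can observe
    gt.link(_log_failure)        # log the exception instead of losing it
    eventlet.sleep(0)            # yield so the greenthreads actually run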

related bug: https://bugs.launchpad.net/neutron/+bug/1254984

[1] http://eventlet.net/doc/basic_usage.html

-- 
Cheers,
Jian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] How to handle "simple" janitorial tasks?

2013-11-28 Thread Zhi Yan Liu
Hi Koo,

On Fri, Nov 29, 2013 at 9:15 AM, David koo  wrote:
> Hi All,
>
> A quick question about simple "janitorial" tasks ...
>
> I noticed that glance.api.v2.image_data.ImageDataController.upload has two
> identical "except" clauses (circa line 98):
> except exception.StorageFull as e:
> msg = _("Image storage media is full: %s") % e
> LOG.error(msg)
> raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
>   request=req)
>
> except exception.StorageFull as e:
> msg = _("Image storage media is full: %s") % e
> LOG.error(msg)
> raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
>   request=req)
>
> Obviously one of the "except" clauses can be removed (or am I missing
> something glaringly obvious?) - I shall be happy to do that but should I first
> raise some kind of "bug" or should I directly commit a fix or should I bring up
> such simple janitorial tasks to the mailing list here on a case-by-case basis
> for discussion first?
>

Eagle-eyed! I think it's a defect. I'd prefer you file a bug report
first and then prepare the patch (and put the bug id into the commit
message).

Reviewers can give you valuable feedback when they look at your patch,
and you can discuss it with them in the team room on IRC if needed. The
ML is a good place but it has more delay than IRC; I think simple
questions can be discussed in Gerrit or on IRC directly, but IMO the ML
is better for complicated topics or when you want feedback from across
different projects. And for topics with far-reaching changes you can
also use an etherpad or the wiki.

zhiyan

> I do realize that the definition of "simple" can vary from person to person
> and so (ideally) such cases should perhaps be brought to the list for
> discussion first. But I also worry about introducing noise into the list.
>
> --
> Koo
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-11-28 Thread Itsuro ODA
Hi,

I found it in /meeting/nuetron_lbaas.

(neutron -> nuetron)

On Fri, 29 Nov 2013 07:51:56 +0900
Itsuro ODA  wrote:

> Hi,
> 
> I can't find the 28th meeting log.
> (Aren't the logs generated automatically?)
> 
> Thanks.
> -- 
> Itsuro ODA 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] How to handle "simple" janitorial tasks?

2013-11-28 Thread David koo
Hi All,

A quick question about simple "janitorial" tasks ...

I noticed that glance.api.v2.image_data.ImageDataController.upload has two
identical "except" clauses (circa line 98):
except exception.StorageFull as e:
msg = _("Image storage media is full: %s") % e
LOG.error(msg)
raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
  request=req)

except exception.StorageFull as e:
msg = _("Image storage media is full: %s") % e
LOG.error(msg)
raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
  request=req)

Obviously one of the "except" clauses can be removed (or am I missing
something glaringly obvious?) - I shall be happy to do that but should I first
raise some kind of "bug" or should I directly commit a fix or should I bring up
such simple janitorial tasks to the mailing list here on a case-by-case basis
for discussion first?

I do realize that the definition of "simple" can vary from person to person
and so (ideally) such cases should perhaps be brought to the list for
discussion first. But I also worry about introducing noise into the list.

--
Koo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][hadoop][template] Does anyone has a hadoop template

2013-11-28 Thread Jay Lau
Hi,

I'm now trying to deploy a hadoop cluster with heat; just wondering if
someone has a heat template which could help me do the work.

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-11-28 Thread Itsuro ODA
Hi,

I can't find the 28th meeting log.
(Aren't the logs generated automatically?)

Thanks.
-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-11-28 Thread Itsuro ODA
Hi Eugene,

Thank you for the response.

I have a comment.
I think a 'provider' attribute should be added to the loadbalancer resource
and used rather than the pool's 'provider', since using multiple drivers
within a loadbalancer does not make sense.
What do you think?

I'm looking forward to your code going up!

Thanks.
Itsuro Oda

On Thu, 28 Nov 2013 16:58:40 +0400
Eugene Nikanorov  wrote:

> Hi Itsuro,
> 
> I've updated the wiki with some examples of cli workflow that illustrate
> proposed API.
> Please see the updated page:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance#API_change
> 
> Thanks,
> Eugene.
> 
> 
> On Thu, Nov 28, 2013 at 3:00 AM, Itsuro ODA  wrote:
> 
> > Hi,
> >
> > I'd like to review the LoadbalancerInstance API specification.
> > Please update the wiki page before the meeting.
> >
> > (It is a little bit hard for me to follow on IRC since
> > I'm not a native English speaker, so I'd like to consider the
> > API beforehand.)
> >
> > Thanks.
> > Itsuro Oda
> >
> > On Wed, 27 Nov 2013 14:07:47 +0400
> > Eugene Nikanorov  wrote:
> >
> > > Hi Neutron folks,
> > >
> > > LBaaS subteam meeting will be on Thursday, 27, at 14-00 UTC as usual.
> > > We'll discuss current progress and continue with feature design.
> > >
> > > Thanks,
> > > Eugene.
> >
> > --
> > Itsuro ODA 
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Robert Collins
On 29 November 2013 09:44, Gary Kotton  wrote:
>
>
> The first stage is technical - move Nova scheduling code from A to B.
> What do we achieve - not much - we actually complicate things - there is
> always churn in Nova and we will have duplicate code bases. In addition to
> this, the only service that can actually make use of it is Nova.
>
> The second stage is defining an API that other modules can use (we have
> yet to decide if this will be RPC based or have an interface like Glance,
> Cinder etc.)
> We have yet to even talk about the API's.
> The third stage is adding shiny new features and trying to not have a
> community tar and feather us.

Yup; I look forward to our tar and feathering overlords. :)

> Prior to copying code we really need to discuss the API's.

I don't think we do: it's clear that we need to come up with them -
it's necessary, and no one has expressed any doubt about the ability to
do that. RPC API evolution is fairly well understood - we add a new
method, and have it do the necessary, then we go to the users and get
them using it, then we delete the old one.

> This can even
> be done in parallel if your concern is time and resources. But the point
> is we need a API to interface with the service. For a start we can just
> address the Nova use case. We need to at least address:
> 1. Scheduling interface
> 2. Statistics updates
> 3. API's for configuring the scheduling policies
>
> Later these will all need to bode well with all of the existing modules
> that we want to support - Nova, Cinder and Neutron (if I missed one then
> feel free to kick me whilst I am down)

Ironic perhaps.

> I do not think that we should focus on failure modes, we should plan it
> and break it up so that it will be usable and functional and most
> importantly useful in the near future.
>
> How about next week we sit together and draw up a wiki of the flows, data
> structures and interfaces. Let's go from there.

While I disagree about us needing to do it right now, I'm very happy
to spend some time on it - I don't want to stop folk doing work that
needs to be done!

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-28 Thread Doug Hellmann
On Thu, Nov 28, 2013 at 1:00 PM, Devananda van der Veen <
devananda@gmail.com> wrote:

>
> On Nov 25, 2013 7:13 PM, "Doug Hellmann" 
> wrote:
> >
> >
> >
> >
> > On Mon, Nov 25, 2013 at 3:56 PM, Devananda van der Veen <
> devananda@gmail.com> wrote:
> >>
> >> Hi!
> >>
> >> Very good questions. I think most of them are directed towards the
> Ceilometer team, but I have answered a few bits inline.
> >>
> >>
> >> On Mon, Nov 25, 2013 at 7:24 AM, wanghaomeng 
> wrote:
> >>>
> >>>
> >>> Hello all:
> >>>
> >>> Basically, I understand the solution is - Our Ironic will implement an
> IPMI driver
> >>
> >>
> >> We will need to add a new interface -- for example,
> ironic.drivers.base.BaseDriver:sensor and the corresponding
> ironic.drivers.base.SensorInterface class, then implement this interface as
> ironic.drivers.modules.ipmitool:IPMISensor
> >>
> >> We also need to define the methods this interface supports and what the
> return data type is for each method. I imagine it may be something like:
> >> - SensorInterface.list_available_sensors(node) returns a list of sensor
> names for that node
> >> - SensorInterface.get_measurements(node, list_of_sensor_names) returns
> a dict of dicts, eg, { 'sensor_1': {'key': 'value'}, 'sensor_2': ...}
> >>
> >>>
> >>> (extendable framework for more drivers) to collect hardware sensor
> data(cpu temp, fan speed, volts, etc) via IPMI protocol from hardware
> server node, and emit the AMQP message to Ceilometer Collector, Ceilometer
> have the framework to handle the valid sample message and save to the
> database for data retrieving by consumer.
> >>>
> >>> Now, how do you think if we should clearly define the interface & data
> model specifications between Ironic and Ceilometer to enable IPMI data
> collecting, then our two team can start the coding together?
> >>
> >>
> >> I think this is just a matter of understanding Ceilometer's API so that
> Ironic can emit messages in the correct format. You've got many good
> questions for the Ceilometer team on this below.
> >>
> >>>
> >>>
> >>> And I still have some concern with our interface and data model as
> below, the spec need to be discussed and finalized:
> >>>
> >>> 1. What are the mandatory attributes of the Ceilometer sample data, such
> as instance_id/tenant_id/user_id/resource_id? If they are not optional, where
> are these data populated from, the Ironic or Ceilometer side?
> >>>
> >>>
> >>>   name/type/unit/volume/timestamp - basic sample property, can be
> populated from Ironic side as data source
> >>>   user_id/project_id/resource_id - Ironic or Ceilometer populate these
> fields??
> >
> >
> > Ceilometer knows nothing about resources unless it is told, so all of
> the required fields have to be provided by the sender.
> >
> >
> >>>
> >>>   resource_metadata - this is for Ceilometer metadata query, Ironic
> know nothing for such resource metadata I think
> >
> >
> > The resource metadata depends on the resource type, but should be all of
> the user-visible attributes for that object stored in the database at the
> time the measurement is taken. For example, for instances we (try to) get
> all of the instance attributes.
> >
>
> We could send all the node.properties. Getting into node.driver_info
> would expose passwords and such, so we shouldn't send that.
>

Agreed.



> >>>
> >>>   source - can we hard-code 'hardware' as a source identifier?
> >
> >
> > No, the source is the source of the user and project ids, not the source
> of the measurement (the data source is implied by the meter name). The
> default source for user and project is "openstack" to differentiate from an
> add-on layer like a PaaS where there are different user or project ids.
> >
> >
> >>>
> >>>
> >>
> >> Ironic can cache the user_id and project_id of the instance. These will
> not be present for unprovisioned nodes.
> >>
> >> I'm not sure what "resource_id" is in this context, perhaps the nova
> instance_uuid? If so, Ironic has that as well.
> >
> >
> > Do end-users know about bare metal servers before they are provisioned
> as instances? Can a regular user, for example, ask for the list of available
> servers or find details about one by name or id?
> >
> >
>
> There is an API service which exposes information about unprovisioned
> servers. At the moment, it is admin-only. If you think of an end-user as
> someone using tuskar, they will likely want to know about unprovisioned
> servers.
>
OK, then some form of auditing event (similar to the instance and network
"exists" events) might make sense. I think those are a lower priority than
the standard CRUD events, though.

> >>
> >>
> >>>
> >>> 2. Not sure if Ceilometer only accepts signed messages; if that is the
> case, how does Ironic get the message trusted by Ceilometer, and send a valid
> message which can be accepted by the Ceilometer Collector?
> >
> >
> > I'm not sure it's appropriate for ironic to be sending messages using
> ceilometer's sample format. We receive data from the other projects using
> the more generic notification system, and that seems like the right tool to
> use here, too. Unless the other ceilometer devs disagree?
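For illustration, a rough sketch of the sensor interface described earlier in
this thread; the class and method names come from the proposal, but the bodies
are invented placeholders, not a committed implementation:

    import abc

    class SensorInterface(abc.ABC):
        """Sketch of the proposed ironic.drivers.base.SensorInterface."""

        @abc.abstractmethod
        def list_available_sensors(self, node):
            """Return a list of sensor names for the node."""

        @abc.abstractmethod
        def get_measurements(self, node, sensor_names):
            """Return a dict of dicts, e.g. {'sensor_1': {'key': 'value'}}."""

    class IPMISensor(SensorInterface):
        """Sketch of ironic.drivers.modules.ipmitool:IPMISensor."""

        def list_available_sensors(self, node):
            # A real driver would query the node's BMC, e.g. via ipmitool.
            return ['cpu_temp', 'fan_speed']

        def get_measurements(self, node, sensor_names):
            # Placeholder values; a real driver would read the sensors.
            return {name: {'value': '42'} for name in sensor_names}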

Re: [openstack-dev] [Solum] Configuration options placement

2013-11-28 Thread Doug Hellmann
On Wed, Nov 27, 2013 at 5:21 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi,
>
> I am working on the user-authentication BP implementation. I need to
> introduce a new configuration option to enable or disable keystone
> authentication for incoming requests. I am looking for the right place for
> this option.
>
> The current situation is that we have two places for configuration: one is
> oslo.config and the second one is the pecan configuration. My initial
> intention was to add all parameters to the solum.conf file like it is done
> for nova. The keystone middleware uses oslo.config for keystone connection
> parameters anyway.
> At the same time there are projects (Ceilometer and Ironic) which have
> enable_acl parameter as a part of pecan config.
>
> From my perspective it is not reasonable to have authentication options in
> two different places. I would rather use solum.conf for all parameters and
> limit pecan config usage to pecan specific options.
>

Yes, I think the intent for ceilometer was to add a separate configuration
option to replace the one in the pecan config and that we just overlooked
doing that. It will certainly happen before any of that app code makes its
way into Oslo (planned for this cycle).
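As a minimal sketch of the solum.conf approach, assuming an option named
'enable_authentication' (the actual option name was still to be decided):

    from oslo.config import cfg

    OPTS = [
        cfg.BoolOpt('enable_authentication', default=True,
                    help='Authenticate incoming requests via keystone.'),
    ]
    cfg.CONF.register_opts(OPTS)

    # The API pipeline can then consult cfg.CONF.enable_authentication
    # instead of a pecan-config flag such as enable_acl.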

Doug



>
> I am looking for your input on this.
>
> Thanks,
> Georgy
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Gary Kotton


On 11/28/13 8:12 PM, "Robert Collins"  wrote:

>On 29 November 2013 04:50, Gary Kotton  wrote:
>
>> I am not really sure how we can have a client tree without even having
>> discussed the API's and interfaces. From the initial round of emails the
>> intention was to make use of the RPC mechanism to speak with the
>> scheduler.
>
>It still is. We have an existing RPC API in the nova/scheduler tree.
>
>> One option worth thinking about is to introduce a new scheduling driver to
>> nova - this driver will interface with the external scheduler. This will
>> let us define the scheduling API, model etc, without being in the current
>> confines of Nova. This will also enable all of the other modules, for
>> example Cinder to hook into it.
>
>The problem is that that is the boil-the-ocean approach that hasn't
>succeeded for several cycles. We have interest in trying something new
>- there are three failure modes I can see:
>A - we fail to split it out enough, and noone else uses it.
>B - we introduce a performance / correctness problem during the split out
>C - we stall and don't do anything

The first stage is technical - move Nova scheduling code from A to B.
What do we achieve - not much - we actually complicate things - there is
always churn in Nova and we will have duplicate code bases. In addition to
this, the only service that can actually make use of it is Nova.

The second stage is defining an API that other modules can use (we have
yet to decide if this will be RPC based or have an interface like Glance,
Cinder etc.)
We have yet to even talk about the API's.
The third stage is adding shiny new features and trying to not have a
community tar and feather us.

Prior to copying code we really need to discuss the API's. This can even
be done in parallel if your concern is time and resources. But the point
is we need a API to interface with the service. For a start we can just
address the Nova use case. We need to at least address:
1. Scheduling interface
2. Statistics updates
3. API's for configuring the scheduling policies

Later these will all need to bode well with all of the existing modules
that we want to support - Nova, Cinder and Neutron (if I missed one then
feel free to kick me whilst I am down)
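As a strawman for the three areas above (all names are illustrative, nothing
here is an agreed API):

    import abc

    class ExternalSchedulerAPI(abc.ABC):

        @abc.abstractmethod
        def select_destinations(self, context, request_spec):
            # 1. Scheduling: pick hosts for a request.
            pass

        @abc.abstractmethod
        def update_stats(self, context, host, stats):
            # 2. Statistics: receive capacity/usage updates from services.
            pass

        @abc.abstractmethod
        def set_policy(self, context, name, rules):
            # 3. Policy: configure the scheduling policies.
            pass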

I do not think that we should focus on failure modes, we should plan it
and break it up so that it will be usable and functional and most
importantly useful in the near future.

How about next week we sit together and draw up a wiki of the flows, data
structures and interfaces. Let's go from there.

Thanks
Gary


>
>Right now, I am mainly worried about B. Getting good APIs is step two
>after getting a solid split out scheduler.
>
>-Rob
>
>
>-- 
>Robert Collins 
>Distinguished Technologist
>HP Converged Cloud
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-11-28 Thread Salvatore Orlando
Perhaps it's because the l3 agent log is now named q-vpn.log if the vpn
service is enabled as well.

Salvatore




On 28 November 2013 18:50, Armando M.  wrote:

> I have been doing so in a number of patches I pushed to reduce error
> traces due to the communication between the server and the dhcp agent.
>
> I wanted to take care of the l3 agent too, but one thing I noticed is
> that I couldn't find a log for it (I mean on the artifacts that are
> published at job's completion). Actually, I couldn't find an l3 agent
> started by devstack either.
>
> Am I missing something?
>
> On 27 November 2013 09:08, Salvatore Orlando  wrote:
> > Thanks Maru,
> >
> > This is something my team had on the backlog for a while.
> > I will push some patches to contribute towards this effort in the next
> few
> > days.
> >
> > Let me know if you're already thinking of targeting the completion of
> this
> > job for a specific deadline.
> >
> > Salvatore
> >
> >
> > On 27 November 2013 17:50, Maru Newby  wrote:
> >>
> >> Just a heads up, the console output for neutron gate jobs is about to
> get
> >> a lot noisier.  Any log output that contains 'ERROR' is going to be
> dumped
> >> into the console output so that we can identify and eliminate
> unnecessary
> >> error logging.  Once we've cleaned things up, the presence of unexpected
> >> (non-whitelisted) error output can be used to fail jobs, as per the
> >> following Tempest blueprint:
> >>
> >> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
> >>
> >> I've filed a related Neutron blueprint for eliminating the unnecessary
> >> error logging:
> >>
> >>
> >>
> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
> >>
> >> I'm looking for volunteers to help with this effort, please reply in
> this
> >> thread if you're willing to assist.
> >>
> >> Thanks,
> >>
> >>
> >> Maru
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Julien Danjou
On Thu, Nov 28 2013, Sean Dague wrote:

> I'm totally in favor of going further and saying "empty files shouldn't
> have license headers, because their content of emptiness isn't
> copyrightable" [1]. That's just not how it's written today.

I went ahead and sent a first patch:

  https://review.openstack.org/#/c/59090/

Help appreciated. :)

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Roman Prykhodchenko
Sean, Julien,

That really makes sense. I've seen cases where folks -1ed patches for not
having the header in empty files, referring to that "...all source files..."
phrase. That's why I think it's reasonable to add your comments to the
Hacking rules.

- Roman

On Nov 28, 2013, at 20:08 , Sean Dague  wrote:

> On 11/28/2013 01:01 PM, Julien Danjou wrote:
>> On Thu, Nov 28 2013, Roman Prykhodchenko wrote:
>> 
>>> The point of this email is _not_ to blame someone or to push my personal
>>> opinion to the folks who gave me the feedback. What I'm trying to do is
>>> to bring more clarity to our hacking rules because, as I see, currently
>>> different folks interpret them differently.
>> 
>> Anyway, having headers in empty files sounds just dumb.
>> 
>> Maybe a mistake that has been transformed into a rule?
> 
> When we wrote the hacking rule for the license check basically we didn't
> want to overreach and cause a ton of work on projects to purge this. So
> basically, for any file < 10 lines we don't enforce the Apache license
> header check. This allows __init__.py files to be either empty (which is
> what they should be), or have the header. It just doesn't check for
> trivially small files.
> 
> I'm totally in favor of going further and saying "empty files shouldn't
> have license headers, because their content of emptiness isn't
> copyrightable" [1]. That's just not how it's written today.
> 
>   -Sean
> 
> 1. Philip Glass might disagree -
> http://en.wikipedia.org/wiki/4%E2%80%B233%E2%80%B3
> 
> -- 
> Sean Dague
> http://dague.net
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Robert Collins
On 29 November 2013 04:50, Gary Kotton  wrote:

> I am not really sure how we can have a client tree without even having
> discussed the API's and interfaces. From the initial round of emails the
> intention was to make use of the RPC mechanism to speak with the scheduler.

It still is. We have an existing RPC API in the nova/scheduler tree.

> One option worth thinking about is to introduce a new scheduling driver to
> nova - this driver will interface with the external scheduler. This will
> let us define the scheduling API, model etc, without being in the current
> confines of Nova. This will also enable all of the other modules, for
> example Cinder to hook into it.

The problem is that that is the boil-the-ocean approach that hasn't
succeeded for several cycles. We have interest in trying something new
- there are three failure modes I can see:
A - we fail to split it out enough, and noone else uses it.
B - we introduce a performance / correctness problem during the split out
C - we stall and don't do anything

Right now, I am mainly worried about B. Getting good APIs is step two
after getting a solid split out scheduler.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Sean Dague
On 11/28/2013 01:01 PM, Julien Danjou wrote:
> On Thu, Nov 28 2013, Roman Prykhodchenko wrote:
> 
>> The point of this email is _not_ to blame someone or to push my personal
>> opinion to the folks who gave me the feedback. What I'm trying to do is
>> to bring more clarity to our hacking rules because, as I see, currently
>> different folks interpret them differently.
> 
> Anyway, having headers in empty file sounds just dumb.
> 
> Maybe a mistake that has been transformed into a rule?

When we wrote the hacking rule for the license check basically we didn't
want to overreach and cause a ton of work on projects to purge this. So
basically, for any file < 10 lines we don't enforce the Apache license
header check. This allows __init__.py files to be either empty (which is
what they should be), or have the header. It just doesn't check for
trivially small files.
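Roughly, the relaxed check behaves like this sketch (not the actual hacking
source; H102 is the real error code for a missing header):

    def check_for_license_header(lines):
        # Trivially small files (e.g. empty __init__.py) are exempt.
        if len(lines) < 10:
            return
        head = ''.join(lines[:15])
        if 'Licensed under the Apache License' not in head:
            yield 0, 'H102: Apache 2.0 license header not found'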

I'm totally in favor of going further and saying "empty files shouldn't
have license headers, because their content of emptiness isn't
copyrightable" [1]. That's just not how it's written today.

-Sean

1. Philip Glass might disagree -
http://en.wikipedia.org/wiki/4%E2%80%B233%E2%80%B3

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Julien Danjou
On Thu, Nov 28 2013, Roman Prykhodchenko wrote:

> The point of this email is _not_ to blame someone or to push my personal
> opinion to the folks who gave me the feedback. What I'm trying to do is
> to bring more clarity to our hacking rules because, as I see, currently
> different folks interpret them differently.

Anyway, having headers in empty files sounds just dumb.

Maybe a mistake that has been transformed into a rule?

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-28 Thread Devananda van der Veen
On Nov 25, 2013 7:13 PM, "Doug Hellmann" 
wrote:
>
>
>
>
> On Mon, Nov 25, 2013 at 3:56 PM, Devananda van der Veen <
devananda@gmail.com> wrote:
>>
>> Hi!
>>
>> Very good questions. I think most of them are directed towards the
Ceilometer team, but I have answered a few bits inline.
>>
>>
>> On Mon, Nov 25, 2013 at 7:24 AM, wanghaomeng  wrote:
>>>
>>>
>>> Hello all:
>>>
>>> Basically, I understand the solution is - Our Ironic will implement an
IPMI driver
>>
>>
>> We will need to add a new interface -- for example,
ironic.drivers.base.BaseDriver:sensor and the corresponding
ironic.drivers.base.SensorInterface class, then implement this interface as
ironic.drivers.modules.ipmitool:IPMISensor
>>
>> We also need to define the methods this interface supports and what the
return data type is for each method. I imagine it may be something like:
>> - SensorInterface.list_available_sensors(node) returns a list of sensor
names for that node
>> - SensorInterface.get_measurements(node, list_of_sensor_names) returns a
dict of dicts, eg, { 'sensor_1': {'key': 'value'}, 'sensor_2': ...}
>>
>>>
>>> (extendable framework for more drivers) to collect hardware sensor
data(cpu temp, fan speed, volts, etc) via IPMI protocol from hardware
server node, and emit the AMQP message to Ceilometer Collector, Ceilometer
have the framework to handle the valid sample message and save to the
database for data retrieving by consumer.
>>>
>>> Now, how do you think if we should clearly define the interface & data
model specifications between Ironic and Ceilometer to enable IPMI data
collecting, then our two team can start the coding together?
>>
>>
>> I think this is just a matter of understanding Ceilometer's API so that
Ironic can emit messages in the correct format. You've got many good
questions for the Ceilometer team on this below.
>>
>>>
>>>
>>> And I still have some concern with our interface and data model as
below, the spec need to be discussed and finalized:
>>>
>>> 1. What are the mandatory attributes of the Ceilometer sample data, such
as instance_id/tenant_id/user_id/resource_id? If they are not optional, where
are these data populated from, the Ironic or Ceilometer side?
>>>
>>>
>>>   name/type/unit/volume/timestamp - basic sample property, can be
populated from Ironic side as data source
>>>   user_id/project_id/resource_id - Ironic or Ceilometer populate these
fields??
>
>
> Ceilometer knows nothing about resources unless it is told, so all of the
required fields have to be provided by the sender.
>
>
>>>
>>>   resource_metadata - this is for Ceilometer metadata query, Ironic
know nothing for such resource metadata I think
>
>
> The resource metadata depends on the resource type, but should be all of
the user-visible attributes for that object stored in the database at the
time the measurement is taken. For example, for instances we (try to) get
all of the instance attributes.
>

We could send all the node.properties. Getting into node.driver_info would
expose passwords and such, so we shouldn't send that.

>>>
>>>   source - can we hard-code 'hardware' as a source identifier?
>
>
> No, the source is the source of the user and project ids, not the source
of the measurement (the data source is implied by the meter name). The
default source for user and project is "openstack" to differentiate from an
add-on layer like a PaaS where there are different user or project ids.
>
>
>>>
>>>
>>
>> Ironic can cache the user_id and project_id of the instance. These will
not be present for unprovisioned nodes.
>>
>> I'm not sure what "resource_id" is in this context, perhaps the nova
instance_uuid? If so, Ironic has that as well.
>
>
> Do end-users know about bare metal servers before they are provisioned as
instances? Can a regular user, for example, ask for the list of available
servers or find details about one by name or id?
>
>

There is an API service which exposes information about unprovisioned
servers. At the moment, it is admin-only. If you think of an end-user as
someone using tuskar, they will likely want to know about unprovisioned
servers.

>>
>>
>>>
>>> 2. Not sure if Ceilometer only accepts signed messages; if that is the
case, how does Ironic get the message trusted by Ceilometer, and send a valid
message which can be accepted by the Ceilometer Collector?
>
>
> I'm not sure it's appropriate for ironic to be sending messages using
ceilometer's sample format. We receive data from the other projects using
the more generic notification system, and that seems like the right tool to
use here, too. Unless the other ceilometer devs disagree?
>
>
>>>
>>>
>>> 3. What is the Ceilometer sample data structure, and what is the minimum
data item set for the IPMI message to be emitted to the Collector?
>>>   name/type/unit/volume/timestamp/source - is this the minimum data item set?
>>>
>>> 4. Should the detailed data model be defined for our IPMI data now? What
is our first version's scope, and how many IPMI data types should we
support? Here is an IPMI data sample l

Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-11-28 Thread Armando M.
I have been doing so in a number of patches I pushed to reduce error
traces due to the communication between the server and the dhcp agent.

I wanted to take care of the l3 agent too, but one thing I noticed is
that I couldn't find a log for it (I mean on the artifacts that are
published at job's completion). Actually, I couldn't find an l3 agent
started by devstack either.

Am I missing something?

On 27 November 2013 09:08, Salvatore Orlando  wrote:
> Thanks Maru,
>
> This is something my team had on the backlog for a while.
> I will push some patches to contribute towards this effort in the next few
> days.
>
> Let me know if you're already thinking of targeting the completion of this
> job for a specific deadline.
>
> Salvatore
>
>
> On 27 November 2013 17:50, Maru Newby  wrote:
>>
>> Just a heads up, the console output for neutron gate jobs is about to get
>> a lot noisier.  Any log output that contains 'ERROR' is going to be dumped
>> into the console output so that we can identify and eliminate unnecessary
>> error logging.  Once we've cleaned things up, the presence of unexpected
>> (non-whitelisted) error output can be used to fail jobs, as per the
>> following Tempest blueprint:
>>
>> https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
>>
>> I've filed a related Neutron blueprint for eliminating the unnecessary
>> error logging:
>>
>>
>> https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
>>
>> I'm looking for volunteers to help with this effort, please reply in this
>> thread if you're willing to assist.
>>
>> Thanks,
>>
>>
>> Maru
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Denis Makogon
Good question, Roman. I'm also interested in this. Are there any
best practices for header usage? Should we place headers wherever needed?


2013/11/28 Roman Prykhodchenko 

> Hi folks,
>
> according to our hacking rules all source files should contain the Apache
> license header in the beginning (
> http://docs.openstack.org/developer/hacking/#openstack-licensing).
> There are special files that in most of the cases are empty, i.e.,
> __init__.py.
>
> I used to put license headers into __init__ files when I was working on
> Neutron or Ironic. However, recently I got
> feedback on one of my patches from several folks saying that license
> headers should be removed from __init__ files because
> empty files are not source files.
>
> The point of this email is _not_ to blame someone or to push my personal
> opinion to the folks who gave me the feedback. What I'm trying to do is
> to bring more clarity to our hacking rules because, as I see, currently
> different folks interpret them differently.
>
>
> - Roman
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-28 Thread Jeremy Stanley
On 2013-11-28 06:46:27 -0500 (-0500), Yair Fried wrote:
[...]
> 4. Jeremy Stanley - "test check for no fewer than three addresses"
> -- Why?

If your tests try to communicate with addresses which are truly
outside your own network, and thus outside your sphere of control,
you don't want them failing because of maintenance on the system
being pinged (don't just trust in a third party's HA to be working
around the clock--I've been burned there before as well). Also, the
Internet is not generally trustworthy, least of all for a
low-priority protocol like ICMP type echo/echo-reply. Send several
each to several remote addresses and as long as at least one reply
comes back, there is *probably* working egress.
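A minimal sketch of that belt-and-braces check (the target list is purely
illustrative - these are RFC 5737 documentation addresses, and a real test
would take its targets from configuration):

    import subprocess

    TARGETS = ['192.0.2.10', '198.51.100.10', '203.0.113.10']

    def has_working_egress(pings=3, timeout=2):
        for addr in TARGETS:
            rc = subprocess.call(['ping', '-c', str(pings),
                                  '-W', str(timeout), addr])
            if rc == 0:  # any single responder implies egress works
                return True
        return False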

Also, even at that level of belt-and-braces paranoia, I'm pretty
sure we won't want this test enabled in upstream CI unless we can
fake an "external" address on a loopback within the DevStack server
itself (or perhaps if the addresses belong to remote things on which
the test already depends, such as our PyPI mirror and Git farm, but
that still adds a requirement for layer 4 NAT on the DevStack VM).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Hacking] License headers in empty files

2013-11-28 Thread Roman Prykhodchenko
Hi folks,

according to our hacking rules all source files should contain the Apache
license header at the beginning
(http://docs.openstack.org/developer/hacking/#openstack-licensing).
There are special files that in most cases are empty, e.g., __init__.py.

I used to put license headers into __init__ files when I was working on Neutron
or Ironic. However, recently I got
feedback on one of my patches from several folks saying that license
headers should be removed from __init__ files because
empty files are not source files.

The point of this email is _not_ to blame someone or to push my personal
opinion to the folks who gave me the feedback. What I'm trying to do is to
bring more clarity to our hacking rules because, as I see, currently different
folks interpret them differently.


- Roman





Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Sylvain Bauza

On 28/11/2013 17:04, Chris Friesen wrote:

On 11/28/2013 09:50 AM, Gary Kotton wrote:

One option worth thinking about is to introduce a new scheduling driver to
nova - this driver will interface with the external scheduler. This will
let us define the scheduling API, model etc., without being in the current
confines of Nova. This will also enable all of the other modules, for
example Cinder, to hook into it.


I see a couple of nice things about this proposal:

1) Going this route means that we're free to mess with the APIs to some
extent since they're not really "public" yet.

2) Once we have API parity with the current schedulers all in one place
then we'll be able to more easily start extracting common stuff.


I agree with Gary. From my POV, I think it's really important to define
what the interfaces are, what is passed to the scheduler, and what is
given back by the scheduler.
Of course, Nova is the first starting point for knowing what to define,
but we need to make sure the I/Os are modular enough to support any type
of thing to schedule (Cinder backends, Manila FSes, Climate
compute-hosts, and so forth...)


-Sylvain



Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Chris Friesen

On 11/28/2013 09:50 AM, Gary Kotton wrote:


One option worth thinking about is to introduce a new scheduling driver to
nova - this driver will interface with the external scheduler. This will
let us define the scheduling API, model etc, without being in the current
confines of Nova. This will also enable all of the other modules, for
example Cinder to hook into it.


I see a couple nice things about this proposal:

1) Going this route means that we're free to mess with the APIs to some 
extent since they're not really "public" yet.


2) Once we have API parity with the current schedulers all in one place 
then we'll be able to more easily start extracting common stuff.


Chris



Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-28 Thread Gary Kotton


On 11/28/13 12:10 AM, "Robert Collins"  wrote:

>On 25 November 2013 21:51, Sylvain Bauza  wrote:
>> As said earlier, I also would love to join the team, triggering a few
>> blueprints or so.
>>
>> By the way, I'm currently reviewing the Scheduler code. Do you began to
>> design the API queries or do you need help for that ?
>>
>> -Sylvain
>
>https://blueprints.launchpad.net/nova/+spec/remove-cast-to-schedule-run-instance
>is a prerequisite for nova to use the split-out scheduler, but I
>think we can begin before that is complete, by doing the work on the
>new trees:
>
> - setting up the basic trees we'll need (a service tree and a client
>tree) as openstack-infra/config changes

I am not really sure how we can have a client tree without even having
discussed the APIs and interfaces. From the initial round of emails, the
intention was to make use of the RPC mechanism to speak with the scheduler.

One option worth thinking about is to introduce a new scheduling driver to
nova - this driver will interface with the external scheduler. This will
let us define the scheduling API, model etc, without being in the current
confines of Nova. This will also enable all of the other modules, for
example Cinder to hook into it.

To be honest, I think that is a much cleaner way of going about it.
Once the driver is working, we can then talk about deprecating the
existing drivers.
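
As a rough illustration only (class and client names here are hypothetical,
not a working patch), such a driver shim could be as thin as:

from nova.scheduler import driver


class ExternalSchedulerDriver(driver.Scheduler):
    """Sketch: delegate placement decisions to an external scheduler
    service instead of computing them locally."""

    def __init__(self, *args, **kwargs):
        super(ExternalSchedulerDriver, self).__init__(*args, **kwargs)
        # Assumed RPC client for the external service.
        self.client = ExternalSchedulerClient()

    def select_destinations(self, context, request_spec, filter_properties):
        # No local scheduling logic: forward the request and trust
        # the external service's placement decision.
        return self.client.select_destinations(
            context, request_spec, filter_properties)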

My thoughts are:
1. Let's start to define the external scheduler APIs - say V1 - supporting
all existing Nova, Cinder, Neutron etc. - that is, having parity with these
2. Start to think about the new and shiny scheduling features

How about we draw up a plan for #1 and then see how we can divide up the
work and set milestones etc.

The APIs can evolve, but we need to get the initial engine (which will be
based on nova code) up and running…

Happy holidays

> - picking an interim name (e.g. external-scheduler and
>python-external-schedulerclient)
>
>However, let's get russelb to approve the blueprint
>https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout
>first.
>
>Cheers,
>Rob
>




[openstack-dev] request-id in API response

2013-11-28 Thread Akihiro Motoki
Hi,

I am working on adding a request-id to API responses in Neutron.
After checking which header is used in other projects, I found
the header name varies project by project.
It seems there is no consensus on which header is recommended,
so it would be better to reach one.

  nova: x-compute-request-id
  cinder:   x-compute-request-id
  glance:   x-openstack-request-id
  neutron:  x-network-request-id  (under review)

The request-id is assigned and used inside each project now,
so x-<project>-request-id looks good. On the other hand,
if we have a plan to extend request-ids across projects,
x-openstack-request-id looks better.
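
Whichever name wins, the mechanics are roughly a small piece of WSGI
middleware along these lines - a sketch, not any project's actual
implementation:

import uuid

import webob.dec

HEADER = 'x-openstack-request-id'  # the only point in question


class RequestIdMiddleware(object):
    """Sketch: tag every API response with the request id generated
    for (and logged against) the incoming request."""

    def __init__(self, application):
        self.application = application

    @webob.dec.wsgify
    def __call__(self, req):
        req.environ['openstack.request_id'] = 'req-%s' % uuid.uuid4()
        response = req.get_response(self.application)
        response.headers[HEADER] = req.environ['openstack.request_id']
        return response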

Thought?

Thanks,
Akihiro


Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-28 Thread Sean Dague
On 11/28/2013 05:13 AM, Daniel P. Berrange wrote:

> NB, technically we should have separate CI running for each hypervisor
> that libvirt is able to talk to, so there'd likely want to be dedicated
> CI infrastructure for the libvirt+bhyve combination regardless, perhaps
> it would need less overall though.

Ideally, yes. However I think it's completely fair to have in-tree Class
C hypervisors which don't have separate CI if they are behind the
libvirt interface, which is highly tested in tree. That's my opinion
only, so don't take it as project-level guidance. However, if we are
really telling folks to do the right thing and go through libvirt
instead of building new Nova drivers because it reduces the nova
maintenance burden, there should be an incentive for doing so, like
getting the really minimal nova changes in tree to just configure their
libvirt endpoints correctly without having to stand up the whole CI path.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-11-28 Thread Eugene Nikanorov
Hi Itsuro,

I've updated the wiki with some examples of cli workflow that illustrate
proposed API.
Please see the updated page:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance#API_change

Thanks,
Eugene.


On Thu, Nov 28, 2013 at 3:00 AM, Itsuro ODA  wrote:

> Hi,
>
> I'd like to review the LoadbalancerInstance API specification.
> Please update the wiki page before the meeting.
>
> (It is a little bit hard for me to follow on IRC since
> I'm not a native English speaker, so I'd like to consider the
> API beforehand.)
>
> Thanks.
> Itsuro Oda
>
> On Wed, 27 Nov 2013 14:07:47 +0400
> Eugene Nikanorov  wrote:
>
> > Hi Neutron folks,
> >
> > LBaaS subteam meeting will be on Thursday, 27, at 14-00 UTC as usual.
> > We'll discuss current progress and continue with feature design.
> >
> > Thanks,
> > Eugene.
>
> --
> Itsuro ODA 
>
>


Re: [openstack-dev] ml2 and vxlan configurations, neutron-server fails to start

2013-11-28 Thread Trinath Somanchi



Can you post the contents of the neutron.conf file too?

Also, the complete neutron log.

Check the SQLAlchemy version compatibility.
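
The traceback below is a pkg_resources VersionConflict: the installed
SQLAlchemy 0.8.3 does not satisfy the declared requirement. You can
reproduce the check directly, for example:

import pkg_resources

try:
    # The requirement string from the error output.
    pkg_resources.require('SQLAlchemy>=0.7.8,<=0.7.99')
except pkg_resources.VersionConflict as exc:
    print('Version conflict: %s' % exc)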

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Gopi Krishna B [mailto:gopi97...@gmail.com]
Sent: Thursday, November 28, 2013 5:32 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] ml2 and vxlan configurations, neutron-server fails to 
start


Hi
I am configuring Havana on Fedora 19 and observing the errors below with
neutron. Please help me resolve this issue.
I copied only a few lines from server.log; in case the full log is
required, let me know.

/etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = neutron.plugins.ml2.drivers.OpenvswitchMechanismDriver
network_vlan_ranges = physnet1:1000:2999
vni_ranges = 5000:6000
vxlan_group = 239.10.10.1


ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver

ERROR stevedore.extension [-] Could not load 'local': (SQLAlchemy 0.8.3 
(/usr/lib64/python2.7/site-packages), 
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
 ERROR stevedore.extension [-] (SQLAlchemy 0.8.3 
(/usr/lib64/python2.7/site-packages), 
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))

ERROR stevedore.extension [-] Could not load 'vxlan': (SQLAlchemy 0.8.3 
(/usr/lib64/python2.7/site-packages), 
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR stevedore.extension [-] (SQLAlchemy 0.8.3 
(/usr/lib64/python2.7/site-packages), 
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
TRACE stevedore.extension VersionConflict: (SQLAlchemy 0.8.3 
(/usr/lib64/python2.7/site-packages), 
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))

ERROR neutron.common.config [-] Unable to load neutron from configuration file 
/etc/neutron/api-paste.ini.
TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by 
'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 
'filter-app') found in config /etc/neutron/api-paste.ini


 ERROR neutron.service [-] In serve_wsgi()
TRACE neutron.service RuntimeError: Unable to load quantum from configuration 
file /etc/neutron/api-paste.ini.

Regards
Gopi Krishna





Re: [openstack-dev] [nova][libvirt] Gate bug 'libvirtError: Unable to read from monitor: Connection reset by peer'

2013-11-28 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 07:26:03PM +, Jeremy Stanley wrote:
> On 2013-11-27 19:18:12 + (+), Daniel P. Berrange wrote:
> [...]
> > It would be desirable if the gate logs included details of all software
> > package versions installed. eg so we can see what libvirt, qemu and
> > kernel are present.
> [...]
> 
> It seems reasonable to me that we could dpkg -l or rpm -qa at the
> end of the job similar to how we currently already do a pip freeze.

Sounds like a good plan to me.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start

2013-11-28 Thread Gopi Krishna B
Hi
I am configuring Havana on Fedora 19 and observing the errors below with
neutron. Please help me resolve this issue.
I copied only a few lines from server.log; in case the full log is
required, let me know.

/etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = neutron.plugins.ml2.drivers.OpenvswitchMechanismDriver
network_vlan_ranges = physnet1:1000:2999
vni_ranges = 5000:6000
vxlan_group = 239.10.10.1


ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver

ERROR stevedore.extension [-] Could not load 'local': (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
 ERROR stevedore.extension [-] (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))

ERROR stevedore.extension [-] Could not load 'vxlan': (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR stevedore.extension [-] (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
TRACE stevedore.extension VersionConflict: (SQLAlchemy 0.8.3
(/usr/lib64/python2.7/site-packages),
Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))

ERROR neutron.common.config [-] Unable to load neutron from configuration
file /etc/neutron/api-paste.ini.
TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by
'app' or 'application' or 'composite' or 'composit' or 'pipeline' or
'filter-app') found in config /etc/neutron/api-paste.ini


 ERROR neutron.service [-] In serve_wsgi()
TRACE neutron.service RuntimeError: Unable to load quantum from
configuration file /etc/neutron/api-paste.ini.

Regards
Gopi Krishna


Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-28 Thread Yair Fried
Thanks for the input. I apologize for the delay.

1. A working patch is here - https://review.openstack.org/#/c/55146. Reviews 
will be much appreciated.
2. The default setting has external_connectivity=False, so the tempest gate
doesn't check this. I wonder if we could somehow set the neutron gate to
configure external access and enable this feature for all tests?
3. Tomoe - "How can we test the internal network connectivity?" -- I'm pinging
the router and DHCP ports from the VM via ssh and a floating IP, since the l2
and l3 agents might reside on different hosts.
4. Jeremy Stanley - "test check for no fewer than three addresses" -- Why?

- Original Message -
From: "Jeremy Stanley" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, November 21, 2013 12:17:52 AM
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for 
external connectivity

On 2013-11-20 14:07:49 -0800 (-0800), Sean Dague wrote:
> On 11/18/2013 02:41 AM, Yair Fried wrote:
> [...]
> > 2. add fields in tempest.conf for
> >  * external connectivity = False/True
> >  * external ip to test against (ie 8.8.8.8)
> 
> +1 for #2. In the gate we'll need to think about what that address
> can / should be. It may be different between different AZs. At this
> point I'd leave the rest of the options off the table until #2 is
> working reliably.
[...]

Having gone down this path in the past, I suggest the test check for
no fewer than three addresses, sending several probes to each, and
be considered successful if at least one gets a response.
-- 
Jeremy Stanley



Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Jiří Stránský

Hi all,

just a few thoughts (subjective opinions) regarding the whole debate:

* I think that a manual approach to picking images for machines would
make TripleO more usable in the beginning. I think it will take a
good deal of time to get our smart solution working with the admin
rather than against him [1], and the possibility of a manual override is a
good safety catch.


E.g. one question that I wonder about - how would our smart flavor-based
approach solve this situation: I have homogeneous nodes on which I want
to deploy Cinder and Swift. Half of those nodes have better connectivity
to the internet than the other half. I want Swift on the ones with
better internet connectivity. How will I ensure such a deployment with
the flavor-based approach? Could we use e.g. host aggregates defined on the
undercloud for this? I think it will take time before our smart solution
can understand such and similar conditions.


* On the other hand, I think relying on Nova to pick hosts feels like the
more TripleO-spirited solution to me. It means using OpenStack to deploy
OpenStack.


So I can't really lean towards one solution or the other. Maybe it's
most important to build *something*, gather some feedback, and tweak what
needs tweaking.



Cheers

Jirka


[1] http://i.technet.microsoft.com/dynimg/IC284957.jpg



Re: [openstack-dev] [heat][horizon]Heat UI related requirements & roadmap

2013-11-28 Thread Zane Bitter

On 27/11/13 23:37, Fox, Kevin M wrote:

Hmm... Yeah. When you tell the heat client the URL to a template file, you could
set a flag telling the heat client it is in a git repo. It could then
automatically look for repo information and set a stack metadata item pointing
back to it.


Or just store the URL.


If you didn't care about taking a performance hit, the heat client could always
try to check whether it was a git repo URL. That may add several extra HTTP
requests though...

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, November 27, 2013 1:04 PM
To: openstack-dev
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements &  
roadmap

Excerpts from Fox, Kevin M's message of 2013-11-27 08:58:16 -0800:

This use case is sort of a provenance case: where did the stack come from, so I
can find out more about it.



This exhibits similar problems to our Copyright header problems. Relying
on authors to maintain their authorship information in two places is
cumbersome and thus the one that is not automated will likely fall out
of sync fairly quickly.


You could put a git commit field in the template itself but then it would be 
hard to keep updated.



Or you could have Heat able to pull from any remote source rather than
just allowing submission of the template directly. It would just be
another column in the stack record. This would allow said support person
to see where it came from by viewing the stack, which solves the use case.
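
A sketch of that idea (illustrative only; the 'template_source' field stands
in for the hypothetical extra column):

import requests


def create_stack_from_url(template_url):
    """Fetch a template from a remote source and keep the source URL
    so it can be stored alongside the stack record for later
    provenance queries."""
    resp = requests.get(template_url)
    resp.raise_for_status()
    return {'template': resp.text, 'template_source': template_url}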



Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-11-28 Thread Matthias Runge
On 11/27/2013 06:46 PM, Alan Pevec wrote:
> 2013/11/27 Sean Dague :
>> The problem is you can't really support both iso8601 was dormant
>> for years, and the revived version isn't compatible with the old
>> version. So supporting both means basically forking iso8601 and
>> maintaining you own version of it monkey patched in your own tree.
> 
> Right, hence glance was added https://review.openstack.org/55998 to 
> unblock the previous gate failure. Issue now is that stable/grizzly
> Tempest uses clients from git trunk, which is not going to work since
> trunk will add more and more incompatible dependencies, even if
> backward compatbility is preserved against the old service APIs!
> 
> Solutions could be that Tempest installs clients into separate venv
> to avoid dependecy conflicts or establish stable/* branches for 
> clients[1] which are created around OpenStack release time.
> 
I'd like to propose switching how we test stable branches:

We should install environments for stable releases through
other methods, such as packages. There are quite a few provisioning
methods out there right now.

The benefit would be that we'd have a very reproducible way to build
identical environments for each run; the cost would be that we'd need
to create a test environment for each project: install everything but
the project under test via packages.

When choosing packages to install, which ones do we want to take? Just a
single source, or one for each (major) distribution, thus multiplying
the effort here?

Matthias




Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-11-28 Thread Flavio Percoco

On 26/11/13 10:57 -0600, Dolph Mathews wrote:


On Tue, Nov 26, 2013 at 2:47 AM, Flavio Percoco  wrote:
   As crazy as it sounds, have you guys considered migrating to
   Nottingham's approach?



It only sounds crazy because I have no idea how to "migrate" an unversioned
endpoint :) Skimming through his proposal, it looks like it might actually be
compatible with ours to include side by side? If so, we could support both for
a couple releases before moving on.


This would be awesome. We'll still have to write a python library for
it but that's something we could do as part of Marconi's client
development and then move it to its own repo - or oslo - once it's
ready.
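
For those who haven't read the draft, a rough sketch of the shape of such a
home document, written here as a Python dict (field names from the json-home
draft; the relation and paths are made up for illustration):

home_document = {
    'resources': {
        # Keys are link relation types; values describe how to reach
        # the resource and which methods it supports.
        'rel/queues': {
            'href-template': '/v1/queues/{queue_name}',
            'href-vars': {'queue_name': 'param/queue_name'},
            'hints': {'allow': ['GET', 'PUT', 'DELETE']},
        },
    },
}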


Does this proposal have much traction outside the OpenStack community? (and how
much traction does it have within the community already?)


It does. I don't have the exact numbers of users but I know for sure
the work there is moving forward. A new version will be released soon,
AFAIU.

Cheers,
FF

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [oslo] rpc concurrency control rfc

2013-11-28 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 07:34:15PM +, Daniel P. Berrange wrote:
> On Wed, Nov 27, 2013 at 06:43:42PM +, Edward Hope-Morley wrote:
> > On 27/11/13 18:20, Daniel P. Berrange wrote:
> > > On Wed, Nov 27, 2013 at 06:10:47PM +, Edward Hope-Morley wrote:
> > >> On 27/11/13 17:43, Daniel P. Berrange wrote:
> > >>> On Wed, Nov 27, 2013 at 05:39:30PM +, Edward Hope-Morley wrote:
> >  On 27/11/13 15:49, Daniel P. Berrange wrote:
> > > On Wed, Nov 27, 2013 at 02:45:22PM +, Edward Hope-Morley wrote:
> > >> Moving this to the ml as requested, would appreciate
> > >> comments/thoughts/feedback.
> > >>
> > >> So, I recently proposed a small patch to the oslo rpc code 
> > >> (initially in
> > >> oslo-incubator then moved to oslo.messaging) which extends the 
> > >> existing
> > >> support for limiting the rpc thread pool so that concurrent requests 
> > >> can
> > >> be limited based on type/method. The blueprint and patch are here:
> > >>
> > >> https://blueprints.launchpad.net/oslo.messaging/+spec/rpc-concurrency-control
> > >>
> > >> The basic idea is that if you have server with limited resources you 
> > >> may
> > >> want restrict operations that would impact those resources e.g. live
> > >> migrations on a specific hypervisor or volume formatting on 
> > >> particular
> > >> volume node. This patch allows you, admittedly in a very crude way, 
> > >> to
> > >> apply a fixed limit to a set of rpc methods. I would like to know
> > >> whether or not people think this is sort of thing would be useful or
> > >> whether it alludes to a more fundamental issue that should be dealt 
> > >> with
> > >> in a different manner.
> > > Based on this description of the problem I have some observations
> > >
> > >  - I/O load from the guest OS itself is just as important to consider
> > >as I/O load from management operations Nova does for a guest. Both
> > >have the capability to impose denial-of-service on a host. IIUC, 
> > > the
> > >flavour specs have the ability to express resource constraints for
> > >the virtual machines to prevent a guest OS initiated DOS-attack
> > >
> > >  - I/O load from live migration is attributable to the running
> > >virtual machine. As such I'd expect that any resource controls
> > >associated with the guest (from the flavour specs) should be
> > >applied to control the load from live migration.
> > >
> > >Unfortunately life isn't quite this simple with KVM/libvirt
> > >currently. For networking we've associated each virtual TAP
> > >device with traffic shaping filters. For migration you have
> > >to set a bandwidth cap explicitly via the API. For network
> > >based storage backends, you don't directly control network
> > >usage, but instead I/O operations/bytes. Ultimately though
> > >there should be a way to enforce limits on anything KVM does,
> > >similarly I expect other hypervisors can do the same
> > >
> > >  - I/O load from operations that Nova does on behalf of a guest
> > >that may be running, or may yet to be launched. These are not
> > >directly known to the hypervisor, so existing resource limits
> > >won't apply. Nova however should have some capability for
> > >applying resource limits to I/O intensive things it does and
> > >somehow associate them with the flavour limits  or some global
> > >per user cap perhaps.
> > >
> > >> Thoughts?
> > > Overall I think that trying to apply caps on the number of API calls
> > > that can be made is not really a credible way to avoid users 
> > > inflicting
> > > DOS attack on the host OS. Not least because it does nothing to 
> > > control
> > > what a guest OS itself may do. If you do caps based on num of APIs 
> > > calls
> > > in a time period, you end up having to do an extremely pessimistic
> > > calculation - basically have to consider the worst case for any single
> > > API call, even if most don't hit the worst case. This is going to hurt
> > > scalability of the system as a whole IMHO.
> > >
> > > Regards,
> > > Daniel
> >  Daniel, thanks for this, these are all valid points and essentially tie
> >  with the fundamental issue of dealing with DOS attacks but for this bp 
> >  I
> >  actually want to stay away from this area i.e. this is not intended to
> >  solve any tenant-based attack issues in the rpc layer (although that
> >  definitely warrants a discussion e.g. how do we stop a single tenant
> >  from consuming the entire thread pool with requests) but rather I'm
> >  thinking more from a QOS perspective i.e. to allow an admin to account
> >  for a resource bias e.g. slow RAID controller, on a given node (not
>
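
For readers skimming the thread, a minimal sketch of the kind of per-method
cap being proposed (illustrative only, not the actual oslo patch):

import threading

# Hypothetical operator-configured limits, e.g. at most one live
# migration and two volume-format operations in flight at a time.
METHOD_LIMITS = {'live_migration': 1, 'format_volume': 2}
_SEMAPHORES = dict((name, threading.Semaphore(limit))
                   for name, limit in METHOD_LIMITS.items())


def dispatch(method_name, handler, *args, **kwargs):
    """Run an rpc handler, blocking first if its method has reached
    its configured concurrency limit."""
    semaphore = _SEMAPHORES.get(method_name)
    if semaphore is None:
        return handler(*args, **kwargs)  # no cap configured
    with semaphore:
        return handler(*args, **kwargs)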

Re: [openstack-dev] FreeBSD hypervisor (bhyve) driver

2013-11-28 Thread Daniel P. Berrange
On Wed, Nov 27, 2013 at 05:29:35PM -0500, Sean Dague wrote:
> On 11/27/2013 01:32 PM, Rafał Jaworowski wrote:
> > On Mon, Nov 25, 2013 at 3:50 PM, Daniel P. Berrange  
> > wrote:
> >> On Fri, Nov 22, 2013 at 10:46:19AM -0500, Russell Bryant wrote:
> >>> On 11/22/2013 10:43 AM, Rafał Jaworowski wrote:
>  Russell,
>  First, thank you for the whiteboard input regarding the blueprint for
>  FreeBSD hypervisor nova driver:
>  https://blueprints.launchpad.net/nova/+spec/freebsd-compute-node
> 
>  We were considering libvirt support for bhyve hypervisor as well, only
>  wouldn't want to do this as the first approach for FreeBSD+OpenStack
>  integration. We'd rather bring bhyve bindings for libvirt later as
>  another integration option.
> 
>  For FreeBSD host support a native hypervisor driver is important and
>  desired long-term and we would like to have it anyways. Among things
>  to consider are the following:
>  - libvirt package is additional (non-OpenStack), external dependency
>  (maintained in the 'ports' collection, not included in base system),
>  while native API (via libvmmapi.so library) is integral part of the
>  base system.
>  - libvirt license is LGPL, which might be an important aspect for some 
>  users.
> >>>
> >>> That's perfectly fine if you want to go that route as a first step.
> >>> However, that doesn't mean it's appropriate for merging into Nova.
> >>> Unless there are strong technical justifications for why this approach
> >>> should be taken, I would probably turn down this driver until you were
> >>> able to go the libvirt route.
> >>
> >> The idea of a FreeBSD bhyve driver for libvirt has been mentioned
> >> a few times. We've already got a FreeBSD port of libvirt being
> >> actively maintained to support QEMU (and possibly Xen, not 100% sure
> >> on that one), and we'd be more than happy to see further contributions
> >> such as a bhyve driver.
> > 
> > As mentioned, in general we like the idea of libvirt bhyve driver, but
> > sometimes it may not fit the bill (licensing, additional external
> > dependency to keep track of) and hence we consider the native option.
> > 
> >> I am of course biased, as libvirt project maintainer, but I do agree
> >> that supporting bhyve via libvirt would make sense, since it opens up
> >> opportunities beyond just OpenStack. There are a bunch of applications
> >> built on libvirt that could be used to manage bhyve, and a fair few
> >> applications which have plugins using libvirt
> > 
> > Could you perhaps give some pointers on the libvirt development
> > process, how to contribute changes and so on?
> > 
> > Another quick question: for cases like this, how does Nova manage
> > syncing with the required libvirt codebase when a new hypervisor
> > driver is added or for similar major updates happen?
> > 
> >> Taking on maint work for a new OpenStack driver is a non-trivial amount
> >> of work in itself. If the burden for OpenStack maintainers can be reduced
> >> by, pushing work out to / relying on support from, libvirt, that makes
> >> sense from OpenStack/Nova's POV.
> > 
> > The maintenance aspect and testing coverage are valid points, on the
> > other hand future changes would have to go a longer way for us: first
> > upstream to libvirt, then downstream to the FreeBSD ports collection
> > (+ perhaps some OpenStack code bits as well), which makes the process
> > more complicated.
> 
> I think you also need to weigh that against the CI requirements for
> landing a driver in tree for nova, which is probably a dedicated rack of
> hardware running the zuul infrastructure reporting back success / fail
> results on every proposed nova patch. Made more complicated if your
> hypervisor doesn't nest (I have no idea if this one does).

NB, technically we should have separate CI running for each hypervisor
that libvirt is able to talk to, so there'd likely want to be dedicated
CI infrastructure for the libvirt+bhyve combination regardless, perhaps
it would need less overall though.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Jaromir Coufal

On 2013/27/11 16:37, James Slagle wrote:

On Wed, Nov 27, 2013 at 8:39 AM, Jaromir Coufal  wrote:


V0: basic slick installer - flexibility and control first
- enable user to auto-discover (or manual register) nodes
- let user decide, which node is going to be controller, which is going to
be compute or storage
- associate images with these nodes
- deploy


I think you've made some good points about the user experience helping drive
the design of what Tuskar is targeting. I think the conversation around how
to design letting the user pick what to deploy where should continue. I
wonder, though, would it be possible to not have that in a V0?

Basically, make your V0 above even smaller (eliminating the middle 2
bullets), and just let nova figure it out, the same as what happens now when
we run "heat stack-create " from the CLI.

I see 2 possible reasons for trying this:
- It gets us to something people can try even sooner
- It may turn out we want this option in the long run ... a "figure it
  all out for me" type of approach, so it wouldn't be wasted effort.

Hey James,

well, as long as we end up with the possibility of having control over it in
the Icehouse release, I am fine with that.

(I tried to explain the 'control' more closely in my response to Robert's e-mail.)

As for the milestone approach:
I just think that the more basic and traditional way for a user is to do
things manually, and that's where I think we can start. That's the user's
point of view.
From the implementation point of view, there is already some magic in
OpenStack, so it might be easier to start with that existing magic, add
manual support then, and then enhance the magic into a much smarter
approach.


In the end, most of the audience will see the result in the Icehouse
release, so whichever way we start - whatever works. I just want
to make sure that we are going to deliver a usable solution.


-- Jarda


Re: [openstack-dev] [ceilometer][horizon] The meaning of "Network Duration"

2013-11-28 Thread Ladislav Smola

Hello Daisy,

the tables were deleted from Horizon because of that confusion:
https://bugs.launchpad.net/horizon/+bug/1249279
We are going to clearly document each ceilometer meter first. Then this
information will appear in Horizon again.


E.g. the duration as stated in the doc
http://docs.openstack.org/developer/ceilometer/measurements.html
is kind of misleading, as it actually means presence. So the samples of
these metrics contain 1 or 0, depending
on whether the network was up or down at that time. The actual duration
must be inferred from these samples.
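
In other words, assuming a ten-minute polling interval (an assumption; the
real interval is whatever the ceilometer pipeline configures), the inference
is roughly:

def duration_from_presence(samples, interval_seconds=600):
    """Sum presence samples, given as (timestamp, 0-or-1) pairs, into
    an approximate total duration in seconds."""
    return sum(value for _, value in samples) * interval_seconds


# Three polling periods up, one down -> about 30 minutes of "duration".
print(duration_from_presence([(1, 1), (2, 1), (3, 0), (4, 1)]))  # 1800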


This should also be backported to H.

Kind regards.
Ladislav


On 11/27/2013 10:16 AM, Ying Chun Guo wrote:


Hello,

While translating the Horizon web UI, I'm a little confused by "Network
Duration", "Port Duration", and "Router Duration" in the Resources Usage
statistics table.


What does "Duration" mean here?
If I translate it literally as "Duration", my customers
cannot understand it.

Does it equal "usage time"?

Regards
Ying Chun Guo (Daisy)




Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Jaromir Coufal

Hi Mark,

thanks for your insight, I mostly agree. Just a few points below.

On 2013/27/11 21:54, Mark McLoughlin wrote:

Hi Jarda,

...

Yes, I buy this. And I think it's the point worth dwelling on.

It would be quite a bit of work to substantiate the point with hard data
- e.g. doing user testing of mockups with and without placement control
- so we have to at least try to build some consensus without that.
I agree here. It will be a lot of work. I'd love to have that, but
creating distinct designs, finding users for real testing, and testing
with them would consume a big amount of time, and in this agile approach we
can't afford it.


I believe that we are not very far apart in our goals and that we can reach
consensus without that.


There was a small confusion which I tried to clarify in my answer to Rob's
response.



We could do some work on a more detailed description of the persona and
their basic goals. This would clear up whether we're designing for the
case where one persona owns the undercloud and there's another overcloud
operator persona.
Yes, we need to have this written down. Or at least reach consensus, if
we can quickly get there, and then document it. Whatever works and
doesn't block us.



We could also look at other tools targeted to similar use cases and see
what they do.
I looked, and they all do it in a very manual way (or at least those which I
have seen, from Mirantis, Huawei, etc.) - and there is some reason for
this. As I wrote in my answer to Robert, we can do much more, we can be
smart, but we can't assume that we are the smartest.



But yeah - my instinct is that all of that would show that we'd be
fighting an uphill battle to persuade our users that this type of magic
is what they want.

That's exactly my point. Thanks for saying that.

We want to help them and provide a ready-to-deploy solution, but they
need to have the feeling that they have things under control (maybe
just checking the solution and/or being allowed to change it).



...

=== Implementation ===

Above mentioned approach shouldn't lead to reimplementing scheduler. We
can still use nova-scheduler, but we can take advantage of extra params
(like unique identifier), so that we specify more concretely what goes
where.

It's hard to see how what you describe doesn't ultimately mean we
completely by pass the Nova scheduler. Yes, if you request placement on
a specific node, it does still go through the scheduler ... but it
doesn't do any actual scheduling.

Maybe we should separate the discussion/design around control nodes and
resource (i.e. compute/storage) nodes. Mostly because there should be a
large ratio of the latter to the former, so you'd expect it to be less
likely for such fine grained control over resource nodes to be useful.

e.g. maybe adding more compute nodes doesn't involve the user doing any
placement, and we just let the nova scheduler choose from the available
nodes which are suitable for compute workloads.
Yes, controller nodes will need better treatment, but I think not
in our first steps. I believe that for now we are fine with going with a
generic controller node which runs all the controller services.


I think what would be great to have is to let nova-scheduler do its
job (a dry run), show the distribution, and just confirm (or make some
changes there).


-- Jarda


Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Ladislav Smola

Hello,

just a few notes from me:

https://etherpad.openstack.org/p/tripleo-feature-map sounds like a great
idea; we should go through the items one by one, maybe at a meeting.
We should agree on what is doable for Icehouse without violating the
OpenStack way in some very ugly manner. So do we want to be OpenStack on
OpenStack, or almost OpenStack on OpenStack? Or what is the goal here?

So let's take a simple example: a flat network, 2 racks (32 nodes), with 2
controller nodes, 2 neutron nodes, 14 nova compute, and 14 storage.


I. A manual way using Heat and the scheduler could be assigning every group
of nodes to a special flavor by hand. Then the nova scheduler will take care
of it.
1. How hard will it be to implement 'assigning specific nodes to a
flavor'? (probably adding a condition on the MAC address?)
Or do you have some other idea of how to do this in an almost clean
way, without reimplementing the nova scheduler? (though this is probably
messing with the scheduler)
2. How will this be implementable in the UI? Just assigning nodes to flavors
and uploading a Heat template?


II. Having homogeneous hardware, all nodes will be one flavor and then the
nova scheduler will decide where to put what, when you tell heat e.g. "I
want to spawn 2 controller images".
1. How hard is it to set policies like "we want to spread those nodes
over all racks"?
2. How will this be implementable in the UI? It is basically building a
complex Heat template, right? So just uploading a Heat template?


III. Having more flavors
1. We will be able to say in Heat something like "I want a Nova compute
node on compute_flavor (amazon c1, c3) with high priority, or on
all_purpose_flavor (amazon m1) with normal priority". How hard is that?

2. How will this be implementable in the UI? Just uploading a Heat template?

IV. The TripleO way


1. From the OOO name I infer that we want to use OpenStack, which means
using Heat, the Nova scheduler, etc.
From my point of view, having a Heat template for deploying e.g. a
WordPress installation seems the same to me as having a Heat template
to deploy OpenStack; it's just much more complex. Is this a valid
assumption? If you think it's not, please explain why.



"Radical idea : we could ask (e.g. on -operators) for a few potential 
users who'd be willing to let us interview them."

Yes please!!!

Talking to jcoufal: being able to edit the Heat template in the UI and being
able to assign baremetal nodes to flavors (later connected to a template
catalog) could be all we need. Also, visualizing later
what will happen when you actually 'stack create' the template, so we
don't go in blindly, would be very much needed.


Kind regards,
Ladislav


On 11/28/2013 06:41 AM, Robert Collins wrote:

Hey, I realise I've done a sort of point-by-point thing below - sorry.
Let me say that I'm glad you're focused on what will help users, and
their needs - I am too. Hopefully we can figure out why we have
different opinions about what things are key, and/or how we can get
data to better understand our potential users.


On 28 November 2013 02:39, Jaromir Coufal  wrote:


Important point here is, that we agree on starting with very basics - grow
then. Which is great.

The whole deployment workflow (not just UI) is all about user experience
which is built on top of TripleO's approach. Here I see two important
factors:
- There are users who are having some needs and expectations.

Certainly. Do we have Personas for those people? (And have we done any
validation of them?)


- There is underlying concept of TripleO, which we are using for
implementing features which are satisfying those needs.

mmm, so the technical aspect of TripleO is about setting up a virtuous
circle: where improvements in deploying cluster software via OpenStack
makes deploying OpenStack better, and those of us working on deploying
OpenStack will make deploying cluster software via OpenStack better in
general, as part of solving 'deploying OpenStack' in a nice way.


We are circling around and trying to approach the problem from the wrong end -
which is the implementation point of view (how to avoid our own scheduling).

Let's try to get out of the box and start by thinking about our audience
first - what they expect, what they need. Then we go back, put our
implementation thinking hat on, and find out how we are going to re-use
OpenStack components to achieve our goals. In the end we have a detailed plan.

Certainly, +1.


=== Users ===

I would like to start with our targeted audience first - without milestones,
without implementation details.

I think here is the main point where I disagree and which leads to different
approaches. I don't think that a user of TripleO cares only about deploying
infrastructure without any knowledge of where things go. This is the overcloud
user's approach - 'I want a VM and I don't care where it runs'. Those are
self-service users / cloud users. I know we are OpenStack on OpenStack, but
we shouldn't go so far as to expect the same behavior from undercloud users.
I can tell you various examples of why the operator will care.

Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Jaromir Coufal


On 2013/28/11 06:41, Robert Collins wrote:

Certainly. Do we have Personas for those people? (And have we done any
validation of them?)
We have a short paragraph for each, but they are not verified by any survey,
so we don't have a very solid basis in this area right now, and I believe we
are all making assumptions at the moment.



This may be where we disagree indeed :). Wearing my sysadmin hat ( a
little dusty, but never really goes away :P) - I can tell you I spent
a lot of time worrying about what went on what machine. But it was
never actually what I was paid to do.

What I was paid to do was to deliver infrastructure and services to
the business. Everything that we could automate, that we could
describe with policy and still get robust, reliable results - we did.
It's how one runs many hundred machines with an ops team of 2.

Planning around failure domains for example, is tedious work; it's
needed at a purchasing level - you need to decide if you're buying
three datacentres or one datacentre with internal redundancy, but once
thats decided the actual mechanics of ensure that each HA service is
spread across the (three datacentres) or (three separate zones in the
one DC) is not interesting. So - I'm sure that many sysadmins do
manually assign work to machines to ensure a good result from
performance or HA concerns, but thats out of necessity, not desire.
Well, I think there is one small misunderstanding. I've never said that
the manual way should be the primary workflow for us. I agree that we should
lean toward as much automation and smartness as possible. But at the
same time, I am adding that we need a manual fallback for the user to change
that smart decision.


The primary way would be to let TripleO decide where things go. I think
we agree here.


But I, as a sysadmin, want to see the distribution of things before I
deploy. And if there is some failure in the automation logic, I need to
have the possibility to change that - not from scratch, but by changing the
suggested distribution. There should always be a way to do that manually.
Let's imagine that TripleO, by some mistake or intentionally,
distributes nodes across my datacenter wrongly (wrong for me, not
necessarily for somebody else). What would I do? Would I let TripleO
deploy it anyway? No. I would not use TripleO. But if there is something
I need to change and I have a way to do that, I will stay with
TripleO, because it allows me to satisfy all I need.


We can be smart, but we can't be the smartest and see all the reasons of all
users.



Why does that layout make you happy? What is it about that setup where
things will work better for you? Note that in the absence of a
sophisticated scheduler you'll have some volumes with redundancy of 3
end up all in one rack: you won't get rack-can-fail safety on the
delivered cloud workloads (I mention this as one attempt to understand
why knowing there is a control node / 3 storage /rest compute in each
rack makes you happy).
It doesn't have to make me happy, but somebody else might have strong
reasoning for that (or for any other setup which we didn't cover). We don't
have to know the reason, but why can't we allow them to do this?


One more time, I want to stress this - I am not fighting for the absence
of a sophisticated scheduler, I am fighting for allowing the user to control
things if they want or need to.



I think having that degree of control is failure. Our CloudOS team has
considerable experience now in deploying clouds using a high-touch
system like you describe - and they are utterly convinced that it
doesn't scale. Even at 20 nodes it is super tedious, and beyond that
it's ridiculous.
Right. And are they convinced that an automated tool will do the best job
for them? Do they trust it so strongly that they would deploy
their whole datacenter without checking that the distribution is correct?
Would they say - "OK, I said I want 50 compute, 10 block storage, 3 control.
As long as it works, I don't care; be smart, do it for me."?


It all depends on the GUI design. If we design it well enough, so that
we allow the user to do quick bulk actions, even manual distribution can be
easy. Even for 100 nodes... or more.

(But I don't suggest we do it all manually.)


Flexibilty comes with a cost. Right now we have a large audience
interested in what we have, but we're delivering two separate things:
we have a functional sysadminny interface with command line scripts
and heat templates - , and we have a GUI where we can offer a better
interface which the tuskar folk are building up. I agree that
homogeneous hardware isn't a viable long term constraint. But if we
insist on fixing that issue first, we sacrifice our ability to learn
about the usefulness of a simple, straight forward interface. We'll be
doing a bunch of work - regardless of implementation - to deal with
heterogeneity, when we could be bringing Swift and Cinder up to
production readiness - which IMO will get many more folk onboard for
adoption.
I agree that t

Re: [openstack-dev] [Nova] [Infra] Support for PCI Passthrough

2013-11-28 Thread yongli he

On 2013-11-27 23:43, Jeremy Stanley wrote:

On 2013-11-27 11:18:46 +0800 (+0800), yongli he wrote:
[...]

If you post a -1, you should post the testing log somewhere for people
to debug it. So can third-party testing post its testing logs to
the infra log server?

Not at the moment--the "infra log server" is just an Apache
name-based virtual host on the static.openstack.org VM using
mod_autoindex to serve log files out of the DocumentRoot (plus a
custom filter CGI Sean Dague wrote recently), and our Jenkins has a
shell account it can use to SCP files onto it. We can't really scale
that access control particularly safely to accommodate third
parties, nor do we have an unlimited amount of space on that machine
(we currently only preserve 6 months of test logs, and even
compressing the limit on how much Cinder block storage we can attach
to the VM is coming into sight).

There has been recent discussion about designing a more scalable
build/test artifact publication system backed by Swift object
storage, and suggestion that once it's working we might consider
support for handing out authorization to third-party-specific
containers for the purpose you describe. Until we have developed
something like that, however, you'll need to provide your own place

This needs to be approved by my supervisor or IT; I cannot do anything about it.
Has anyone heard of any free space that could host such a thing?

Yongli He

to publish your logs (something like we use--bog standard Apache on
a public VM--should work fine I'd think?).





Re: [openstack-dev] Fw: [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-11-28 Thread Édouard Thuleau
A temporary workaround while waiting for a fix for bug:
https://bugs.launchpad.net/nova/+bug/1112912
(https://review.openstack.org/#/c/21946/)

diff --git a/nova/virt/libvirt/vif.py b/nova/virt/libvirt/vif.py
index 5bf0dba..5fd041c 100644
--- a/nova/virt/libvirt/vif.py
+++ b/nova/virt/libvirt/vif.py
@@ -159,7 +159,7 @@ class LibvirtGenericVIFDriver(LibvirtBaseVIFDriver):
 # has already applied firewall filtering itself.
 if CONF.firewall_driver != "nova.virt.firewall.NoopFirewallDriver":
 return True
-return False
+return True

 def get_config_bridge(self, instance, vif, image_meta, inst_type):
 """Get VIF configurations for bridge type."""
@@ -173,8 +173,8 @@ class LibvirtGenericVIFDriver(LibvirtBaseVIFDriver):

 mac_id = vif['address'].replace(':', '')
 name = "nova-instance-" + instance['name'] + "-" + mac_id
-if self.get_firewall_required():
-conf.filtername = name
 designer.set_vif_bandwidth_config(conf, inst_type)

 return conf

On Tue, Nov 26, 2013 at 10:42 PM, Collins, Sean (Contractor)
 wrote:
> On Tue, Nov 26, 2013 at 06:07:07PM +0800, Da Zhao Y Yu wrote:
>> Sean, how is your progress? I saw your code change's Jenkins status is
>> still failed.
>
> Hi,
>
> I've been busy tracking down the IPv6 issue in our lab environment -
> we were using the Hybrid OVS driver in our Nova.conf and that was
> breaking IPv6 - so we changed over to the Generic VIF driver,
> only to hit the bug https://bugs.launchpad.net/devstack/+bug/1252620
> where the Security Group API doesn't work.
>
> Which leaves you with the following choices:
>
> A) Working V6 with the Generic VIF driver, but no Security Groups
> B) Working Security Groups but no V6, with the hybrid VIF driver.
>
> I'm going to try and see if I can make some of the patches against the
> hybrid driver work and get V6 working.
>
> --
> Sean M. Collins