Re: [openstack-dev] pip requirements externally host (evil evil stab stab stab)

2013-07-20 Thread Salvatore Orlando
I reckon the netifaces package is only used in Neutron's Ryu plugin.
At first glance, it should be possible to replace its current usage with
the iplib module, which has been developed within neutron itself.

Unless we hear otherwise from contributors to the Ryu plugin, it is my
opinion that we should move towards replacing netifaces.

Salvatore


On 19 July 2013 20:04, Monty Taylor mord...@inaugust.com wrote:

 Hey guys!

 PyPI is moving towards the world of getting people to stop hosting stuff
 via external links. It's been bad for us in the past and is one of the
 reasons for the existence of our mirror. pip 1.4 has an option to
 disallow following external links, and in 1.5 it's going to be the
 default behavior.

 Looking forward, we have 5 pip packages that host their stuff
 externally. If we have any pull with their authors, we should get them
 to actually upload stuff to pypi. If we don't, we should strongly
 consider our use of these packages. As soon as pip 1.4 comes out, I
 would like, moving forward, to restrict the addition of NEW requirements
 that do not host on pypi. (all 5 of these host insecurely as well, fwiw)

 The culprits are:

 dnspython, lockfile, netifaces, psutil, pysendfile
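For anyone wanting to audit their own requirements files against that list, a quick check along these lines is possible (the function and the hard-coded package set below are illustrative sketches, not project tooling):

```python
import re

# The five externally hosted packages named in this thread.
EXTERNALLY_HOSTED = {"dnspython", "lockfile", "netifaces", "psutil", "pysendfile"}

def externally_hosted(requirements_lines):
    """Return requirement names (lowercased) that appear on the list."""
    flagged = []
    for line in requirements_lines:
        line = line.split("#", 1)[0].strip()  # drop comments/blank lines
        if not line:
            continue
        # The requirement name ends at the first version/extras specifier.
        name = re.split(r"[<>=!\[;]", line, 1)[0].strip().lower()
        if name in EXTERNALLY_HOSTED:
            flagged.append(name)
    return flagged
```

For example, `externally_hosted(["netifaces>=0.5", "requests>=1.1", "psutil"])` flags `netifaces` and `psutil`.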

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] pip requirements externally host (evil evil stab stab stab)

2013-07-20 Thread Alex Gaynor
netifaces is also used in swift for the whataremyips function. (Personally
I'd love to replace that as it doesn't work on PyPy, but that's a rather
different conversation :))
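For the curious, a netifaces-free fallback can be sketched with only the stdlib. This is an assumption about how such a replacement might look, not Swift's actual whataremyips implementation, and it finds addresses the host would route from rather than enumerating every configured interface:

```python
import socket

def whataremyips():
    """Best-effort list of local IP addresses using only the stdlib."""
    ips = set()
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP connect() sends no packets; the kernel just selects the
        # source address it would route from for this destination
        # (192.0.2.1 is a TEST-NET address and is never contacted).
        s.connect(("192.0.2.1", 53))
        ips.add(s.getsockname()[0])
    except OSError:
        pass  # no route configured
    finally:
        s.close()
    # Hostname resolution may surface additional configured addresses.
    try:
        for info in socket.getaddrinfo(socket.gethostname(), None):
            ips.add(info[4][0])
    except socket.gaierror:
        pass
    return sorted(ips)
```

Being pure Python, a sketch like this would also sidestep the PyPy problem mentioned above.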

Alex


On Sat, Jul 20, 2013 at 9:10 AM, Salvatore Orlando sorla...@nicira.com wrote:

 I reckon the netifaces package is only used in Neutron's Ryu plugin.
 At first glance, it should be possible to replace its current usage with
 the iplib module, which has been developed within neutron itself.

 Unless we hear otherwise from contributors to the Ryu plugin, it is my
 opinion that we should move towards replacing netifaces.

 Salvatore


 On 19 July 2013 20:04, Monty Taylor mord...@inaugust.com wrote:

 Hey guys!

 PyPI is moving towards the world of getting people to stop hosting stuff
 via external links. It's been bad for us in the past and is one of the
 reasons for the existence of our mirror. pip 1.4 has an option to
 disallow following external links, and in 1.5 it's going to be the
 default behavior.

 Looking forward, we have 5 pip packages that host their stuff
 externally. If we have any pull with their authors, we should get them
 to actually upload stuff to pypi. If we don't, we should strongly
 consider our use of these packages. As soon as pip 1.4 comes out, I
 would like, moving forward, to restrict the addition of NEW requirements
 that do not host on pypi. (all 5 of these host insecurely as well, fwiw)

 The culprits are:

 dnspython, lockfile, netifaces, psutil, pysendfile








-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084


Re: [openstack-dev] [Neutron] lbaas installation guide

2013-07-20 Thread Eugene Nikanorov
Hi Qing,

The guide on the wiki is a bit outdated and doesn't reflect the recent project
renaming from quantum to neutron.
Currently lbaas can be configured via devstack; the only thing that needs
to be done is to add the line enable_service q-lbaas to localrc.
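For reference, the localrc change described above is a one-line config fragment (any other settings shown would be whatever your devstack environment already uses):

```shell
# In devstack's localrc: enable the Neutron LBaaS agent service.
enable_service q-lbaas
```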

Feel free to ask any further questions.

Thanks,
Eugene.



On Sat, Jul 20, 2013 at 11:44 PM, Qing He qing...@radisys.com wrote:

  Thanks, Anne. That link does not work. Furthermore, do we have an LBaaS
 for the compute node (nova)?


 From: Anne Gentle [mailto:annegen...@justwriteclick.com]
 Sent: Saturday, July 20, 2013 5:41 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Neutron] lbaas installation guide

 Try this: https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun. I have
 no idea how accurate it is, but at least it was updated in April this year.

 On Fri, Jul 19, 2013 at 9:26 PM, Qing He qing...@radisys.com wrote:

 By the way, I’m wondering if lbaas has a separate doc somewhere else?

 From: Anne Gentle [mailto:a...@openstack.org]
 Sent: Friday, July 19, 2013 6:33 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Neutron] lbaas installation guide

 Thanks for bringing it to the attention of the list -- I've logged this
 doc bug: https://bugs.launchpad.net/openstack-manuals/+bug/1203230. Hopefully a
 Neutron team member can pick it up and investigate.

 Anne

 On Fri, Jul 19, 2013 at 7:35 PM, Qing He qing...@radisys.com wrote:

 In the network installation guide
 (http://docs.openstack.org/grizzly/openstack-network/admin/content/install_ubuntu.html)
 there is a sentence “quantum-lbaas-agent, etc (see below for more
 information about individual services agents).” in the plugin installation
 section. However, lbaas is never mentioned again after that in the doc.





 --
 Anne Gentle
 annegen...@justwriteclick.com 





Re: [openstack-dev] Cinder Driver Base Requirements

2013-07-20 Thread Mike Perez
Absolutely. I have some changes I need to make to the driver docs anyway,
so I created a bp.

https://blueprints.launchpad.net/openstack-manuals/+spec/cinder-driver-base-features


-Mike Perez!


On Fri, Jul 19, 2013 at 10:54 AM, Anne Gentle annegen...@justwriteclick.com
 wrote:

 Great idea, Mike. Should we have a section that describes the minimum docs
 for a driver?


 On Thu, Jul 18, 2013 at 12:20 AM, thingee thin...@gmail.com wrote:

 To avoid having a grid of which features are available in which drivers
 and which releases, the Cinder team met and agreed on 2013-04-24 that
 we would request all current and new drivers to fulfill a list of minimum
 required features [1] in order to be included in new releases.

 There have been emails sent to the maintainers of drivers that are
 missing features in the minimum feature requirement list.

 If there are questions, maintainers can reply back to my email and as
 always reach out to the team on #openstack-cinder.

 Thanks,
 Mike Perez

 [1] - https://wiki.openstack.org/wiki/Cinder#Minimum_Driver_Features





 --
 Anne Gentle
 annegen...@justwriteclick.com





Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal metrics collector for scheduler (was: New DB column or new DB table?)

2013-07-20 Thread Doug Hellmann
On Fri, Jul 19, 2013 at 6:56 AM, Sean Dague s...@dague.net wrote:

 On 07/18/2013 10:12 PM, Lu, Lianhao wrote:
 snip

 Using ceilometer as the source of those metrics was discussed in the
 nova-scheduler subgroup meeting (see #topic extending data in host
 state in the following link):
 http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-04-30-15.04.log.html

 In that meeting, all agreed that ceilometer would be a great source of
 metrics for the scheduler, but many of them don't want to make
 ceilometer a mandatory dependency for the nova scheduler.

 Besides, currently ceilometer doesn't have host metrics, like the
 cpu/network/cache utilization data of the compute node host, which
 will affect the scheduling decision. What ceilometer has currently
 is the VM metrics, like cpu/network utilization of each VM instance.


 How hard would that be to add, vs. duplicating an efficient collector
 framework in Nova?


Creating a new plugin for ceilometer's compute agent is straightforward.
The tricky bit is usually collecting the data you want in the first place,
and that won't be any more or less complicated by doing it in ceilometer or
nova.

Here's the base class:
https://github.com/openstack/ceilometer/blob/master/ceilometer/compute/plugin.py
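To give a sense of the shape of such a plugin, here is a self-contained sketch. In a real plugin you would subclass the pollster base class linked above and yield ceilometer counter objects; the stand-in `Counter` type and the `get_counters`-style method below mimic that era's API but are local assumptions, so treat the names as illustrative:

```python
from collections import namedtuple

# Stand-in for ceilometer's counter type, defined locally so the
# sketch runs without ceilometer installed.
Counter = namedtuple("Counter", "name type unit volume resource_id")

class HostCPUUtilPollster:
    """Hypothetical pollster emitting a host-level cpu_util gauge."""

    def get_counters(self, manager, host):
        # A real implementation would read /proc/stat or ask libvirt
        # here; this sketch just reports a value handed in by the
        # caller to show the counter-yielding shape of a pollster.
        yield Counter(
            name="host.cpu_util",
            type="gauge",
            unit="%",
            volume=host["cpu_util"],
            resource_id=host["hostname"],
        )
```

The point is that the plugin itself is thin; as noted above, gathering the underlying data is the hard part regardless of where the plugin lives.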

Doug



  After the nova compute node collects the host metrics, those metrics
 could also be fed into the ceilometer framework (e.g., through a ceilometer
 listener) for further processing, like alarming, etc.

 -Lianhao


 I think not mandatory for nova scheduler means different things to
 different folks. My assumption is that it means that without ceilometer you
 just don't have utilization metrics, and are instead going on static info.

 This still seems like duplication of function in Nova that could be better
 handled in a different core project. It really feels like, as OpenStack,
 we've decided that Ceilometer is our metrics message bus, and we should
 really push metrics there whenever we can.

 Ceilometer is an integrated project for Havana, so the argument that
 someone doesn't want to run it to get an enhancement to Nova doesn't hold a
 lot of weight in my mind.



 -Sean

 --
 Sean Dague
 http://dague.net




Re: [openstack-dev] Moving task flow to conductor - concern about scale

2013-07-20 Thread Joshua Harlow
Looking at the conductor code, it still, to me, provides a low-level database API
that succumbs to the same races as the old db access did. Get calls followed
by some response, followed by some python code, followed by some rpc update,
followed by more code, is still susceptible to consistency & fragility issues.

The API provided is more data oriented than action oriented. I would argue
that data oriented leads to lots of consistency issues with multiple
conductors. Action/task oriented, if that is ever accomplished, allows the
conductor to lock resources that are being manipulated so that another
conductor cannot alter the same resource at the same time.
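To make the contrast concrete, here is a toy, single-process sketch of what an action-oriented entry point could look like, with per-resource locking. The class, method names, and locking scheme are illustrative assumptions, not nova's conductor code (a real multi-conductor deployment would need distributed locks, not threading ones):

```python
import threading
from collections import defaultdict

class ActionConductor:
    """Sketch: each action runs under a per-resource lock, so two
    actions on the same instance cannot interleave the way separate
    get/update database calls can."""

    def __init__(self):
        self._locks = defaultdict(threading.Lock)  # one lock per uuid
        self.instances = {}

    def build_instance(self, uuid):
        # The whole action holds the resource lock, so a concurrent
        # delete_instance() cannot slip in between a "get" and the
        # following "update".
        with self._locks[uuid]:
            if self.instances.get(uuid) == "deleting":
                raise RuntimeError("instance deleted while building")
            self.instances[uuid] = "active"

    def delete_instance(self, uuid):
        with self._locks[uuid]:
            self.instances[uuid] = "deleting"
```

The delete-while-building conflict then surfaces in exactly one place, the guarded check inside the action, rather than as scattered *not found* handling.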

Nova currently has a lot of devoted and hard-to-follow logic for when resources
are simultaneously manipulated (deleted while building, for example). Just look
for *not found* exceptions being thrown in the conductor from *get/*update
function calls and check where each exception is handled (are all of them? are
all resources cleaned up??). These seem like examples of an API that is too low
level and wouldn't be exposed in an action/task-oriented API. It appears that
nova is trying to handle all of these exists/not-already-exists (or
similar consistency violation) cases correctly, which is good, but having said
logic scattered sure doesn't inspire confidence that it is doing the
right thing under all scenarios. Does that not worry anyone else??

IMHO adding task logic in the conductor on top of the already hard-to-follow
logic for these scenarios worries me personally. That's why I previously
thought (and others seem to think) that task logic, correct locking, and such
should be located in a service that can devote its code to just doing said
tasks reliably. Honestly, said code will be much, much more complex than a
database-rpc access layer (especially when the races and simultaneous
manipulation problems are not hidden/scattered but are dealt with in an upfront
and easily auditable manner).

But maybe this is nothing new to folks, and all of this is already being thought
about (solutions do seem to be appearing, and more discussion about said ideas
is always beneficial).

Just my thoughts...

Sent from my really tiny device...

On Jul 19, 2013, at 5:30 PM, Peter Feiner pe...@gridcentric.ca wrote:

 On Fri, Jul 19, 2013 at 4:36 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 This seems to me to be a good example where a library problem is leaking
 into the openstack architecture, right? That is IMHO a bad path to go down.

 I like to think of a world where this isn't a problem, design the correct
 solution there instead, and fix the eventlet problem. Other large
 applications don't fall back to rpc calls to get around database/eventlet
 scaling issues afaik.

 Honestly I would almost just want to finally fix the eventlet problem (chris
 b. I think has been working on it) and design a system that doesn't try to
 work around a library's shortcomings. But maybe that's too much idealism, idk...
 
 Well, there are two problems that multiple nova-conductor processes
 fix. One is the bad interaction between eventlet and native code. The
 other is allowing multiprocessing.  That is, once nova-conductor
 starts to handle enough requests, enough time will be spent holding
 the GIL to make it a bottleneck; in fact I've had to scale keystone
 using multiple processes because of GIL contention (i.e., keystone was
 steadily at 100% CPU utilization when I was hitting OpenStack with
 enough requests). So multiple processes aren't avoidable. Indeed, other
 software that strives for high concurrency, such as apache, uses
 multiple processes to avoid contention for per-process kernel
 resources like the mmap semaphore.
 
 This doesn't even touch on the synchronization issues that can happen when you
 start pumping db traffic over a mq. E.g., an update is now queued behind
 another update, and the second one conflicts with the first; where does
 resolution happen when an async mq call is used? What about when you have X
 conductors doing Y reads and Z updates? I don't even want to think about the
 sync/races there (and so on...). Did you hit / check for any consistency
 issues in your tests? Consistency issues under high load using multiple
 conductors scare the bejezzus out of me.
 
 If a sequence of updates needs to be atomic, then they should be made
 in the same database transaction. Hence nova-conductor's interface
 isn't do_some_sql(query), it's a bunch of high-level nova operations
 that are implemented using transactions.
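This can be sketched with stdlib sqlite3 (nova really goes through SQLAlchemy, and the table and column values here are simplified illustrations, not nova's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (uuid TEXT PRIMARY KEY, "
    "vm_state TEXT, task_state TEXT)"
)
conn.execute("INSERT INTO instances VALUES ('i-1', 'building', 'spawning')")

def finish_build(conn, uuid):
    """High-level operation: both columns flip in one transaction, so a
    reader can never observe vm_state='active' with task_state='spawning'."""
    with conn:  # one transaction; rolls back automatically on exception
        conn.execute(
            "UPDATE instances SET vm_state='active', task_state=NULL "
            "WHERE uuid=? AND vm_state='building'",
            (uuid,),
        )

finish_build(conn, "i-1")
```

A do_some_sql-style interface would instead expose the two column updates as separate queued calls, reintroducing exactly the interleaving problem described above.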
 


