[openstack-dev] Fwd: [Blueprint thunderboost] A Lightweight Proposal for Fast Booting Many Homogeneous Virtual Machines

2014-07-26 Thread quanyongf
Subject: [Blueprint thunderboost] A Lightweight Proposal for Fast Booting Many Homogeneous Virtual Machines

Blueprint changed by vmThunder:

Whiteboard changed:

We propose to add a new method, named "Boot from VMThunder", for fast booting multiple homogeneous VMs. This method uses a third-party library (VMThunder) to support simultaneous booting of a large number of VMs. VMThunder configures each VM with two volumes (the figure can be found at http://www.kylinx.com/vmthunder/vmthunder.png): a (read-only) template volume exactly the same as the pre-created original volume, and a (writable) snapshot volume storing each VM's differences from the template. The original volume is the root of a template-volume relay tree, and each VM fetches only the necessary data from its parent over the multipath iSCSI protocol. In addition, VMThunder uses a compute node's local storage as a cache to accelerate the image transfer process and avoid repetitive data transfers. The P2P-style, on-demand data transfer dramatically accelerates the VMs' booting process.

Our modification to Nova is lightweight (about 80 lines of insertions and deletions). Two major functions, i.e. the creation and deletion of the template and snapshot volumes, are implemented as follows: (i) creation: we add a volume-driver class (about 50 lines, depending on VMThunder's API) in "nova/virt/block_device.py" to prepare the template and snapshot volumes; (ii) deletion: we add a delete method (about 20 lines, depending on VMThunder's API) in "nova/compute/manager.py" to destroy the unused template and snapshot volumes.

More details of the implementation can be found at the following links:
Paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf
Modification diff file: http://www.kylinx.com/vmthunder/diff2.txt
VMThunder demo video: http://www.kylinx.com/vmthunder/boot_vmthunder_win7_success-V2.mp4
Image booting demo video: http://www.kylinx.com/vmthunder/boot_image_test_win7_success-V2.mp4
Mailing list: http://lists.openstack.org/pipermail/openstack-dev/2014-April/032883.html

Gerrit topic: https://review.openstack.org/#q,topic:bp/thunderboost,

2014-07-21: Addressed by https://review.openstack.org/#/c/94060/
    add the major version: bp/thunderboost
    Change-Id: I174bff2a96ff82adb5894a84f33da91417df5b5f

---
You should not set a milestone target unless the blueprint has been properly prioritized by the project drivers. (This is an automated message)

--
A Lightweight Proposal for Fast Booting Many Homogeneous Virtual Machines
https://blueprints.launchpad.net/nova/+spec/thunderboost
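To make the two-volume layout concrete, here is a small conceptual sketch of the copy-on-write idea in plain Python. It models only the behaviour described above; it is not the actual VMThunder driver or the Nova patch (see the diff link for those):

# Conceptual model of VMThunder's two-volume layout: one read-only
# template shared by all VMs, plus a per-VM writable snapshot holding
# only that VM's differences. Illustrative only.

class TemplateVolume:
    """Read-only; identical to the pre-created original volume."""
    def __init__(self, blocks):
        self._blocks = blocks

    def read(self, offset):
        return self._blocks[offset]

class SnapshotVolume:
    """Writable overlay; stores only this VM's delta to the template."""
    def __init__(self, template):
        self._template = template
        self._delta = {}

    def read(self, offset):
        # Serve locally written blocks first; fall back to the template.
        return self._delta.get(offset, self._template.read(offset))

    def write(self, offset, data):
        self._delta[offset] = data

template = TemplateVolume({0: b"kernel", 1: b"rootfs"})
vm_a, vm_b = SnapshotVolume(template), SnapshotVolume(template)
vm_a.write(1, b"vm-a local changes")
print(vm_a.read(1), vm_b.read(1))  # b'vm-a local changes' b'rootfs'

___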
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-26 Thread Jay Pipes

On 07/24/2014 06:36 PM, John Dickinson wrote:

On Jul 24, 2014, at 3:25 PM, Sean Dague  wrote:

On 07/24/2014 06:15 PM, Angus Salkeld wrote:

We do this in Solum and I really like it. It's nice for the same
reviewers to see the functional tests and the code that implements
a feature.

One downside is that we have had failures due to Tempest reworking
its client code. This hasn't happened for a while, but it would
be good for Tempest to recognize that people are using Tempest as
a library and to maintain a stable API.


To be clear, the functional tests will not be Tempest tests. This
is a different class of testing; it's really another tox target
that needs a devstack to run. A really good initial transition
would be things like the CLI testing.
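For readers unfamiliar with the shape of such tests, here is a minimal sketch of the kind of CLI smoke test a devstack-backed tox target might run (the OS_* variables and the `nova list` invocation are illustrative assumptions, not an existing test):

# Sketch of an in-tree functional test: it assumes a live devstack and
# standard OS_* auth variables in the environment, and just checks
# that the CLI round-trips against the running services.
import os
import subprocess
import unittest

@unittest.skipUnless(os.environ.get("OS_AUTH_URL"),
                     "needs a running devstack and OS_* credentials")
class TestNovaCLI(unittest.TestCase):
    def test_list_servers(self):
        # `nova list` should succeed against a freshly deployed devstack.
        out = subprocess.check_output(["nova", "list"])
        self.assertIn(b"ID", out)

if __name__ == "__main__":
    unittest.main()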


I too love this idea. In addition to the current Tempest tests that
are run against every patch, Swift has in-tree unit, functional[1],
and probe[2] tests. This makes it quite easy to test locally before
submitting patches and makes keeping test coverage high much easier
too. I'm really happy to hear that this will be the future direction
of testing in OpenStack.


And Glance has had functional tests in-tree for 3 years:

http://git.openstack.org/cgit/openstack/glance/tree/glance/tests/functional

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-07-26 Thread Hayes, Graham
On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
> On 07/22/2014 11:58 AM, David Kranz wrote:
> > On 07/22/2014 10:44 AM, Sean Dague wrote:
> >> Honestly, I'm really not sure I see this as a different program; it is
> >> really something that should be folded into the QA program. I feel like
> >> a top-level effort like this is going to lead to a lot of duplication in
> >> the data analysis that's currently going on, as well as in the
> >> functionality for a better load-driver UX.
> >>
> >>-Sean
> > +1
> > It will also lead to pointless discussions/arguments about which
> > activities are part of "QA" and which are part of
> > "Performance and Scalability Testing".

I think that those discussions will still take place; they will just be on
a per-repository basis instead of a per-program one.

[snip]

> 
> Right, 100% agreed. Rally would remain with its own repo + review team,
> just like grenade.
> 
>   -Sean
> 

Is the concept of a separate review team not the point of a program?

In the thread from Designate's Incubation request, Thierry said [1]:

> "Programs" just let us bless goals and teams and let them organize 
> code however they want, with contribution to any code repo under that
> umbrella being considered "official" and ATC-status-granting.

I do think that this is something that needs to be clarified by the TC.
Rally could not get a PTL if it were part of the QA program, but every
time we get a program request, the same discussion happens.

I think that mission statements can be edited to fit new programs as
they occur, and that it is more important to let teams that have been
working closely together stay together as a distinct group.

Graham

1 -
http://lists.openstack.org/pipermail/openstack-dev/2014-May/036213.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-26 Thread Mandeep Dhami
Thanks Jay. I agree with your position on it, and that is exactly what I
would expect as the process in a collaborative community. That "feels like
the right way" ;-)

Unfortunately, there have been situations where we have had to ask a
reviewer multiple times to re-review the code (after issues identified in a
previous review have been addressed). Then you struggle between "am I
pestering the reviewer?" and "what more can we do / needs to be done, please
help us understand" - and the absence of that feedback discourages new
contributors and piles up patches for the big deluge near the cut-off
deadlines. My suggestion was aimed at dealing with outliers like that.

If there were a clear guideline that facilitated the smooth flow of patches,
and an automated reminder that did not make the person asking for reviews
feel that he/she is pestering, that might help. Or, if we updated infra
to report the average number of days a negative review went without a
re-review after a new patch set, we could address the outliers when we
see them. Just an idea for handling the outliers, not the normal flow.
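As a rough sketch of what such a report could pull from Gerrit's standard REST API (the query and the lag heuristic are illustrative assumptions, not an existing infra tool):

# List open changes for a project via Gerrit's /changes/ endpoint.
# A real lag report would compare each change's last patch-set upload
# time against the last activity of reviewers holding a -1/-2 and flag
# the outliers; this only shows the raw material is easy to get.
import json
import urllib.request

GERRIT = "https://review.openstack.org"

def open_changes(project):
    url = "%s/changes/?q=project:%s+status:open" % (GERRIT, project)
    raw = urllib.request.urlopen(url).read().decode("utf-8")
    # Gerrit prefixes JSON bodies with )]}' to prevent XSSI; drop it.
    return json.loads(raw.split("\n", 1)[1])

for change in open_changes("openstack/neutron"):
    print(change["_number"], change["updated"], change["subject"])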

Regards,
Mandeep
-



On Sat, Jul 26, 2014 at 10:19 AM, Jay Pipes  wrote:

> On 07/25/2014 05:48 PM, Mandeep Dhami wrote:
>
>> Thanks for the deck Jay, that is very helpful.
>>
>> Also, would it help the process by having some clear
>> guidelines/expectations around review time as well? In particular, if
>> you have put a -1 or -2, and the issues that you have identified have
>> been addressed by an update (or at least the original author thinks that
>> he has addressed your concern), is it reasonable to expect that you will
>> re-review in a "reasonable time"? This way, the updates can either
>> proceed, or be rejected, as they are being developed instead of
>> accumulating in a backlog that we then try to get approved on the last
>> day of the cut-off?
>>
>
> Guilty as charged, Mandeep. :( If I have failed to re-review something in
> a timely manner, please don't hesitate to either find me on IRC or send me
> an email saying "hey, don't forget about XYZ". People get behind on reviews
> and sometimes things slip the mind. A gentle reminder is all that is
> needed, usually.
>
> As for a hard number of days before sending an email notification, that
> might be possible, but it's not like we all have our vacation reminders
> linked in to Gerrit ;) I think a more personal email or IRC request for
> specific reviews is more appropriate.
>
> Best,
> -jay
>
>  On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon <sgor...@redhat.com> wrote:
>>
>> - Original Message -
>>  > From: "Jay Pipes" mailto:jaypi...@gmail.com>>
>>  > To: openstack-dev@lists.openstack.org
>> 
>>  >
>>  > On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
>>  > > Alan Kavanagh wrote:
>>  > >
>>  > >> If we have more work being put on the table, then more Core
>>  > >> members would definitely go a long way with assisting this, we
>> can't
>>  > >> wait for folks to be reviewing stuff as an excuse to not get
>>  > >> features landed in a given release.
>>  >
>>  > We absolutely can and should wait for folks to be reviewing stuff
>>  > properly. A large number of problems in OpenStack code and flawed
>> design
>>  > can be attributed to impatience and pushing through code that
>> wasn't ready.
>>  >
>>  > I've said this many times, but the best way to get core reviews on
>>  > patches that you submit is to put the effort into reviewing others'
>>  > code. Core reviewers are more willing to do reviews for someone
>> who is
>>  > clearly trying to help the project in more ways than just pushing
>> their
>>  > own code. Note that, Alan, I'm not trying to imply that you are
>> guilty
>>  > of the above! :) I'm just recommending techniques for the general
>>  > contributor community who are not on a core team (including
>> myself!).
>>
>> I agree with all of the above. I do think, however, that there is another
>> unaddressed area where there *may* be room for optimization, which
>> is how we use the earlier milestones. I apologize in advance because
>> this is somewhat tangential to Alan's points, but I think it is
>> relevant to the general frustration around what did/didn't get
>> approved in time for the deadline, and ultimately what will or won't
>> get reviewed in time to make the release versus being punted to Kilo
>> or even further down the road.
>>
>> We land very, very little in terms of feature work in the *-1 and
>> *-2 milestones in each release (and this is not just a Neutron
>> thing). Even though we know without a doubt that the amount of work
>> currently approved for J-3 is not realistic, we also know that we
>> will land significantly more features in this milestone than in the
>> other two that have already been and gone, which

Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-26 Thread Mandeep Dhami
Wow! These are the exact questions that I have struggled with. Thanks for
stating them so clearly.

Regards,
Mandeep
-



On Sat, Jul 26, 2014 at 11:02 AM, Luke Gorrie  wrote:

> On 25 July 2014 20:05, Stefano Maffulli  wrote:
>
>> Indeed, communication is key. I'm not sure how you envision
>> implementing this, though. We do send a message to first-time
>> contributors[1] to explain to them how the review process works and give
>> them very basic suggestions on how to react to comments (including what
>> to do if things seem stuck). The main issue here, though, is that few
>> people read emails; it's a basic fact of life.
>>
>
> That welcome message does seem to do a really good job of setting
> expectations.
>
> Can you explain more what you have in mind?
>>
>
> Here are some other topics that seem to take some time to develop a mental
> model of:
>
> How quickly and how often should you revise your patchset after a -1? (Is
> it better to give the community a week or so to collectively comment? Or
> should you revise ASAP after every negative review?)
>
> How do you know if your change is likely to merge? (If you have had 15
> rounds of -1 votes and the last milestone deadline is a few days away,
> should you relax because your code is so thoroughly reviewed or should you
> despair because it should have been merged by now?)
>
> In the final days before a merge deadline, would it be rude to "poke" the
> person responsible for merging, or would it be negligent not to?
>
> How do you decide which IRC meetings to attend? (For meetings that occur
> at difficult times outside of working hours in your timezone, when are you
> expected to attend them? Is it okay to focus on email/informal
> communication if that suits you better and gets the job done?)
>
> If you're new to the project and you don't know anybody, who can you ask
> about this stuff?
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-26 Thread Luke Gorrie
On 25 July 2014 20:05, Stefano Maffulli  wrote:

> Indeed, communication is key. I'm not sure how you envision
> implementing this, though. We do send a message to first-time
> contributors[1] to explain to them how the review process works and give
> them very basic suggestions on how to react to comments (including what
> to do if things seem stuck). The main issue here, though, is that few
> people read emails; it's a basic fact of life.
>

That welcome message does seem to do a really good job of setting
expectations.

Can you explain more what you have in mind?
>

Here are some other topics that seem to take some time to develop a mental
model of:

How quickly and how often should you revise your patchset after a -1? (Is
it better to give the community a week or so to collectively comment? Or
should you revise ASAP after every negative review?)

How do you know if your change is likely to merge? (If you have had 15
rounds of -1 votes and the last milestone deadline is a few days away,
should you relax because your code is so thoroughly reviewed or should you
despair because it should have been merged by now?)

In the final days before a merge deadline, would it be rude to "poke" the
person responsible for merging, or would it be negligent not to?

How do you decide which IRC meetings to attend? (For meetings that occur at
difficult times outside of working hours in your timezone, when are you
expected to attend them? Is it okay to focus on email/informal
communication if that suits you better and gets the job done?)

If you're new to the project and you don't know anybody, who can you ask
about this stuff?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ova support in glance

2014-07-26 Thread Georgy Okrokvertskhov
Hi,

Please take a look at this document:
http://www.dmtf.org/sites/default/files/standards/documents/DSP0265_1.0.0.pdf
It clarifies what OVF is and how it should be used; check section 9 for
use cases.
From our experience, OVA/OVF is used to deliver applications in the form of
pre-baked images plus the deployment options needed to successfully deploy the
application. The OVF format is close to TOSCA and can define not only resources
but also network configuration, startup scripts, software installation, and
license agreements for proprietary software.

I want to highlight that the OVA import procedure in VMware ends with actual
instance creation rather than just keeping disk images. There is a reason
for that: OVF defines the deployment procedure, and VMware will even
generate a UI to ask for specific deployment parameters like IP addresses,
hostnames, and application-specific options.

We have had OVA experience in the Murano project. We had a customer who uses
virtual appliances distributed in the form of OVA, and we had to convert them
into a set of image + Heat template + Murano workflow/UI. I think we are going
to support applications in OVA format in Murano, as we already plan to support
other formats like TOSCA and APS (the Parallels application standard).

Thanks
Georgy


On Sat, Jul 26, 2014 at 7:19 AM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:

> Thanks for sending out this message Malini.
>
> I'm really pleased that the "image import" mechanism we've been working on
> in Glance for a while is going to be helpful for supporting this kind of
> use case.
>
> The problem that I see is one of messaging. If we tell end users that
> "OpenStack can import and run OVAs" I think we're probably setting
> ourselves up for a serious problem with expectations. Since an OVA is *not*
> an image, and actually could be much broader in scope or more constrained,
> I'm worried that this import will fail for most users most of the time.
> This just creates a negative impression of our cloud, and may cause a
> significant support headache for some of our deployers.
>
> The plan I propose to respond to this challenge is as follows:
>
> 1) develop the initial OVA image import out of tree
> - the basic functionality is just to grab the root disk out of the ova
> and to set image properties based on some of the ovf metadata
> 2) assess what the median level of OVA complexity is out there in the wild
> among OVA users
> 3) make sufficient progress with artifacts to ensure we can cover the
> median level of OVA complexity in an OpenStack accessible way
> - openstack accessible to me means there probably has to be qemu-img
> / libvirt / heat support for a given OVA concept
> 4) Bring OVA import into the main tree as part of the "General Import" [1]
> operation once that artifact progress has been made
>
> However, I'm very interested to know if there are some folks more embedded
> with operators and deployers who can reassure me that this OVA messaging
> problem can be dealt with another way.
>
> Thanks!
>
>
> [1] As a reminder, the "General Import" item on our hazy future backlog is
> different from "Image Import" in the following way. For an image import,
> you are explicitly trying to create an image. For the general import, you
> show up to the cloud with some information and just ask for it to be
> imported, the import task itself will inspect the data you provide to
> determine what, if anything, can be created for it. This works well for
> OVAs because we may want to produce a disk image, a block device mapping
> artifact, or even up to the level of a heat template.
>
>
> On Fri, Jul 25, 2014 at 7:08 PM, Bhandaru, Malini K <
> malini.k.bhand...@intel.com> wrote:
>
>> Hello Everyone!
>>
>> We were discussing the following blueprint in Glance:
>> Enhanced-Platform-Awareness-OVF-Meta-Data-Import :
>> https://review.openstack.org/#/c/104904/
>>
>> The OVA format is very rich, and the proposal here in its first
>> incarnation is essentially to
>> untar the OVA package, import the first disk image therein, parse
>> the OVF file, and attach metadata to the disk image.
>> There is a Nova effort in a similar vein that supports OVA, limited to
>> the VMware hypervisor. Our efforts will combine.
>>
>> The issue raised is how many OpenStack users and OpenStack cloud
>> providers tackle OVA data with multiple disk images, using them as an
>> application.
>> Do your users use OVAs with content other than one disk image + OVF?
>> That is, do they contain other files that are used? Do any of you use OVAs
>> with snapshot chains?
>> Would this solution path break your system and result in unhappy users?
>>
>>
>> If the solution will address at least 50% of the use cases (a low bar)
>> and ease deploying NFV applications, it would be worthwhile.
>> If so, how would we message around this so as not to imply that OpenStack
>> supports OVA in its full glory?
>>
>> Down the road the Artefacts blueprint will provide a placeholder for

Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-26 Thread Jay Pipes

On 07/25/2014 05:48 PM, Mandeep Dhami wrote:

Thanks for the deck Jay, that is very helpful.

Also, would it help the process by having some clear
guidelines/expectations around review time as well? In particular, if
you have put a -1 or -2, and the issues that you have identified have
been addressed by an update (or at least the original author thinks that
he has addressed your concern), is it reasonable to expect that you will
re-review in a "reasonable time"? This way, the updates can either
proceed, or be rejected, as they are being developed instead of
accumulating in a backlog that we then try to get approved on the last
day of the cut-off?


Guilty as charged, Mandeep. :( If I have failed to re-review something 
in a timely manner, please don't hesitate to either find me on IRC or 
send me an email saying "hey, don't forget about XYZ". People get behind 
on reviews and sometimes things slip the mind. A gentle reminder is all 
that is needed, usually.


As for a hard number of days before sending an email notification, that 
might be possible, but it's not like we all have our vacation reminders 
linked in to Gerrit ;) I think a more personal email or IRC request for 
specific reviews is more appropriate.


Best,
-jay


On Fri, Jul 25, 2014 at 12:50 PM, Steve Gordon <sgor...@redhat.com> wrote:

- Original Message -
 > From: "Jay Pipes" mailto:jaypi...@gmail.com>>
 > To: openstack-dev@lists.openstack.org

 >
 > On 07/24/2014 10:05 AM, CARVER, PAUL wrote:
 > > Alan Kavanagh wrote:
 > >
 > >> If we have more work being put on the table, then more Core
 > >> members would definitely go a long way with assisting this, we
can't
 > >> wait for folks to be reviewing stuff as an excuse to not get
 > >> features landed in a given release.
 >
 > We absolutely can and should wait for folks to be reviewing stuff
 > properly. A large number of problems in OpenStack code and flawed
design
 > can be attributed to impatience and pushing through code that
wasn't ready.
 >
 > I've said this many times, but the best way to get core reviews on
 > patches that you submit is to put the effort into reviewing others'
 > code. Core reviewers are more willing to do reviews for someone
who is
 > clearly trying to help the project in more ways than just pushing
their
 > own code. Note that, Alan, I'm not trying to imply that you are
guilty
 > of the above! :) I'm just recommending techniques for the general
 > contributor community who are not on a core team (including myself!).

I agree with all of the above. I do think, however, that there is another
unaddressed area where there *may* be room for optimization, which
is how we use the earlier milestones. I apologize in advance because
this is somewhat tangential to Alan's points, but I think it is
relevant to the general frustration around what did/didn't get
approved in time for the deadline, and ultimately what will or won't
get reviewed in time to make the release versus being punted to Kilo
or even further down the road.

We land very, very little in terms of feature work in the *-1 and
*-2 milestones in each release (and this is not just a Neutron
thing). Even though we know without a doubt that the amount of work
currently approved for J-3 is not realistic, we also know that we
will land significantly more features in this milestone than in the
other two that have already been and gone, which to my way of
thinking is actually kind of backwards from the ideal situation.

What is unclear to me, however, is how much of this is a result of
difficulty identifying and approving less controversial/more
straightforward specifications quickly following the summit (keeping in
mind this time around there was arguably some additional delay as
the *-specs repository approach was bedded down), an unavoidable
result of human nature being to *really* push when there is a *hard*
deadline to beat, or just that these earlier milestones are somewhat
impacted by fatigue from the summit (I know a lot of people also
try to take some well-earned time off around this period, and of course
many are still concentrated on stabilization of the previous
release). As a result it's unclear whether there is anything
concrete that can be done to change this, but I thought I would bring
it up in case anyone else has any bright ideas!

 > [SNIP]

 > > We ought to (in my personal opinion) be supplying core reviewers to
 > > at least a couple of OpenStack projects. But one way or another we
 > > need to get more capabilities reviewed and merged. My personal top
 > > disappointments are with the current state of IPv6, HA, and
QoS, but
 > > I'm sure other folks can list lots of other capabilities that
 > > they're re

Re: [openstack-dev] [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-26 Thread Denis Makogon
This actually is a good question. The WSGI framework was deprecated in the
Icehouse release (as far as I recall), so Trove should migrate to the Pecan
REST framework as soon as possible during the Kilo cycle.
So, for now, the short answer: unfortunately, it's impossible to fix Trove
to be ready for Python 3.4.
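For those who haven't used it, a minimal Pecan sketch to give a sense of the framework Trove would move to (the resource and route below are illustrative, not Trove's actual API):

# Minimal Pecan application exposing one JSON resource at "/".
import pecan
from pecan import expose
from wsgiref.simple_server import make_server

class RootController(object):
    @expose('json')
    def index(self):
        return {'versions': [{'id': 'v1.0', 'status': 'CURRENT'}]}

app = pecan.Pecan(RootController())

if __name__ == '__main__':
    # Serve on localhost:8080 for a quick manual check.
    make_server('127.0.0.1', 8080, app).serve_forever()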


Best regards,
Denis Makogon


On Saturday, 26 July 2014, Thomas Goirand wrote:

> Hi,
>
> Trove is using wsgi-intercept, so it ended up in
> global-requirements.txt. It was OK until what's below...
>
> I was trying to fix this bug:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=755315
>
> then I realized that the latest version had the fix for Python 3.4. So I
> tried upgrading. But in doing so, I found out that wsgi-intercept now
> imports mechanize.
>
> The mechanize package from PyPI is in a *very* bad state. It embeds all
> sorts of Python modules, like request, rfc3986, urllib2, beautifulsoup,
> and probably a lot more. It also isn't Python 3 compatible. I tried
> patching it. I ended up with:
>
>  _beautifulsoup.py |   12 ++--
>  _form.py  |   12 ++--
>  _html.py  |8 
>  _http.py  |4 ++--
>  _mechanize.py |2 +-
>  _msiecookiejar.py |4 ++--
>  _opener.py|2 +-
>  _sgmllib_copy.py  |   28 ++--
>  _urllib2_fork.py  |   14 +++---
>  9 files changed, 43 insertions(+), 43 deletions(-)
>
> Probably that's not even enough to make it work with Python 3.4.
>
> Then I tried running the unit tests. First, they fail with Python 2.7 (2
> errors). It's worth noting that the unit tests were not even run at build
> time for the package. Then for Python 3, there are all sorts of errors
> that need to be fixed as well...
>
> At this point, I gave up on mechanize. But then, this makes me wonder:
> can we continue to use wsgi-intercept if it depends on such a bad Python
> module?
>
> If we are to stick with an older version of wsgi-intercept (which I do not
> recommend, for maintainability reasons), could someone help me fix
> the Python 3.4 issue I'm having with wsgi-intercept? Removing Python 3
> support would be sad... :(
>
> Your thoughts?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-26 Thread Thomas Goirand
Hi,

Trove is using wsgi-intercept, so it ended up in
global-requirements.txt. It was OK until what's below...
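(For background: wsgi-intercept's job is to let a test's HTTP client exercise a WSGI app in-process, with no real socket. A hand-rolled stand-in for the idea, which is not wsgi-intercept's actual API, looks roughly like this:)

# Conceptual stand-in for what wsgi-intercept provides: drive a WSGI
# app in-process, no network involved. Illustrative only.
from io import BytesIO

def call_wsgi_app(app, path="/"):
    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "80",
        "wsgi.input": BytesIO(b""),
        "wsgi.url_scheme": "http",
    }
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    body = b"".join(app(environ, start_response))
    return captured["status"], body

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

print(call_wsgi_app(hello_app))  # ('200 OK', b'hello')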

I was trying to fix this bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=755315

then I realized that the latest version had the fix for Python 3.4. So I
tried upgrading. But in doing so, I found out that wsgi-intercept now
imports mechanize.

The mechanize package from PyPI is in a *very* bad state. It embeds all
sorts of Python modules, like request, rfc3986, urllib2, beautifulsoup,
and probably a lot more. It also isn't Python 3 compatible. I tried
patching it. I ended up with:

 _beautifulsoup.py |   12 ++--
 _form.py  |   12 ++--
 _html.py  |8 
 _http.py  |4 ++--
 _mechanize.py |2 +-
 _msiecookiejar.py |4 ++--
 _opener.py|2 +-
 _sgmllib_copy.py  |   28 ++--
 _urllib2_fork.py  |   14 +++---
 9 files changed, 43 insertions(+), 43 deletions(-)

Probably that's not even enough to make it work with Python 3.4.

Then I tried running the unit tests. First, they fail with Python 2.7 (2
errors). It's worth noting that the unit tests were not even run at build
time for the package. Then for Python 3, there are all sorts of errors
that need to be fixed as well...

At this point, I gave up on mechanize. But then, this makes me wonder:
can we continue to use wsgi-intercept if it depends on such a bad Python
module?

If we are to stick with an older version of wsgi-intercept (which I do not
recommend, for maintainability reasons), could someone help me fix
the Python 3.4 issue I'm having with wsgi-intercept? Removing Python 3
support would be sad... :(

Your thoughts?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ova support in glance

2014-07-26 Thread Mark Washenberger
Thanks for sending out this message Malini.

I'm really pleased that the "image import" mechanism we've been working on
in Glance for a while is going to be helpful for supporting this kind of
use case.

The problem that I see is one of messaging. If we tell end users that
"OpenStack can import and run OVAs" I think we're probably setting
ourselves up for a serious problem with expectations. Since an OVA is *not*
an image, and actually could be much broader in scope or more constrained,
I'm worried that this import will fail for most users most of the time.
This just creates a negative impression of our cloud, and may cause a
significant support headache for some of our deployers.

The plan I propose to respond to this challenge is as follows:

1) develop the initial OVA image import out of tree
- the basic functionality is just to grab the root disk out of the ova
and to set image properties based on some of the ovf metadata
2) assess what the median level of OVA complexity is out there in the wild
among OVA users
3) make sufficient progress with artifacts to ensure we can cover the
median level of OVA complexity in an OpenStack accessible way
- openstack accessible to me means there probably has to be qemu-img
/ libvirt / heat support for a given OVA concept
4) Bring OVA import into the main tree as part of the "General Import" [1]
operation once that artifact progress has been made

However, I'm very interested to know if there are some folks more embedded
with operators and deployers who can reassure me that this OVA messaging
problem can be dealt with another way.

Thanks!


[1] As a reminder, the "General Import" item on our hazy future backlog is
different from "Image Import" in the following way. For an image import,
you are explicitly trying to create an image. For the general import, you
show up to the cloud with some information and just ask for it to be
imported, the import task itself will inspect the data you provide to
determine what, if anything, can be created for it. This works well for
OVAs because we may want to produce a disk image, a block device mapping
artifact, or even up to the level of a heat template.
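To make step 1 concrete, here is a minimal sketch of the extraction described above: untar the OVA, pull out the first disk, and read a little OVF metadata. The file-name heuristics and the property pulled out are illustrative assumptions, not the actual out-of-tree code:

# Untar an OVA, extract the first disk image, and parse a property
# from the OVF descriptor (OVF 1.x envelope namespace).
import tarfile
import xml.etree.ElementTree as ET

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def extract_ova(ova_path, dest="/tmp"):
    with tarfile.open(ova_path) as tar:
        names = tar.getnames()
        ovf_name = next(n for n in names if n.endswith(".ovf"))
        disk_name = next(n for n in names if n.endswith(".vmdk"))
        envelope = ET.fromstring(tar.extractfile(ovf_name).read())
        tar.extract(disk_name, path=dest)

    # Pull whatever metadata we recognize out of the OVF envelope;
    # here, just the declared disk capacity.
    properties = {}
    for disk in envelope.iter(OVF_NS + "Disk"):
        properties["capacity"] = disk.get(OVF_NS + "capacity")
    return dest + "/" + disk_name, properties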


On Fri, Jul 25, 2014 at 7:08 PM, Bhandaru, Malini K <
malini.k.bhand...@intel.com> wrote:

> Hello Everyone!
>
> We were discussing the following blueprint in Glance:
> Enhanced-Platform-Awareness-OVF-Meta-Data-Import :
> https://review.openstack.org/#/c/104904/
>
The OVA format is very rich, and the proposal here in its first incarnation
is essentially to
untar the OVA package, import the first disk image therein, parse
the OVF file, and attach metadata to the disk image.
There is a Nova effort in a similar vein that supports OVA, limited to
the VMware hypervisor. Our efforts will combine.
>
The issue raised is how many OpenStack users and OpenStack cloud
providers tackle OVA data with multiple disk images, using them as an
application.
Do your users use OVAs with content other than one disk image + OVF?
That is, do they contain other files that are used? Do any of you use OVAs
with snapshot chains?
Would this solution path break your system and result in unhappy users?
>
>
If the solution will address at least 50% of the use cases (a low bar) and
ease deploying NFV applications, it would be worthwhile.
If so, how would we message around this so as not to imply that OpenStack
supports OVA in its full glory?
>
Down the road, the Artefacts blueprint will provide a placeholder for OVA.
Perhaps the OVA format may even be transformed into a Heat template to work
in OpenStack.
>
Please do provide us your feedback.
> Regards
> Malini
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed Changes to Tempest Core

2014-07-26 Thread Frittoli, Andrea (HP Cloud)
Thank you, everyone, for your votes and your trust.
I'm proud to join the Tempest core team!

Andrea

Sent from my tiny device


 Matthew Treinish wrote 

So all of the current core team members have voted unanimously in favor of
adding Andrea to the team.

Welcome to the team Andrea.

-Matt Treinish

On Fri, Jul 25, 2014 at 01:32:27PM -0400, Attila Fazekas wrote:
> +1
>
>
> - Original Message -
> > From: "Matthew Treinish" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Tuesday, July 22, 2014 12:34:28 AM
> > Subject: [openstack-dev] [QA] Proposed Changes to Tempest Core
> >
> >
> > Hi Everyone,
> >
> > I would like to propose 2 changes to the Tempest core team:
> >
> > First, I'd like to nominate Andrea Frittoli to the Tempest core team.
> > Over the past cycle Andrea has steadily become more actively engaged in
> > the Tempest community. Besides his code contributions around refactoring
> > Tempest's authentication and credentials code, he has been providing
> > reviews of consistently high quality that show insight into both the
> > project internals and its future direction. In addition, he has been
> > active in the qa-specs repo, both providing reviews and spec proposals,
> > which has been very helpful as we've been adjusting to the new process.
> > Keeping in mind that becoming a member of the core team is about earning
> > the trust of the members of the current core team through communication
> > and quality reviews, not simply a matter of review numbers, I feel that
> > Andrea will make an excellent addition to the team.
> >
> > As per usual, current Tempest core team members, please vote +1 or
> > -1 (veto) on the nomination when you get a chance. We'll keep the polls
> > open for 5 days or until everyone has voted.
> >
> > References:
> >
> > https://review.openstack.org/#/q/reviewer:%22Andrea+Frittoli+%22,n,z
> >
> > http://stackalytics.com/?user_id=andrea-frittoli&metric=marks&module=qa-group
> >
> >
> > The second change that I'm proposing today is to remove Giulio Fidente
> > from the core team. He asked to be removed a few weeks back because he
> > is no longer able to dedicate the required time to Tempest reviews. So
> > if there are no objections, I will remove him from the core team in a
> > few days. Sorry to see you leave the team, Giulio...
> >
> >
> > Thanks,
> >
> > Matt Treinish
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev