Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-10 Thread Radomir Dopieralski
On 10/03/14 17:50, Lyle, David wrote:
> The results are unanimous.  Congratulations Radomir and welcome to the 
> Horizon Core team.

Thank you everyone, I will do my best!

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Bohai (ricky)
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, March 11, 2014 3:20 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] a question about instance snapshot
>
> On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> > We have very strong interest in pursing this feature in the VMware
> > driver as well. I would like to see the revert instance feature
> > implemented at least.
> >
> > When I used to work in multi-discipline roles involving operations it
> > would be common for us to snapshot a vm, run through an upgrade
> > process, then revert if something did not upgrade smoothly. This
> > ability alone can be exceedingly valuable in long-lived virtual
> > machines.
> >
> > I also have some comments from parties interested in refactoring how
> > the VMware drivers handle snapshots but I'm not certain how much that
> > plays into this "live snapshot" discussion.
>
> I think the reason that there isn't much interest in doing this kind of thing 
> is
> because the worldview that VMs are pets is antithetical to the worldview that
> VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
> vSphere tends to favor the former).
>
> There's nothing about your scenario above of being able to "revert" an 
> instance
> to a particular state that isn't possible with today's Nova.
> Snapshotting an instance, doing an upgrade of software on the instance, and
> then restoring from the snapshot if something went wrong (reverting) is
> already fully possible to do with the regular Nova snapshot and restore
> operations. The only difference is that the "live-snapshot"
> stuff would include saving the memory view of a VM in addition to its disk 
> state.
> And that, at least in my opinion, is only needed when you are treating VMs 
> like
> pets and not cattle.
>

Hi Jay,

I read every word of your reply and respect what you said.

But I can't agree that memory snapshot is a feature for pets and not for cattle.
I think it's a useful feature no matter how you look at the instance.

The world doesn't care how we look at instances; in fact, almost all of the
mainstream hypervisors already support memory snapshots.
If it were just a dispensable feature that no users need, I can't understand why
the hypervisors would provide it without exception.

In the document "OpenStack Operations Guide", the section "Live snapshots" has the
following words:
" To ensure that important services have written their contents to disk (such 
as, databases),
we recommend you read the documentation for those applications to determine 
what commands
to issue to have them sync their contents to disk. If you are unsure how to do 
this,
 the safest approach is to simply stop these running services normally.
"
This just pushes all of the responsibility for guaranteeing the consistency of the
instance onto the end user.
It's not convenient at all, and I doubt whether it's appropriate.


Best regards to you.
Ricky

> Best,
> -jay
>
> > On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky) 
> wrote:
> > >> -Original Message-
> > >> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
> > >> Sent: Sunday, March 09, 2014 10:04 PM
> > >> To: OpenStack Development Mailing List (not for usage questions)
> > >> Subject: Re: [openstack-dev] [nova] a question about instance
> > >> snapshot
> > >>
> > >> Hi, Jeremy, the discussion at here
> > >> http://lists.openstack.org/pipermail/openstack-dev/2013-August/0136
> > >> 88.html
> > >>
> > >
> > > I have a great interest in the topic too.
> > > I read the link you provided, and one thing confuses me a little.
> > > I agree with the security considerations in the discussion, and that a memory
> snapshot can't easily be used for cloning an instance.
> > >
> > > But I think it's safe to use for instance revert.
> > > And reverting an instance to a checkpoint is valuable for the user.
> > > Why don't we use it for instance revert as a first step?
> > >
> > > Best regards to you.
> > > Ricky
> > >
> > >> Thanks
> > >> Alex
> > >> On 2014年03月07日 10:29, Liuji (Jeremy) wrote:
> > >> > Hi, all
> > >> >
> > >> > Current openstack seems not support to snapshot instance with
> > >> > memory and
> > >> dev states.
> > >> > I searched the blueprint and found two relational blueprint like below.
> > >> > But these blueprint failed to get in the branch.
> > >> >
> > >> > [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
> > >> > [2]:
> > >> > https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
> > >> >
> > >> > In the blueprint[1], there is a comment,"
> > >> > We discussed this pretty extensively on the mailing list and in a
> > >> > design
> > >> summit session.
> > >> > The consensus is that this is not a feature we would like to have in 
> > >> > nova.
> > >> --russellb "
> > >> > But I can't find the discuss mail about it. I hope to know why we think
> so ?
> > >> > Without memory snapshot, we can't provide the fea

Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread laserjetyang
Live snapshot has some issues on KVM, and I think that is a problem of the KVM
hypervisor. For VMware, live snapshot is quite mature, so starting with VMware
live snapshot would be a good way to begin.


On Tue, Mar 11, 2014 at 1:37 PM, Qin Zhao  wrote:

> Hi Jay,
> When users move from old tools to new cloud tools, they also hope the new
> tool can inherit some good and well-known capabilities. Sometimes, assuming
> users will change their habits is dangerous (e.g. removing the Windows Start
> button). Live snapshot is indeed a very useful feature of hypervisors, and
> it has been widely used for several years (especially with VMware). I think it
> is not harmful to the existing Nova structure and workflow, and it will make it
> easier for more people to adopt OpenStack.
>
>
> On Tue, Mar 11, 2014 at 6:15 AM, Jay Pipes  wrote:
>
>> On Mon, 2014-03-10 at 15:52 -0600, Chris Friesen wrote:
>> > On 03/10/2014 02:58 PM, Jay Pipes wrote:
>> > > On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
>> > >> While I understand the general argument about pets versus cattle. The
>> > >> question is, would you be willing to poke a few holes in the strict
>> > >> "cattle" abstraction for the sake of pragmatism. Few shops are going
>> > >> to make the direct transition in one move. Poking a hole in the
>> cattle
>> > >> abstraction allowing them to keep a pet VM might be very valuable to
>> > >> some shops making a migration.
>> > >
>> > > Poking holes in cattle aside, my experience with shops that prefer the
>> > > pets approach is that they are either:
>> > >
>> > >   * Not managing their servers themselves at all and just relying on
>> some
>> > > IT operations organization to manage everything for them, including
>> all
>> > > aspects of backing up their data as well as failing over and balancing
>> > > servers, or,
>> > >   * Hiding behind rationales of "needing to be secure" or "needing
>> 100%
>> > > uptime" or "needing no customer disruption" in order to avoid any
>> change
>> > > to the status quo. This is because the incentives inside legacy IT
>> > > application development and IT operations groups are typically towards
>> > > not rocking the boat in order to satisfy unrealistic expectations and
>> > > outdated interface agreements that are forced upon them by management
>> > > chains that haven't crawled out of the waterfall project management
>> funk
>> > > of the 1980s.
>> > >
>> > > Adding pet-based features to Nova would, IMO, just perpetuate the
>> above
>> > > scenarios and incentives.
>> >
>> > What about the cases where it's not a "preference" but rather just the
>> > inertia of pre-existing systems and procedures?
>>
>> You mean what I wrote in the second bullet point above?
>>
>> > If we can get them in the door with enough support for legacy stuff,
>> > then they might be easier to convince to do things the "cloud" way in
>> > the future.
>>
>> Yes, fair point, and that's what Shawn was saying as well. Just noting
>> that in my experience, the second part of the above sentence just
>> doesn't happen. Once you bring them over and offer them the tools from
>> their legacy environment, they aren't interested in changing. :)
>>
>> > If we stick with the hard-line cattle-only approach we run the risk of
>> > alienating them completely since redoing everything at once is generally
>> > not feasible.
>>
>> Yes, I understand that. I'm actually fine with including functionality
>> like memory snapshotting, but only if under no circumstances does it
>> negatively impact the service of compute to other tenants/users and will
>> not negatively impact the scaling factor of Nova either.
>>
>> I'm just not as optimistic as you are that once legacy IT folks have
>> their old tools, they will consider changing their habits. ;)
>>
>> Best,
>> -jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Qin Zhao
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Question about Ephemeral/swap disk and flavor

2014-03-10 Thread Chen CH Ji

Hi
 We are trying to fix some ephemeral disk related bugs and are also
considering our private driver implementation for ephemeral disks.
 I have some questions about "ephemeral_gb" in the flavor and the nova
boot option "--ephemeral", and I failed to find descriptions in the
existing documents (an example boot command for cases 1 and 2 is sketched below):

   1.   If I boot a new instance using a flavor with "ephemeral_gb" defined as
  20 G, and no "--ephemeral" is specified, does the new instance have a
  20 G additional ephemeral disk after it boots?
   2.   The flavor has a 20 G ephemeral disk defined, but two 5 G ephemeral
  disks are specified through the "--ephemeral" option of the nova boot
  command. What will the new instance get: one additional 20 G ephemeral
  disk, or two 5 G ephemeral disks instead?
   3.   If the answer to question 2 is "two 5 G ephemeral disks", what will
  happen if I resize this instance to a flavor with "ephemeral_gb"
  defined as 40 G?
   4.   I think "ephemeral_gb" in the flavor means the maximum disk size you
  can specify; is this understanding correct?
   5.   Also, the questions above also apply to the swap disk.

I also asked this on ask.openstack.org; I guess the developers who built this
feature can give more info:
https://ask.openstack.org/en/question/25088/ephemeral_gb-in-flavor-and-nova-boot-option-ephemeral/
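
To make cases 1 and 2 concrete, here is a hedged sketch of the boot commands I
have in mind (the flavor and image names are made up, and the exact
"--ephemeral" syntax is from memory, so it may differ):

# Case 1: flavor defines ephemeral_gb=20, no --ephemeral given
nova boot --flavor m1.eph20 --image cirros case1-vm

# Case 2: same flavor, but two 5 G ephemeral disks requested explicitly
nova boot --flavor m1.eph20 --image cirros \
    --ephemeral size=5 --ephemeral size=5 case2-vm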

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Qin Zhao
Hi Jay,
When users move from old tools to new cloud tools, they also hope the new
tool can inherit some good and well-known capabilities. Sometimes, assuming
users will change their habits is dangerous (e.g. removing the Windows Start
button). Live snapshot is indeed a very useful feature of hypervisors, and
it has been widely used for several years (especially with VMware). I think it
is not harmful to the existing Nova structure and workflow, and it will make it
easier for more people to adopt OpenStack.


On Tue, Mar 11, 2014 at 6:15 AM, Jay Pipes  wrote:

> On Mon, 2014-03-10 at 15:52 -0600, Chris Friesen wrote:
> > On 03/10/2014 02:58 PM, Jay Pipes wrote:
> > > On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
> > >> While I understand the general argument about pets versus cattle. The
> > >> question is, would you be willing to poke a few holes in the strict
> > >> "cattle" abstraction for the sake of pragmatism. Few shops are going
> > >> to make the direct transition in one move. Poking a hole in the cattle
> > >> abstraction allowing them to keep a pet VM might be very valuable to
> > >> some shops making a migration.
> > >
> > > Poking holes in cattle aside, my experience with shops that prefer the
> > > pets approach is that they are either:
> > >
> > >   * Not managing their servers themselves at all and just relying on
> some
> > > IT operations organization to manage everything for them, including all
> > > aspects of backing up their data as well as failing over and balancing
> > > servers, or,
> > >   * Hiding behind rationales of "needing to be secure" or "needing 100%
> > > uptime" or "needing no customer disruption" in order to avoid any
> change
> > > to the status quo. This is because the incentives inside legacy IT
> > > application development and IT operations groups are typically towards
> > > not rocking the boat in order to satisfy unrealistic expectations and
> > > outdated interface agreements that are forced upon them by management
> > > chains that haven't crawled out of the waterfall project management
> funk
> > > of the 1980s.
> > >
> > > Adding pet-based features to Nova would, IMO, just perpetuate the above
> > > scenarios and incentives.
> >
> > What about the cases where it's not a "preference" but rather just the
> > inertia of pre-existing systems and procedures?
>
> You mean what I wrote in the second bullet point above?
>
> > If we can get them in the door with enough support for legacy stuff,
> > then they might be easier to convince to do things the "cloud" way in
> > the future.
>
> Yes, fair point, and that's what Shawn was saying as well. Just noting
> that in my experience, the second part of the above sentence just
> doesn't happen. Once you bring them over and offer them the tools from
> their legacy environment, they aren't interested in changing. :)
>
> > If we stick with the hard-line cattle-only approach we run the risk of
> > alienating them completely since redoing everything at once is generally
> > not feasible.
>
> Yes, I understand that. I'm actually fine with including functionality
> like memory snapshotting, but only if under no circumstances does it
> negatively impact the service of compute to other tenants/users and will
> not negatively impact the scaling factor of Nova either.
>
> I'm just not as optimistic as you are that once legacy IT folks have
> their old tools, they will consider changing their habits. ;)
>
> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] No route matched for POST

2014-03-10 Thread Aaron Rosen
Hi Vijay,

I think you'd have to post your code for anyone to really help you.
Otherwise we'll just be taking shots in the dark.

Best,

Aaron


On Mon, Mar 10, 2014 at 7:22 PM, Vijay B  wrote:

> Hi,
>
> I'm trying to implement a new extension API in neutron, but am running
> into a "No route matched for POST" on the neutron service.
>
> I have followed the instructions in the link
> https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
> trying to implement this extension.
>
> The extension doesn't depend on any plugin per se, akin to security
> groups.
>
> I have defined a new file in neutron/extensions/, called Tag.py, with a
> class Tag extending class extensions.ExtensionDescriptor, like the
> documentation requires. Much like many of the other extensions already
> implemented, I define my new extension as a dictionary, with fields like
> allow_post/allow_put etc, and then pass this to the controller. I still
> however run into a no route matched for POST error when I attempt to fire
> my CLI to create a tag. I also edited the ml2 plugin file
> neutron/plugins/ml2/plugin.py to add "tags" to
> _supported_extension_aliases, but that didn't resolve the issue.
>
> It looks like I'm missing something quite minor, causing the new
> extension to not get registered, but I'm not sure what.
>
> I can provide more info/patches if anyone would like to take a look, and
> it would be very much appreciated if someone could help me out with this.
>
> Thanks!
> Regards,
> Vijay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [SWIFT] SWIFT object caching (HOT content)

2014-03-10 Thread Anbu
Hi,
I came across this blueprint
https://blueprints.launchpad.net/swift/+spec/swift-proxy-caching and a related
etherpad https://etherpad.openstack.org/p/swift-kt about SWIFT object caching.
I would like to contribute to this, and I would also like to know if anybody has
made any progress in this area.
If anyone is aware of a discussion that has happened, or is happening, on this,
kindly point me to it.

Thank you,
Babu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-10 Thread Keith Bray
I want to echo Clint's responses... We do run close to Heat master here at
Rackspace, and we'd be happy to set up a non-voting job to notify when a
review would break Heat on our cloud if that would be beneficial.  Some of
the breaks we have seen have been things that simply weren't caught in
code review (a human-intensive effort) and were specific to the way we
configure Heat for large-scale cloud use, yet applicable to the entire Heat
project and not necessarily service-provider specific.

-Keith

On 3/10/14 5:19 PM, "Clint Byrum"  wrote:

>Excerpts from Steven Hardy's message of 2014-03-05 04:24:51 -0800:
>> On Tue, Mar 04, 2014 at 02:06:16PM -0800, Clint Byrum wrote:
>> > Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
>> > > Hi all,
>> > > 
>> > > As some of you know, I've been working on the instance-users
>>blueprint[1].
>> > > 
>> > > This blueprint implementation requires three new items to be added
>>to the
>> > > heat.conf, or some resources (those which create keystone users)
>>will not
>> > > work:
>> > > 
>> > > https://review.openstack.org/#/c/73978/
>> > > https://review.openstack.org/#/c/76035/
>> > > 
>> > > So on upgrade, the deployer must create a keystone domain and
>>domain-admin
>> > > user, add the details to heat.conf, as has already been done in
>>devstack[2].
>> > > 
>> > > The changes required for this to work have already landed in
>>devstack, but
>> > > it was discussed today and Clint suggested this may be unacceptable
>> > > upgrade behavior - I'm not sure so looking for guidance/comments.
>> > > 
>> > > My plan was/is:
>> > > - Make devstack work
>> > > - Talk to tripleo folks to assist in any transition (what prompted
>>this
>> > >   discussion)
>> > > - Document the upgrade requirements in the Icehouse release notes
>>so the
>> > >   wider community can upgrade from Havana.
>> > > - Try to give a heads-up to those maintaining downstream heat
>>deployment
>> > >   tools (e.g stackforge/puppet-heat) that some tweaks will be
>>required for
>> > >   Icehouse.
>> > > 
>> > > However some have suggested there may be an openstack-wide policy
>>which
>> > > requires people's old config files to continue working indefinitely
>>on
>> > > upgrade between versions - is this right?  If so where is it
>>documented?
>> > > 
>> > 
>> > I don't think I said indefinitely, and I certainly did not mean
>> > indefinitely.
>> > 
>> > What is required though, is that we be able to upgrade to the next
>> > release without requiring a new config setting.
>> 
>> So log a warning for one cycle, then it's OK to expect the config after
>> that?
>> 
>
>Correct.
>
>> I'm still unclear if there's an openstack-wide policy on this, as the
>>whole
>> time-based release with release-notes (which all of openstack is
>>structured
>> around and adheres to) seems to basically be an uncomfortable fit for
>>folks
>> like tripleo who are trunk chasing and doing CI.
>>
>
>So we're continuous delivery focused, but we are not special. HP Cloud
>and Rackspace both do this, and really anyone running a large cloud will
>most likely do so with CD, as the value proposition is that you don't
>have big scary upgrades, you just keep incrementally upgrading and
>getting newer, better code. We can only do this if we have excellent
>testing, which upstream already does and which the public clouds all
>do privately as well of course.
>
>Changes like the one that was merged last week in Heat turn into
>stressful fire drills for those deployment teams.
>
>> > Also as we scramble to deal with these things in TripleO (as all of
>>our
>> > users are now unable to spin up new images), it is clear that it is
>>more
>> > than just a setting. One must create domain users carefully and roll
>>out
>> > a new password.
>> 
>> Such are the pitfalls of life at the bleeding edge ;)
>> 
>
>This is mildly annoying as a stance, as that's not how we've been
>operating with all of the other services of OpenStack. We're not crazy
>for wanting to deploy master and for wanting master to keep working. We
>are a _little_ crazy for wanting that without being in the gate.
>
>> Seriously though, apologies for the inconvenience - I have been asking
>>for
>> feedback on these patches for at least a month, but clearly I should've
>> asked harder.
>> 
>
>Mea culpa too, I did not realize what impact this would have until it
>was too late.
>
>> As was discussed on IRC yesterday, I think some sort of (initially
>>non-voting)
>> feedback from tripleo CI to heat gerrit is pretty much essential given
>>that
>> you're so highly coupled to us or this will just keep happening.
>> 
>
>TripleO will be in the gate some day (hopefully soon!) and then this
>will be less of an issue as you'd see failures early on, and could open
>bugs and get us to fix our issue sooner.
>
>However you'd still need to provide the backward compatibility for a
>single cycle. Servers aren't upgraded instantly, and keystone may not be
>ready for this v3/domain change until after users ha

Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-10 Thread Renat Akhmerov
On 10 Mar 2014, at 16:53, Stan Lagun  wrote:

> 
> On Mon, Mar 10, 2014 at 12:26 PM, Renat Akhmerov  
> wrote:
> 
> In case of Amazon SWF it works in the opposite way. First of all it’s a 
> language agnostic web service, then they have language specific frameworks 
> working on top of the service.
> 
> The big question here is would SWF be popular (or at least usable in 
> real-world scenarios) without language-specific framework on top? 

Legitimate question. I would say “no”. The thing is that the SWF web service was
not designed for direct usage in the first place. We believe it’s possible to use
it directly for a number of use cases, although that doesn’t cancel the idea of
having language bindings. Life will tell.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Aaron Rosen
We had this same issue with the dhcp-agent. Code was added that parallelized
the initial sync here: https://review.openstack.org/#/c/28914/ , which made
things a good bit faster if I remember correctly.  It might be worth doing
something similar for the l3-agent.
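
For what it's worth, the shape of that change is roughly the following minimal
sketch (the sync_router helper here is illustrative, not the actual agent code
from that review; it assumes eventlet, which the agents already use):

import eventlet
eventlet.monkey_patch()

def sync_router(router_id):
    # stand-in for the per-router work the agent currently does serially
    eventlet.sleep(0.1)
    print("synced router %s" % router_id)

# fan the initial sync out over a bounded pool of green threads
pool = eventlet.GreenPool(size=8)
for router_id in range(40):
    pool.spawn_n(sync_router, router_id)
pool.waitall()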

Best,

Aaron


On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon  wrote:

>
>
>
> On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon  wrote:
>
>> I looked into the python to C options and haven't found anything
>> promising yet.
>>
>>
>> I tried Cython, and RPython, on a trivial hello world app, but got
>> similar startup times to standard python.
>>
>> The one thing that did work was adding a '-S' when starting python.
>>
>>-S Disable the import of the module site and the
>> site-dependent manipulations of sys.path that it entails.
>>
>
> Using 'python -S' didn't appear to help in devstack
>
> #!/usr/bin/python -S
> # PBR Generated from u'console_scripts'
>
> import sys
> import site
> site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
>
>
>
>
>>
>> I am not sure if we can do that for rootwrap.
>>
>>
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> hello world
>>
>> real0m0.021s
>> user0m0.000s
>> sys 0m0.020s
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> hello world
>>
>> real0m0.021s
>> user0m0.000s
>> sys 0m0.020s
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> hello world
>>
>> real0m0.010s
>> user0m0.000s
>> sys 0m0.008s
>>
>> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> hello world
>>
>> real0m0.010s
>> user0m0.000s
>> sys 0m0.008s
>>
>>
>>
>> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
>> mangel...@redhat.com> wrote:
>>
>>> Hi Carl, thank you, good idea.
>>>
>>> I started reviewing it, but I will do it more carefully tomorrow morning.
>>>
>>>
>>>
>>> - Original Message -
>>> > All,
>>> >
>>> > I was writing down a summary of all of this and decided to just do it
>>> > on an etherpad.  Will you help me capture the big picture there?  I'd
>>> > like to come up with some actions this week to try to address at least
>>> > part of the problem before Icehouse releases.
>>> >
>>> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
>>> >
>>> > Carl
>>> >
>>> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo >> >
>>> > wrote:
>>> > > Hi Yuri & Stephen, thanks a lot for the clarification.
>>> > >
>>> > > I'm not familiar with unix domain sockets at low level, but I
>>> wonder
>>> > > if authentication could be achieved just with permissions (only
>>> users in
>>> > > group "neutron" or group "rootwrap" accessing this service.
>>> > >
>>> > > I find it an interesting alternative, to the other proposed
>>> solutions, but
>>> > > there are some challenges associated with this solution, which could
>>> make
>>> > > it
>>> > > more complicated:
>>> > >
>>> > > 1) Access control, file system permission based or token based,
>>> > >
>>> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
>>> > >if we have a simple/fast RPC mechanism we can use, it's a matter
>>> > >of serializing a dictionary.
>>> > >
>>> > > 3) client side implementation for 1 + 2.
>>> > >
>>> > > 4) It would need to accept new domain socket connections in green
>>> threads
>>> > > to
>>> > > avoid spawning a new process to handle a new connection.
>>> > >
>>> > > The advantages:
>>> > >* we wouldn't need to break the only-python-rule.
>>> > >* we don't need to rewrite/translate rootwrap.
>>> > >
>>> > > The disadvantages:
>>> > >   * it needs changes on the client side (neutron + other projects),
>>> > >
>>> > >
>>> > > Cheers,
>>> > > Miguel Ángel.
>>> > >
>>> > >
>>> > >
>>> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>>> > >>
>>> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>>> > >> mailto:stephen.g...@theguardian.com
>>> >>
>>> > >> wrote:
>>> > >>
>>> > >> Hi,
>>> > >>
>>> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
>>> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>>> > >> listening on a unix socket for execution requests.  This seems
>>> like
>>> > >> a reasonably sensible idea to me.
>>> > >>
>>> > >>
>>> > >> Yes, you're right.
>>> > >>
>>> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>> > >>
>>> > >>
>>> > >> I thought of this option, but didn't consider it, as It's
>>> somehow
>>> > >> risky to expose an RPC end executing priviledged (even
>>> filtered)
>>> > >> commands.
>>> > >>
>>> > >>
>>> > >> The subprocess module has some means to do RPC securely over UNIX
>>> sockets.
>>> > >> It does this by passing some token along with messages. It should be
>>> > >> secure because with UNIX sockets we don't need anything stronger
>>> since
>>> > >> MITM attacks are not possible.
>>> > >>
>>> > >> If I'm not wrong, once you have credentials for messaging,
>>> you can
>>> > >> send messages to any end, even filtered

Re: [openstack-dev] MuranoPL questions?

2014-03-10 Thread Renat Akhmerov
Although a little bit verbose, it makes a lot of sense to me.

@Joshua,

Even assuming Python could be sandboxed, and whatever else is needed to be 
able to use it as a DSL (for something like Mistral, Murano or Heat) is done, why 
do you think Python would be a better alternative for people who know 
neither these new DSLs nor Python itself? Especially given the fact that 
Python has A LOT of things that they’d never use. I know many people who have 
been programming in Python for a while, and they admit they don’t know all the 
nuances of Python and actually use 30-40% of its capabilities, and that is not 
even in domain-specific development. So narrowing the feature set that a language 
provides and limiting it to a certain domain vocabulary is what helps people 
solve the tasks of that specific domain much more easily and in the most 
expressive, natural way, without having to learn tons and tons of details that a 
general purpose language (GPL, hah :) ) provides (btw, the reason thick books get 
written).

I agree with Stan: if you begin to use a technology you’ll have to learn 
something anyway, be it the TaskFlow API and principles or a DSL. A well-designed 
DSL just encapsulates the essential principles of the system it is used for. By 
learning the DSL you’re learning the system itself, as simple as that.

Renat Akhmerov
@ Mirantis Inc.



On 10 Mar 2014, at 05:35, Stan Lagun  wrote:

> > I'd be very interested in knowing the resource controls u plan to add. 
> > Memory, CPU...
> We haven't discussed it yet. Any suggestions are welcomed
> 
> > I'm still trying to figure out where something like 
> > https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
> >  would be beneficial, why not > just spend effort sand boxing lua, 
> > python... Instead of spending effort on creating a new language and then 
> > having to sandbox it as well... Especially if u picked languages that are 
> > made to be  sandboxed from the start (not python)...
> 
> 1. See my detailed answer in Mistral thread why haven't we used any of those 
> languages. There are many reasons besides sandboxing.
> 
> 2. You don't need to sandbox MuranoPL. Sandboxing is restricting some 
> operations. In MuranoPL ALL operations (including operators in expressions, 
> functions, methods etc.) are just those that you explicitly provided. So 
> there is nothing to restrict. There are no builtins that throw 
> AccessViolationError
> 
> 3. Most of the value of MuranoPL comes not from the workflow code but from 
> class declarations. In all OOP languages classes are just a convenient way to 
> organize your code. There are classes that represent real-life objects and 
> classes that are nothing more than data structures, DTOs etc. In Murano, 
> classes in MuranoPL are deployable entities like Heat resources, application 
> components, services etc. In the dashboard UI the user works with those 
> entities. He (in the UI!) creates instances of those classes, fills in their 
> property values, and binds objects together (assigns one object to a property 
> of another). And this is done without even a single line of MuranoPL being 
> executed! That is possible because everything in MuranoPL is subject to 
> declaration, and because it is just plain YAML, anyone can easily extract 
> those declarations from MuranoPL classes.
> Now suppose it was Python instead of MuranoPL. Then you would have to parse 
> *.py files to get the list of declared classes (without executing anything). 
> Suppose that you managed to solve this somehow. Probably you wrote a regexp 
> that finds all class declarations in text files. Are you done? No! There are 
> no property (attribute) declarations in Python. You cannot infer all 
> possible class attributes just from the class declaration. You can try to parse 
> the __init__ method to find attribute initialization code, but then you find 
> that you cannot infer property types. You would not know what values the class 
> expects in those attributes, what the constraints are, etc. Besides, there are 
> plenty of ways to fool such naive algorithms. Classes can be created using the 
> type() built-in function without declarations at all. Attributes may be 
> initialized anywhere. Everything can be done at runtime.
> As you can see, Python is not good for the task. And all of this is true for 
> all of the languages you mentioned. The reason is fundamental - all of those 
> languages are dynamic.
> So maybe you can take some statically-typed language like C++ or Java to do 
> the job? Again, no! There are other problems with those types of languages. 
> Have you ever seen a statically-typed embeddable language? Those languages 
> require compilation and linking, binary distribution of classes, handling of 
> versioning problems, dependencies on 3rd-party libraries etc. This is a 
> hell you don't want to step into. But even in statically typed languages there 
> are no contracts. You know the list of properties and their types. But 
> knowing that class foo h
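
To illustrate the point about dynamic class creation above, here is a small
hedged Python example (purely illustrative, not from Murano's code):

# Classes built at runtime with type() are invisible to any static scan of
# *.py files for "class Foo:" declarations, so no reliable list of properties
# can be extracted without executing the code.
def make_class(name, fields):
    def __init__(self, **kwargs):
        for field in fields:
            setattr(self, field, kwargs.get(field))
    return type(name, (object,), {'__init__': __init__})

DemoInstance = make_class('DemoInstance', ['name', 'flavor', 'image'])
obj = DemoInstance(name='demo', flavor='m1.small', image='cirros')
print(type(obj).__name__, obj.name, obj.flavor)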

Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] Q&A meeting today at 14:00 EST / 18:00 UTC

2014-03-10 Thread trinath.soman...@freescale.com
+1

Attending



--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, March 10, 2014 9:30 PM
To: OpenStack Development Mailing List; openstack-infra
Subject: [OpenStack-Infra] [3rd party testing] Q&A meeting today at 14:00 EST / 
18:00 UTC

Hi Stackers,

We'll be having our second weekly Q&A/workshop session around third party 
testing today at 14:00 EST / 18:00 UTC in #openstack-meeting on Freenode IRC. 
See you all there.

Best,
-jay


___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-10 Thread Gordon Chung
> I'd like to nominate Ildikó Váncsa and Nadya Privalova as ceilometer

+1, thanks for the effort so far.

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler sub-group meeting agenda 3/11

2014-03-10 Thread Dugger, Donald D
All - Note that Daylight Saving Time started in the US.  This meeting remains 
at 1500 UTC; adjust your local time appropriately.



1) No-db scheduler

2) Code forklift

3) Icehouse BPs

4) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-10 Thread Zhangleiqiang
Hi all,



Besides the "soft-delete" state for volumes, I think there is need for 
introducing another "fake delete" state for volumes which have snapshot.



Currently OpenStack refuses delete requests for volumes which have snapshots. 
However, we have no way to limit users to using only a specific snapshot 
rather than the original volume, because the original volume is 
always visible to the users.



So I think we can permit users to delete volumes which have snapshots, and mark 
the volume with a "fake delete" state. When all of the snapshots of the volume 
have been deleted, the original volume will be removed automatically.





Any thoughts? Welcome any advices.



--
zhangleiqiang

Best Regards

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Thursday, March 06, 2014 8:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete 
protection



On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt 
mailto:j...@johngarbutt.com>> wrote:
On 6 March 2014 08:50, zhangyu (AI) 
mailto:zhangy...@huawei.com>> wrote:
> It seems to be an interesting idea. In fact, a China-based public IaaS, 
> QingCloud, has provided a similar feature
> for their virtual servers. Within 2 hours after a virtual server is deleted, 
> the server owner can decide whether
> or not to cancel this deletion and recycle that "deleted" virtual server.
>
> People make mistakes, and such a feature helps in urgent cases. Any ideas 
> here?
Nova has soft_delete and restore for servers. That sounds similar?

John

>
> -Original Message-
> From: Zhangleiqiang 
> [mailto:zhangleiqi...@huawei.com]
> Sent: Thursday, March 06, 2014 2:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Nova][Cinder] Feature about volume delete protection
>
> Hi all,
>
> Currently OpenStack provides the delete volume function to the user.
> But it seems there is no protection against an accidental delete operation.
>
> As we know, the data in a volume may be very important and valuable.
> So it's better to provide a method for the user to avoid accidentally 
> deleting a volume.
>
> For example:
> We can provide a safe delete for the volume.
> The user can specify how long the volume deletion will be delayed (before it 
> is actually deleted) when deleting the volume.
> Before the volume is actually deleted, the user can cancel the delete 
> operation and get the volume back.
> After the specified time, the volume will actually be deleted by the system.
>
> Any thoughts? Any advice is welcome.
>
> Best regards to you.
>
>
> --
> zhangleiqiang
>
> Best Regards
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think a soft-delete for Cinder sounds like a neat idea.  You should file a BP 
that we can target for Juno.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][QOS]How is the BP about ml-qos going?

2014-03-10 Thread Yuzhou (C)
Hi stackers,

The blueprint about ml2-qos has been stuck in code review for a long time.
Why hasn't the QoS implementation been merged into the neutron master branch?
Can anyone who knows the history help me, or give me a hint on how to find the 
discussion mail?

Thanks.

Zhou Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] No route matched for POST

2014-03-10 Thread Vijay B
Hi,

I'm trying to implement a new extension API in neutron, but am running into
a "No route matched for POST" on the neutron service.

I have followed the instructions in the link
https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
trying to implement this extension.

The extension doesn't depend on any plugin per se, akin to security groups.

I have defined a new file in neutron/extensions/, called Tag.py, with a
class Tag extending class extensions.ExtensionDescriptor, like the
documentation requires. Much like many of the other extensions already
implemented, I define my new extension as a dictionary, with fields like
allow_post/allow_put etc, and then pass this to the controller. I still
however run into a no route matched for POST error when I attempt to fire
my CLI to create a tag. I also edited the ml2 plugin file
neutron/plugins/ml2/plugin.py to add "tags" to
_supported_extension_aliases, but that didn't resolve the issue.

It looks like I'm missing something quite minor, causing the new
extension to not get registered, but I'm not sure what.

I can provide more info/patches if anyone would like to take a look, and it
would be very much appreciated if someone could help me out with this.
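
For concreteness, the overall shape of what I have is roughly the following (a
hedged sketch only, not my actual patch; module paths and helper signatures are
from memory and may not be exact):

# neutron/extensions/tag.py
from neutron.api import extensions
from neutron.api.v2 import base
from neutron import manager

RESOURCE_ATTRIBUTE_MAP = {
    'tags': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'name': {'allow_post': True, 'allow_put': True, 'is_visible': True},
        'tenant_id': {'allow_post': True, 'allow_put': False,
                      'is_visible': True},
    }
}

class Tag(extensions.ExtensionDescriptor):

    @classmethod
    def get_name(cls):
        return "Tags"

    @classmethod
    def get_alias(cls):
        # must match the alias added to _supported_extension_aliases
        return "tags"

    @classmethod
    def get_description(cls):
        return "Attach tags to Neutron resources"

    @classmethod
    def get_namespace(cls):
        return "http://docs.openstack.org/ext/tags/api/v1.0"

    @classmethod
    def get_updated(cls):
        return "2014-03-10T00:00:00-00:00"

    @classmethod
    def get_resources(cls):
        # if no ResourceExtension is returned here (or the extension never
        # loads), there is no /tags route and POST fails with
        # "No route matched"
        plugin = manager.NeutronManager.get_plugin()
        controller = base.create_resource(
            'tags', 'tag', plugin, RESOURCE_ATTRIBUTE_MAP['tags'])
        return [extensions.ResourceExtension('tags', controller)]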

Thanks!
Regards,
Vijay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting Tuesday 11 Mar 19:00 UTC

2014-03-10 Thread Brian Curtin
Reminder that tomorrow is the python-openstacksdk meeting. Double
reminder that, where applicable, this is one hour later thanks to
Daylight Saving Time.

https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting

Date/Time: Tuesday 11 March - 19:00 UTC / 1400 CDT

IRC channel: #openstack-meeting-3

Meeting Agenda:
https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK

About the project:
https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK

If you have questions, all of us lurk in #openstack-sdks on freenode!

There isn't much of an agenda for this meeting as there wasn't a
ton of movement last week. It will likely consist of discussion around
this review to kick things off:
https://review.openstack.org/#/c/79435/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Gerrit maintenance 2014-03-12 at 12:00 UTC

2014-03-10 Thread Jeremy Stanley
The Gerrit service on review.openstack.org will be unavailable
briefly on Wednesday, March 12 between 12:00 to 12:30 UTC:

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12&am=30 >

This maintenance will cover renaming for Sahara (formerly Savanna)
projects and possibly also some StackForge projects which have put
in requests for renames. Updates will be provided in the
#openstack-infra and #openstack-dev channels on the Freenode IRC
network. Once completed, I'll follow up to this mailing list thread
as well.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-10 Thread Lu, Lianhao
+1 for both. 

Best Regards,
Lianhao

> -Original Message-
> From: Eoghan Glynn [mailto:egl...@redhat.com]
> Sent: Monday, March 10, 2014 5:15 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya 
> Privalova to ceilometer-core
> 
> 
> Folks,
> 
> Time for some new blood on the ceilometer core team.
> 
> I'd like to nominate Ildikó Váncsa and Nadya Privalova as ceilometer
> cores in recognition of their contributions([1], [2]) over the Icehouse
> cycle, and looking forward to their continued participation for Juno.
> 
> Both showed up with design summit proposals in HK, and have since
> demonstrated staying power as project contributors, proving themselves
> as eagle-eyed reviewers.
> 
> Contribution highlights:
> 
>  * Ildikó co-authored the complex query API extension with Balazs Gibizer
>and showed a lot of tenacity in pushing this extensive blueprint
>through gerrit over multiple milestones.
> 
>  * Nadya has shown much needed love to the previously neglected HBase
>driver bringing it much closer to feature parity with the other
>supported DBs, and has also driven the introduction of ceilometer
>coverage in Tempest.
> 
> This nomination doubles as my +1 for both ildikov & nprivalova.
> 
> Ceilometer cores, please respond with your yeas or nays on list.
> 
> Cheers,
> Eoghan
> 
> 
> [1] http://bit.ly/ildikov-icehouse-reviews
> http://bit.ly/ildikov-icehouse-commits
> 
> [2] http://bit.ly/nprivalova-icehouse-reviews
> http://bit.ly/nprivalova-icehouse-commits
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-10 Thread Robert Collins
On 11 March 2014 12:20, Zane Bitter  wrote:

>> Except nose can make them all the same file descriptor and let
>> everything multiplex together. Nose isn't demuxing arbitrary numbers
>> of file descriptors from arbitrary numbers of processes.
>
>
> Can't each subunit process do the same thing?

Roughly yes.

> As a user, here's how I want it to work:
>  - Each subunit process works like nose - multiplexing the various streams
> of output together and associating it with a particular test - except that
> nothing is written to the console but instead returned to testr in subunit
> format.

subunit certainly supports that. We don't have glue in subunit.run to
intercept stdout and stderr automatically and do the association.
There are some nuances in getting that *right* that are fairly
important (such as not breaking tunnelled debugger use), but I'm very
open to the idea of such a thing existing. testtools.run shouldn't
multiplex like that (it interferes with pdb) but should perhaps permit
it.

>  - testr reads the subunit data and saves it to the test repository.

Which it does.

>  - testr prints a report to the console based on the data it just
> received/saved.

Which it does.

> How it actually seems to work:
>  - A magic pixie creates a TestCase class with a magic incantation to
> capture your stdout/stderr/logging without breaking other test runners.

testr doesn't actually have anything to do with this - its a
environment variable in .testr.conf that OpenStack uses. OpenStack has
a particularly egregious logging environment, and its inherited nose
based test had no friendly capturing/management of that.
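
To make that concrete, the relevant part of an OpenStack-style .testr.conf
typically looks something like this (a hedged sketch; the discovery path and
defaults vary from project to project):

[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
             ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list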

>  - Or they don't! You're hosed. The magic incantation is undocumented.
>  - You change all of your TestCases to inherit from the class with the magic
> pixie dust.
>  - Each subunit process associates the various streams of output (if you set
> it up to) with a particular test, but keeps them separate so that if you
> want to figure out the order of events you have to direct them all to the
> same channel - which, in practice, means you can only use logging (since
> some of the events you are interested in probably already exist in the code
> as logs).

stdout is buffered by default, stderr is unbuffered by default,
logging is on the other side of a mutex - if you have any concurrency
going on in your test process (which many do), there is absolutely no
guarantee of relative ordering between different sources unless that's
done in the generating process - something subunit.run may be able to
help with (see above).

>  - when you want to debug a test, you have to do all the tedious logging setup
> if it doesn't already exist in the file. It probably won't, because flake8
> would have made you delete it unless it's being used already.
>  - testr reads the subunit data and saves it to the test repository.
>  - testr prints a report to the console based on the data it just
> received/saved, though parts of it look like a raw data dump.

Which bits look raw? It should only show text/* attachments, non-text
should be named but not dumped.

> While there may be practical reasons why it currently works like the latter,
> I would submit that there is no technical reason it could not work like the
> former. In particular, there is nothing about the concept of running the
> tests in parallel that would prevent it, just as there is nothing about what
> nose does that would prevent two copies of nose from running at the same
> time on different sets of tests.

The key bit that isn't visible isn't implemented yet, which is a
desire to demultiplex stdin from the testr console to the backends, to
permit pdb usage in tests. It works in single-worker mode today
(across processes) but not in multi-worker mode.

> It just seems bizarre to me that the _tests_ have to figure out what test
> runner they are being run by and redirect their output to the correct
> location to oblige it. Surely the point of a test runner is to do the Right
> Thing(TM) for any test, regardless of what it knows about the test runner.

Test runners should provide a homogeneous, consistent environment for
tests - for sure.

>>> BTW, since I'm on the subject, testr would be a lot more
>>> confidence-inspiring if running `testr failing` immediately after running
>>> `testr` reported the same number of failures, or indeed if running
>>> `testr`

It should. There was a double-accounting bug in testtools some months
back, but you should get the same failure count in testr last as from
testr run, since it pulls from the same data.

> It makes sense for the case where the test runner has died without reporting
> data, but why should it be reported as a separate failure when it has
> reported data that testr has regarded as valid enough to use and display?

testr synthesises failed tests for a backend in two cases:
a) if a test starts but doesn't finish, that is presumed to be a
backend failure regardless of backend exit code (e.g. because of code
that cal

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Joe Gordon
On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon  wrote:

> I looked into the python to C options and haven't found anything promising
> yet.
>
>
> I tried Cython, and RPython, on a trivial hello world app, but got similar
> startup times to standard python.
>
> The one thing that did work was adding a '-S' when starting python.
>
>-S Disable the import of the module site and the site-dependent
> manipulations of sys.path that it entails.
>

Using 'python -S' didn't appear to help in devstack

#!/usr/bin/python -S
# PBR Generated from u'console_scripts'

import sys
import site
site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')




>
> I am not sure if we can do that for rootwrap.
>
>
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
>
> real0m0.021s
> user0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
> hello world
>
> real0m0.021s
> user0m0.000s
> sys 0m0.020s
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
>
> real0m0.010s
> user0m0.000s
> sys 0m0.008s
>
> jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
> hello world
>
> real0m0.010s
> user0m0.000s
> sys 0m0.008s
>
>
>
> On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
> mangel...@redhat.com> wrote:
>
>> Hi Carl, thank you, good idea.
>>
>> I started reviewing it, but I will do it more carefully tomorrow morning.
>>
>>
>>
>> - Original Message -
>> > All,
>> >
>> > I was writing down a summary of all of this and decided to just do it
>> > on an etherpad.  Will you help me capture the big picture there?  I'd
>> > like to come up with some actions this week to try to address at least
>> > part of the problem before Icehouse releases.
>> >
>> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
>> >
>> > Carl
>> >
>> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
>> > wrote:
>> > > Hi Yuri & Stephen, thanks a lot for the clarification.
>> > >
>> > > I'm not familiar with unix domain sockets at low level, but I wonder
>> > > if authentication could be achieved just with permissions (only users
>> in
>> > > group "neutron" or group "rootwrap" accessing this service.
>> > >
>> > > I find it an interesting alternative, to the other proposed
>> solutions, but
>> > > there are some challenges associated with this solution, which could
>> make
>> > > it
>> > > more complicated:
>> > >
>> > > 1) Access control, file system permission based or token based,
>> > >
>> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
>> > >if we have a simple/fast RPC mechanism we can use, it's a matter
>> > >of serializing a dictionary.
>> > >
>> > > 3) client side implementation for 1 + 2.
>> > >
>> > > 4) It would need to accept new domain socket connections in green
>> threads
>> > > to
>> > > avoid spawning a new process to handle a new connection.
>> > >
>> > > The advantages:
>> > >* we wouldn't need to break the only-python-rule.
>> > >* we don't need to rewrite/translate rootwrap.
>> > >
>> > > The disadvantages:
>> > >   * it needs changes on the client side (neutron + other projects),
>> > >
>> > >
>> > > Cheers,
>> > > Miguel Ángel.
>> > >
>> > >
>> > >
>> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>> > >>
>> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> > >> mailto:stephen.g...@theguardian.com>>
>> > >> wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
>> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>> > >> listening on a unix socket for execution requests.  This seems
>> like
>> > >> a reasonably sensible idea to me.
>> > >>
>> > >>
>> > >> Yes, you're right.
>> > >>
>> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>> > >>
>> > >>
>> > >> I thought of this option, but didn't consider it, as It's
>> somehow
>> > >> risky to expose an RPC end executing priviledged (even
>> filtered)
>> > >> commands.
>> > >>
>> > >>
>> > >> The subprocess module has some means to do RPC securely over UNIX
>> sockets.
>> > >> It does this by passing some token along with messages. It should be
>> > >> secure because with UNIX sockets we don't need anything stronger
>> since
>> > >> MITM attacks are not possible.
>> > >>
>> > >> If I'm not wrong, once you have credentials for messaging,
>> you can
>> > >> send messages to any end, even filtered, I somehow see this
>> as a
>> > >> higher
>> > >> risk option.
>> > >>
>> > >>
>> > >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> > >> local UNIX socket with very simple RPC over it.
>> > >>
>> > >> And btw, if we add RPC in the middle, it's possible that all
>> those
>> > >> system call delays increase, or don't decrease all it'll be
>> > >> desirable.
>> > >>
>> > >>
>> > >> Every call to rootwrap would require the following.
>> > >>
>

Re: [openstack-dev] [Nova] API weekly meeting

2014-03-10 Thread Christopher Yeoh
On Tue, 11 Mar 2014 10:00:15 +1030
Christopher Yeoh  wrote:

> Hi,
> 
> Thanks for all the responses. I think that Friday UTC 00:00
> is probably going to be workable for everyone and there is a meeting
> slot free. That would translate to:
> 
> UTC 00:00
> AEDT 10:30

Sorry, that should be (I'm in a weird timezone and got confused):

AEDT 11:00
ACDT 10:30 

> Japan 09:00
> China 08:00
> EST 20:00 (Thu)
> 
> If this doesn't work for anyone who really wants to make it please let
> me know, otherwise we'll start this Friday.
> 
> Regards,
> 
> Chris
> 
> On Thu, 06 Mar 2014 23:36:30 -0500
> Jay Pipes  wrote:
> 
> > On Fri, 2014-03-07 at 11:15 +1030, Christopher Yeoh wrote:
> > > Hi,
> > > 
> > > I'd like to start a weekly IRC meeting for those interested in
> > > discussing Nova API issues. I think it would be a useful forum
> > > for:
> > > 
> > > - People to keep up with what work is going on the API and where
> > > its headed. 
> > > - Cloud providers, SDK maintainers and users of the REST API to
> > > provide feedback about the API and what they want out of it.
> > > - Help coordinate the development work on the API (both v2 and v3)
> > > 
> > > If you're interested in attending please respond and include what
> > > time zone you're in so we can work out the best time to meet.
> > 
> > Very much interested. I'll make room in my schedule to attend most
> > any time other than 2-6am EST (7-11UTC).
> > 
> > Best,
> > -jay
> > 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-10 Thread Clark Boylan
On Mon, Mar 10, 2014 at 4:20 PM, Zane Bitter  wrote:
> On 10/03/14 16:04, Clark Boylan wrote:
>>
>> On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter  wrote:
>>>
>>> Thanks Clark for this great write-up. However, I think the solution to
>>> the
>>> problem in question is richer commands and better output formatting, not
>>> discarding information.
>>>
>>>
>>> On 07/03/14 16:30, Clark Boylan wrote:


 But running tests in parallel introduces some fun problems. Like where
 do you send logging and stdout output. If you send it to the console
 it will be interleaved and essentially useless. The solution to this
 problem (for which I am probably to blame) is to have each test
 collect the logging, stdout, and stderr associated to that test and
 attach it to that tests subunit reporting. This way you get all of the
 output associated with a single test attached to that test and don't
 have crazy interleaving that needs to be demuxed. The capturing of
>>>
>>>
>>>
>>> This is not really a problem unique to parallel test runners. Printing to
>>> the console is just not a great way to handle stdout/stderr in general
>>> because it messes up the output of the test runner, and nose does exactly
>>> the same thing as testr in collecting them - except that nose combines
>>> messages from the 3 sources and prints the output for human consumption,
>>> rather than in separate groups surrounded by lots of {{{random braces}}}.
>>>
>> Except nose can make them all the same file descriptor and let
>> everything multiplex together. Nose isn't demuxing arbitrary numbers
>> of file descriptors from arbitrary numbers of processes.
>
>
> Can't each subunit process do the same thing?
>
> As a user, here's how I want it to work:
>  - Each subunit process works like nose - multiplexing the various streams
> of output together and associating it with a particular test - except that
> nothing is written to the console but instead returned to testr in subunit
> format.
>  - testr reads the subunit data and saves it to the test repository.
>  - testr prints a report to the console based on the data it just
> received/saved.
>
> How it actually seems to work:
>  - A magic pixie creates a TestCase class with a magic incantation to
> capture your stdout/stderr/logging without breaking other test runners.
>  - Or they don't! You're hosed. The magic incantation is undocumented.
>  - You change all of your TestCases to inherit from the class with the magic
> pixie dust.
>  - Each subunit process associates the various streams of output (if you set
> it up to) with a particular test, but keeps them separate so that if you
> want to figure out the order of events you have to direct them all to the
> same channel - which, in practice, means you can only use logging (since
> some of the events you are interested in probably already exist in the code
> as logs).
If you want this behavior then do as nose does and give logging,
stdout, and stderr the same file descriptor to write to. (I think the
current implementation uses StringIO as a sink, just give them all the
same sink).
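A rough sketch of what I mean, using just the standard library (the class name
here is made up for illustration, it isn't what the base test classes actually do):

import io
import logging
import sys

class CommonSink(object):
    """Send stdout, stderr and logging to one shared StringIO buffer."""
    def __init__(self):
        self.buffer = io.StringIO()
        self._handler = logging.StreamHandler(self.buffer)

    def __enter__(self):
        self._stdout, self._stderr = sys.stdout, sys.stderr
        sys.stdout = sys.stderr = self.buffer
        logging.getLogger().addHandler(self._handler)
        return self

    def __exit__(self, *exc):
        sys.stdout, sys.stderr = self._stdout, self._stderr
        logging.getLogger().removeHandler(self._handler)

# Everything the test writes ends up interleaved in one place:
with CommonSink() as sink:
    print("from stdout")
    logging.getLogger(__name__).error("from logging")
print(sink.buffer.getvalue())
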
>  - when you want to debug a test, you have to do all the tedious logging setup
> if it doesn't already exist in the file. It probably won't, because flake8
> would have made you delete it unless it's being used already.
I don't understand this, you use oslo logging and the fake logger fixture, done.
>  - testr reads the subunit data and saves it to the test repository.
>  - testr prints a report to the console based on the data it just
> received/saved, though parts of it look like a raw data dump.
>
> While there may be practical reasons why it currently works like the latter,
> I would submit that there is no technical reason it could not work like the
> former. In particular, there is nothing about the concept of running the
> tests in parallel that would prevent it, just as there is nothing about what
> nose does that would prevent two copies of nose from running at the same
> time on different sets of tests.
>
>
 this data is toggleable in the test suite using environment variables
 and is off by default so that when you are not using testr you don't
 get this behavior [0]. However we seem to have neglected log capture
 toggles.
>>>
>>>
>>>
>>> Oh wow, there is actually a way to get the stdout and stderr? Fantastic!
>>> Why
>>> on earth are these disabled?
>>>
>> See above, testr has to deal with multiple writers to stdout and
>> stderr, you really don't want them all going to the same place when
>> using testr (which is why stdout and stderr are captured when running
>> testr but not otherwise).
>
>
> Ah, OK, I think I understand now. testr passes the environment variables
> automatically, so you only have to know the magic incantation at the time
> you're writing the test, not when you're running it.
>
>
>>> Please, please, please don't turn off the logging too. That's the only
>>> tool
>>> left for debugging now that stdout goes into a black hole.

Re: [openstack-dev] [Nova] API weekly meeting

2014-03-10 Thread Christopher Yeoh
Hi,

Thanks for all the responses. I think that Friday UTC 00:00
is probably going to be workable for everyone and there is a meeting
slot free. That would translate to:

UTC 00:00
AEDT 10:30
Japan 09:00
China 08:00
EST 20:00 (Thu)

If this doesn't work for anyone who really wants to make it please let
me know, otherwise we'll start this Friday.

Regards,

Chris

On Thu, 06 Mar 2014 23:36:30 -0500
Jay Pipes  wrote:

> On Fri, 2014-03-07 at 11:15 +1030, Christopher Yeoh wrote:
> > Hi,
> > 
> > I'd like to start a weekly IRC meeting for those interested in
> > discussing Nova API issues. I think it would be a useful forum for:
> > 
> > - People to keep up with what work is going on the API and where its
> >   headed. 
> > - Cloud providers, SDK maintainers and users of the REST API to
> > provide feedback about the API and what they want out of it.
> > - Help coordinate the development work on the API (both v2 and v3)
> > 
> > If you're interested in attending please respond and include what
> > time zone you're in so we can work out the best time to meet.
> 
> Very much interested. I'll make room in my schedule to attend most any
> time other than 2-6am EST (7-11UTC).
> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-10 Thread Zane Bitter

On 10/03/14 16:04, Clark Boylan wrote:

On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter  wrote:

Thanks Clark for this great write-up. However, I think the solution to the
problem in question is richer commands and better output formatting, not
discarding information.


On 07/03/14 16:30, Clark Boylan wrote:


But running tests in parallel introduces some fun problems. Like where
do you send logging and stdout output. If you send it to the console
it will be interleaved and essentially useless. The solution to this
problem (for which I am probably to blame) is to have each test
collect the logging, stdout, and stderr associated to that test and
attach it to that tests subunit reporting. This way you get all of the
output associated with a single test attached to that test and don't
have crazy interleaving that needs to be demuxed. The capturing of



This is not really a problem unique to parallel test runners. Printing to
the console is just not a great way to handle stdout/stderr in general
because it messes up the output of the test runner, and nose does exactly
the same thing as testr in collecting them - except that nose combines
messages from the 3 sources and prints the output for human consumption,
rather than in separate groups surrounded by lots of {{{random braces}}}.


Except nose can make them all the same file descriptor and let
everything multiplex together. Nose isn't demuxing arbitrary numbers
of file descriptors from arbitrary numbers of processes.


Can't each subunit process do the same thing?

As a user, here's how I want it to work:
 - Each subunit process works like nose - multiplexing the various 
streams of output together and associating it with a particular test - 
except that nothing is written to the console but instead returned to 
testr in subunit format.

 - testr reads the subunit data and saves it to the test repository.
 - testr prints a report to the console based on the data it just 
received/saved.


How it actually seems to work:
 - A magic pixie creates a TestCase class with a magic incantation to 
capture your stdout/stderr/logging without breaking other test runners.

 - Or they don't! You're hosed. The magic incantation is undocumented.
 - You change all of your TestCases to inherit from the class with the 
magic pixie dust.
 - Each subunit process associates the various streams of output (if 
you set it up to) with a particular test, but keeps them separate so 
that if you want to figure out the order of events you have to direct 
them all to the same channel - which, in practice, means you can only 
use logging (since some of the events you are interested in probably 
already exist in the code as logs).
 - when you want to debug a test, you have to do all the tedious logging 
setup if it doesn't already exist in the file. It probably won't, 
because flake8 would have made you delete it unless it's being used already.

 - testr reads the subunit data and saves it to the test repository.
 - testr prints a report to the console based on the data it just 
received/saved, though parts of it look like a raw data dump.


While there may be practical reasons why it currently works like the 
latter, I would submit that there is no technical reason it could not 
work like the former. In particular, there is nothing about the concept 
of running the tests in parallel that would prevent it, just as there is 
nothing about what nose does that would prevent two copies of nose from 
running at the same time on different sets of tests.



this data is toggleable in the test suite using environment variables
and is off by default so that when you are not using testr you don't
get this behavior [0]. However we seem to have neglected log capture
toggles.



Oh wow, there is actually a way to get the stdout and stderr? Fantastic! Why
on earth are these disabled?


See above, testr has to deal with multiple writers to stdout and
stderr, you really don't want them all going to the same place when
using testr (which is why stdout and stderr are captured when running
testr but not otherwise).


Ah, OK, I think I understand now. testr passes the environment variables 
automatically, so you only have to know the magic incantation at the 
time you're writing the test, not when you're running it.
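For reference, the incantation is roughly the following (from memory, this is
the pattern the oslo/nova-style base test classes use with the fixtures
library; details vary per project):

import logging
import os

import fixtures
import testtools

_TRUE_VALUES = ('True', 'true', '1', 'yes')

class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        # Capture only when the runner asks for it (testr sets these
        # variables via .testr.conf), so plain nose/pdb runs still print
        # to the console.
        if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES:
            stdout = self.useFixture(fixtures.StringStream('stdout')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
        if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES:
            stderr = self.useFixture(fixtures.StringStream('stderr')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
        # And the logging side of the same idea:
        self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))
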



Please, please, please don't turn off the logging too. That's the only tool
left for debugging now that stdout goes into a black hole.


Logging goes into the same "black hole" today, I am suggesting that we
make this toggleable like we have made stdout and stderr capturing
toggleable. FWIW this isn't a black hole it is all captured on disk
and you can refer back to it at any time (the UI around doing this
could definitely be better though).


Hmm, now that you mention it, I remember Clint did the setup work in 
Heat to get the logging working. So maybe we have to do the same for 
stdout and stderr. At the moment they really do go into a black hole - 
we can't see them in the testr output at all a

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Joe Gordon
I looked into the python to C options and haven't found anything promising
yet.


I tried Cython and RPython on a trivial hello world app, but got similar
startup times to standard Python.

The one thing that did work was adding a '-S' when starting python.

   -S Disable the import of the module site and the site-dependent
manipulations of sys.path that it entails.

I am not sure if we can do that for rootwrap.


jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
hello world

real    0m0.021s
user    0m0.000s
sys     0m0.020s
jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s

jogo@dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
hello world

real    0m0.010s
user    0m0.000s
sys     0m0.008s



On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
mangel...@redhat.com> wrote:

> Hi Carl, thank you, good idea.
>
> I started reviewing it, but I will do it more carefully tomorrow morning.
>
>
>
> - Original Message -
> > All,
> >
> > I was writing down a summary of all of this and decided to just do it
> > on an etherpad.  Will you help me capture the big picture there?  I'd
> > like to come up with some actions this week to try to address at least
> > part of the problem before Icehouse releases.
> >
> > https://etherpad.openstack.org/p/neutron-agent-exec-performance
> >
> > Carl
> >
> > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
> > wrote:
> > > Hi Yuri & Stephen, thanks a lot for the clarification.
> > >
> > > I'm not familiar with unix domain sockets at low level, but , I wonder
> > > if authentication could be achieved just with permissions (only users
> in
> > > group "neutron" or group "rootwrap" accessing this service.
> > >
> > > I find it an interesting alternative, to the other proposed solutions,
> but
> > > there are some challenges associated with this solution, which could
> make
> > > it
> > > more complicated:
> > >
> > > 1) Access control, file system permission based or token based,
> > >
> > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> > >if we have a simple/fast RPC mechanism we can use, it's a matter
> > >of serializing a dictionary.
> > >
> > > 3) client side implementation for 1 + 2.
> > >
> > > 4) It would need to accept new domain socket connections in green
> threads
> > > to
> > > avoid spawning a new process to handle a new connection.
> > >
> > > The advantages:
> > >* we wouldn't need to break the only-python-rule.
> > >* we don't need to rewrite/translate rootwrap.
> > >
> > > The disadvantages:
> > >   * it needs changes on the client side (neutron + other projects),
> > >
> > >
> > > Cheers,
> > > Miguel Ángel.
> > >
> > >
> > >
> > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> > >>
> > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> > >> mailto:stephen.g...@theguardian.com>>
> > >> wrote:
> > >>
> > >> Hi,
> > >>
> > >> Given that Yuriy says explicitly 'unix socket', I dont think he
> > >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
> > >> listening on a unix socket for execution requests.  This seems
> like
> > >> a reasonably sensible idea to me.
> > >>
> > >>
> > >> Yes, you're right.
> > >>
> > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
> > >>
> > >>
> > >> I thought of this option, but didn't consider it, as It's
> somehow
> > >> risky to expose an RPC end executing priviledged (even
> filtered)
> > >> commands.
> > >>
> > >>
> > >> subprocess module have some means to do RPC securely over UNIX
> sockets.
> > >> I does this by passing some token along with messages. It should be
> > >> secure because with UNIX sockets we don't need anything stronger since
> > >> MITM attacks are not possible.
> > >>
> > >> If I'm not wrong, once you have credentials for messaging,
> you can
> > >> send messages to any end, even filtered, I somehow see this
> as a
> > >> higher
> > >> risk option.
> > >>
> > >>
> > >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
> > >> local UNIX socket with very simple RPC over it.
> > >>
> > >> And btw, if we add RPC in the middle, it's possible that all
> those
> > >> system call delays increase, or don't decrease all it'll be
> > >> desirable.
> > >>
> > >>
> > >> Every call to rootwrap would require the following.
> > >>
> > >> Client side:
> > >> - new client socket;
> > >> - one message sent;
> > >> - one message received.
> > >>
> > >> Server side:
> > >> - accepting new connection;
> > >> - one message received;
> > >> - one fork-exec;
> > >> - one message sent.
> > >>
> > >> This looks like way simpler than passing through sudo and rootwrap
> that
> > >> requires three exec's and whole lot of configuration files opened and
> > >> parsed.
> > >>
> > >> --
>

Re: [openstack-dev] OpenStack now compatible with SQLA 0.9?

2014-03-10 Thread Sean Dague
On 03/08/2014 05:53 AM, Sean Dague wrote:
> On 03/08/2014 05:00 AM, Thomas Goirand wrote:
>> Hi,
>>
>> I just tried this:
>> https://review.openstack.org/#/c/79129/
>>
>> and it seems everything works. \o/
>>
>> Shall we lift the SQLA cap right away?
> 
> Yes, the caps should only exist for things we know we can't work with.
> And SQLA looks solid now.
> 
> Please update the commit message for that change then I'm +2.

Unfortunately a devstack refactor actually prevented us from testing this
correctly, so I did a revert.

It seems that there is one last issue because Ceilometer is doing really
tricky stuff in their migrations which makes invalid assumptions on the
stability of sqla internals.

https://review.openstack.org/79482

Works around it, though others should figure out if that's quite the
right way to address it.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Michael Still
I see many examples in nova of where we still read rows with
read_deleted="yes". I think we need to see a plan for how to remove
all of those before we can progress this.

Michael

On Tue, Mar 11, 2014 at 9:06 AM, Johannes Erdfelt  wrote:
> On Mon, Mar 10, 2014, Joshua Harlow  wrote:
>> Sounds like a good idea to me.
>
> I generally think this is a good idea too.
>
>> I've never understood why we treat the DB as a LOG (keeping deleted == 0
>> records around) when we should just use a LOG (or similar system) to
>> begin with instead.
>>
>> Does anyone use the feature of switching deleted == 1 back to
>> deleted = 0? Has this worked out for u?
>
> This isn't the only potential use. It's possible that code depends on
> being able to still access deleted records.
>
> For instance, in the past we could delete an instance_type, but if an
> instance is still referencing it, code would still try to fetch it from
> the database some times.
>
> This particular example probably isn't an issue anymore since I think all
> of that has been moved to instance metadata specifically to avoid
> problems like this.
>
> That said, I think it's well worth the effort to simplify the code and
> make operators lives easier.
>
> JE
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taskflow remote worker model ftw!

2014-03-10 Thread Joshua Harlow
No dependency on celery,

Currently it's not using oslo.messaging yet since oslo.messaging is still
dependent on py2.6,py2.7 (it also wasn't released to pypi when this work
started). Taskflow is trying to retain py3.3 compatibility (as all new
libraries should) so bringing oslo.messaging[1] in at the current time
would reduce what taskflow can run with to py2.6,py2.7.

Of course this is getting better as we speak (and afaik is a
work-in-progress to fix this). As oslo.messaging matures I'm all for using
it (and deprecating the work here done with kombu). Until then, we can
think of making the backend more pluggable then it already is (having a
kombu supported one and a oslo.messaging supported one) if we need to
(hopefully the oslo.messaging py3.3 work should finish quickly?).

[1] https://wiki.openstack.org/wiki/Python3#Dependencies

-Josh

-Original Message-
From: Zane Bitter 
Organization: Red Hat
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Monday, March 10, 2014 at 2:10 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] Taskflow remote worker model ftw!

>Cool, nice work Josh & team!
>
>On 10/03/14 17:23, Joshua Harlow wrote:
>> This means that the engine no longer has to run tasks locally or in
>> threads (or greenthreads) but can run tasks on remote machines (anything
>> that can be connected to a MQ via kombu; TBD when this becomes
>> oslo.messaging).
>
>Does that reflect a dependency on Celery? Not using oslo.messaging makes
>it a non-starter for OpenStack IMO, so this would be a very high
>priority for adoption.
>
>cheers,
>Zane.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-10 Thread Matt Kassawara
Hmm... I guess the blueprint summary led me to believe that nova-network no
longer needs to hit the database.


On Mon, Mar 10, 2014 at 3:50 PM, Dan Smith  wrote:

> > https://bugs.launchpad.net/nova/+bug/1290568
>
> Thanks. Note that the objects work doesn't really imply that the service
> doesn't hit the database. In fact, nova-compute stopped hitting the
> database before we started on the objects work.
>
> Anyway, looks like there are still some direct-to-database things
> lingering in the linux_net module. I'm not sure those will get resolved
> before icehouse at this point...
>
> --Dan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Miguel Angel Ajo Pelayo
Hi Carl, thank you, good idea.

I started reviewing it, but I will do it more carefully tomorrow morning.



- Original Message -
> All,
> 
> I was writing down a summary of all of this and decided to just do it
> on an etherpad.  Will you help me capture the big picture there?  I'd
> like to come up with some actions this week to try to address at least
> part of the problem before Icehouse releases.
> 
> https://etherpad.openstack.org/p/neutron-agent-exec-performance
> 
> Carl
> 
> On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo 
> wrote:
> > Hi Yuri & Stephen, thanks a lot for the clarification.
> >
> > I'm not familiar with unix domain sockets at low level, but , I wonder
> > if authentication could be achieved just with permissions (only users in
> > group "neutron" or group "rootwrap" accessing this service.
> >
> > I find it an interesting alternative, to the other proposed solutions, but
> > there are some challenges associated with this solution, which could make
> > it
> > more complicated:
> >
> > 1) Access control, file system permission based or token based,
> >
> > 2) stdout/stderr/return encapsulation/forwarding to the caller,
> >if we have a simple/fast RPC mechanism we can use, it's a matter
> >of serializing a dictionary.
> >
> > 3) client side implementation for 1 + 2.
> >
> > 4) It would need to accept new domain socket connections in green threads
> > to
> > avoid spawning a new process to handle a new connection.
> >
> > The advantages:
> >* we wouldn't need to break the only-python-rule.
> >* we don't need to rewrite/translate rootwrap.
> >
> > The disadvantages:
> >   * it needs changes on the client side (neutron + other projects),
> >
> >
> > Cheers,
> > Miguel Ángel.
> >
> >
> >
> > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
> >>
> >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
> >> mailto:stephen.g...@theguardian.com>>
> >> wrote:
> >>
> >> Hi,
> >>
> >> Given that Yuriy says explicitly 'unix socket', I dont think he
> >> means 'MQ' when he says 'RPC'.  I think he just means a daemon
> >> listening on a unix socket for execution requests.  This seems like
> >> a reasonably sensible idea to me.
> >>
> >>
> >> Yes, you're right.
> >>
> >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
> >>
> >>
> >> I thought of this option, but didn't consider it, as It's somehow
> >> risky to expose an RPC end executing priviledged (even filtered)
> >> commands.
> >>
> >>
> >> subprocess module have some means to do RPC securely over UNIX sockets.
> >> I does this by passing some token along with messages. It should be
> >> secure because with UNIX sockets we don't need anything stronger since
> >> MITM attacks are not possible.
> >>
> >> If I'm not wrong, once you have credentials for messaging, you can
> >> send messages to any end, even filtered, I somehow see this as a
> >> higher
> >> risk option.
> >>
> >>
> >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
> >> local UNIX socket with very simple RPC over it.
> >>
> >> And btw, if we add RPC in the middle, it's possible that all those
> >> system call delays increase, or don't decrease all it'll be
> >> desirable.
> >>
> >>
> >> Every call to rootwrap would require the following.
> >>
> >> Client side:
> >> - new client socket;
> >> - one message sent;
> >> - one message received.
> >>
> >> Server side:
> >> - accepting new connection;
> >> - one message received;
> >> - one fork-exec;
> >> - one message sent.
> >>
> >> This looks like way simpler than passing through sudo and rootwrap that
> >> requires three exec's and whole lot of configuration files opened and
> >> parsed.
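
To make this concrete, a bare-bones sketch of the kind of daemon being
proposed (purely illustrative: no filtering, authentication or proper message
framing, all of which a real implementation would need):

import json
import socket
import subprocess

SOCK_PATH = '/var/run/rootwrap.sock'  # hypothetical path

def serve():
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        request = json.loads(conn.recv(65536).decode())
        # A real daemon would check request['cmd'] against the rootwrap
        # filters here before running anything.
        proc = subprocess.Popen(request['cmd'], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        conn.sendall(json.dumps({'rc': proc.returncode,
                                 'stdout': out.decode(),
                                 'stderr': err.decode()}).encode())
        conn.close()

def execute(cmd):
    # Client side: one connect, one message sent, one message received.
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)
    cli.sendall(json.dumps({'cmd': cmd}).encode())
    reply = json.loads(cli.recv(65536).decode())
    cli.close()
    return reply['rc'], reply['stdout'], reply['stderr']
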
> >>
> >> --
> >>
> >> Kind regards, Yuriy.
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-10 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-03-05 04:24:51 -0800:
> On Tue, Mar 04, 2014 at 02:06:16PM -0800, Clint Byrum wrote:
> > Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
> > > Hi all,
> > > 
> > > As some of you know, I've been working on the instance-users blueprint[1].
> > > 
> > > This blueprint implementation requires three new items to be added to the
> > > heat.conf, or some resources (those which create keystone users) will not
> > > work:
> > > 
> > > https://review.openstack.org/#/c/73978/
> > > https://review.openstack.org/#/c/76035/
> > > 
> > > So on upgrade, the deployer must create a keystone domain and domain-admin
> > > user, add the details to heat.conf, as already been done in devstack[2].
> > > 
> > > The changes requried for this to work have already landed in devstack, but
> > > it was discussed to day and Clint suggested this may be unacceptable
> > > upgrade behavior - I'm not sure so looking for guidance/comments.
> > > 
> > > My plan was/is:
> > > - Make devstack work
> > > - Talk to tripleo folks to assist in any transition (what prompted this
> > >   discussion)
> > > - Document the upgrade requirements in the Icehouse release notes so the
> > >   wider community can upgrade from Havana.
> > > - Try to give a heads-up to those maintaining downstream heat deployment
> > >   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
> > >   Icehouse.
> > > 
> > > However some have suggested there may be an openstack-wide policy which
> > > requires peoples old config files to continue working indefinitely on
> > > upgrade between versions - is this right?  If so where is it documented?
> > > 
> > 
> > I don't think I said indefinitely, and I certainly did not mean
> > indefinitely.
> > 
> > What is required though, is that we be able to upgrade to the next
> > release without requiring a new config setting.
> 
> So log a warning for one cycle, then it's OK to expect the config after
> that?
> 

Correct.

> I'm still unclear if there's an openstack-wide policy on this, as the whole
> time-based release with release-notes (which all of openstack is structured
> around and adheres to) seems to basically be an uncomfortable fit for folks
> like tripleo who are trunk chasing and doing CI.
>

So we're continuous delivery focused, but we are not special. HP Cloud
and Rackspace both do this, and really anyone running a large cloud will
most likely do so with CD, as the value proposition is that you don't
have big scary upgrades, you just keep incrementally upgrading and
getting newer, better code. We can only do this if we have excellent
testing, which upstream already does and which the public clouds all
do privately as well of course.

Changes like the one that was merged last week in Heat turn into
stressful fire drills for those deployment teams.

> > Also as we scramble to deal with these things in TripleO (as all of our
> > users are now unable to spin up new images), it is clear that it is more
> > than just a setting. One must create domain users carefully and roll out
> > a new password.
> 
> Such are the pitfalls of life at the bleeding edge ;)
> 

This is mildly annoying as a stance, as that's not how we've been
operating with all of the other services of OpenStack. We're not crazy
for wanting to deploy master and for wanting master to keep working. We
are a _little_ crazy for wanting that without being in the gate.

> Seriously though, apologies for the inconvenience - I have been asking for
> feedback on these patches for at least a month, but clearly I should've
> asked harder.
> 

Mea culpa too, I did not realize what impact this would have until it
was too late.

> As was discussed on IRC yesterday, I think some sort of (initially non-voting)
> feedback from tripleo CI to heat gerrit is pretty much essential given that
> you're so highly coupled to us or this will just keep happening.
> 

TripleO will be in the gate some day (hopefully soon!) and then this
will be less of an issue as you'd see failures early on, and could open
bugs and get us to fix our issue sooner.

However you'd still need to provide the backward compatibility for a
single cycle. Servers aren't upgraded instantly, and keystone may not be
ready for this v3/domain change until after users have fully rolled out
Icehouse. Whether one is CD or stable release focused, one still needs a
simple solution to rolling out a massive update.

> > What I'm suggesting is that we should instead _warn_ that the old
> > behavior is being used and will be deprecated.
> > 
> > At this point, out of urgency, we're landing fixes. But in the future,
> > this should be considered carefully.
> 
> Ok, well I raised this bug:
> 
> https://bugs.launchpad.net/heat/+bug/1287980
> 
> So we can modify the stuff so that it falls back to the old behavior
> gracefully and will solve the issue for folks on the time-based releases.
> 
> Hopefully we can work towards the tripleo gate feedback so ne

Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Jay Pipes
On Mon, 2014-03-10 at 15:52 -0600, Chris Friesen wrote:
> On 03/10/2014 02:58 PM, Jay Pipes wrote:
> > On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
> >> While I understand the general argument about pets versus cattle. The
> >> question is, would you be willing to poke a few holes in the strict
> >> "cattle" abstraction for the sake of pragmatism. Few shops are going
> >> to make the direct transition in one move. Poking a hole in the cattle
> >> abstraction allowing them to keep a pet VM might be very valuable to
> >> some shops making a migration.
> >
> > Poking holes in cattle aside, my experience with shops that prefer the
> > pets approach is that they are either:
> >
> >   * Not managing their servers themselves at all and just relying on some
> > IT operations organization to manage everything for them, including all
> > aspects of backing up their data as well as failing over and balancing
> > servers, or,
> >   * Hiding behind rationales of "needing to be secure" or "needing 100%
> > uptime" or "needing no customer disruption" in order to avoid any change
> > to the status quo. This is because the incentives inside legacy IT
> > application development and IT operations groups are typically towards
> > not rocking the boat in order to satisfy unrealistic expectations and
> > outdated interface agreements that are forced upon them by management
> > chains that haven't crawled out of the waterfall project management funk
> > of the 1980s.
> >
> > Adding pet-based features to Nova would, IMO, just perpetuate the above
> > scenarios and incentives.
> 
> What about the cases where it's not a "preference" but rather just the 
> inertia of pre-existing systems and procedures?

You mean what I wrote in the second bullet point above?

> If we can get them in the door with enough support for legacy stuff, 
> then they might be easier to convince to do things the "cloud" way in 
> the future.

Yes, fair point, and that's what Shawn was saying as well. Just noting
that in my experience, the second part of the above sentence just
doesn't happen. Once you bring them over and offer them the tools from
their legacy environment, they aren't interested in changing. :)

> If we stick with the hard-line cattle-only approach we run the risk of 
> alienating them completely since redoing everything at once is generally 
> not feasible.

Yes, I understand that. I'm actually fine with including functionality
like memory snapshotting, but only if under no circumstances does it
negatively impact the service of compute to other tenants/users and will
not negatively impact the scaling factor of Nova either.

I'm just not as optimistic as you are that once legacy IT folks have
their old tools, they will consider changing their habits. ;)

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taskflow remote worker model ftw!

2014-03-10 Thread Zane Bitter

Cool, nice work Josh & team!

On 10/03/14 17:23, Joshua Harlow wrote:

This means that the engine no longer has to run tasks locally or in
threads (or greenthreads) but can run tasks on remote machines (anything
that can be connected to a MQ via kombu; TBD when this becomes
oslo.messaging).


Does that reflect a dependency on Celery? Not using oslo.messaging makes 
it a non-starter for OpenStack IMO, so this would be a very high 
priority for adoption.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Johannes Erdfelt
On Mon, Mar 10, 2014, Joshua Harlow  wrote:
> Sounds like a good idea to me.

I generally think this is a good idea too.

> I've never understood why we treat the DB as a LOG (keeping deleted == 0
> records around) when we should just use a LOG (or similar system) to
> begin with instead.
> 
> Does anyone use the feature of switching deleted == 1 back to
> deleted = 0? Has this worked out for u?

This isn't the only potential use. It's possible that code depends on
being able to still access deleted records.

For instance, in the past we could delete an instance_type, but if an
instance is still referencing it, code would still try to fetch it from
the database some times.

This particular example probably isn't an issue anymore since I think all
of that has been moved to instance metadata specifically to avoid
problems like this.

That said, I think it's well worth the effort to simplify the code and
make operators lives easier.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Roman Podoliaka
Hi all,

>>> I've never understood why we treat the DB as a LOG (keeping deleted == 0 
>>> records around) when we should just use a LOG (or similar system) to begin 
>>> with instead.

I can't agree more with you! Storing deleted records in tables is
hardly usable, bad for performance (as it makes tables and indexes
larger) and it probably covers a very limited set of use cases (if
any) of OpenStack users.

>>> One of approaches that I see is in step by step removing "deleted" column 
>>> from every table with probably code refactoring.

So we have homework to do: find out what projects use soft-deletes
for. I assume that soft-deletes are only used internally and
aren't exposed to API users, but let's check that. At the same time
all new projects should avoid using of soft-deletes from the start.

Thanks,
Roman

On Mon, Mar 10, 2014 at 2:44 PM, Joshua Harlow  wrote:
> Sounds like a good idea to me.
>
> I've never understood why we treat the DB as a LOG (keeping deleted == 0
> records around) when we should just use a LOG (or similar system) to begin
> with instead.
>
> Does anyone use the feature of switching deleted == 1 back to deleted = 0?
> Has this worked out for u?
>
> Seems like some of the feedback on
> https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests that
> this has been a operational pain-point for folks (Tool to delete things
> properly suggestions and such…).
>
> From: Boris Pavlovic 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, March 10, 2014 at 1:29 PM
> To: OpenStack Development Mailing List ,
> Victor Sergeyev 
> Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft
> deletion (step by step)
>
> Hi stackers,
>
> (It's proposal for Juno.)
>
> Intro:
>
> Soft deletion means that records from DB are not actually deleted, they are
> just marked as a "deleted". To mark record as a "deleted" we put in special
> table's column "deleted" record's ID value.
>
> Issue 1: Indexes & Queries
> We have to add in every query "AND deleted == 0" to get non-deleted records.
> It produce performance issue, cause we should add it in any index one
> "extra" column.
> As well it produce extra complexity in db migrations and building queries.
>
> Issue 2: Unique constraints
> Why we store ID in deleted and not True/False?
> The reason is that we would like to be able to create real DB unique
> constraints and avoid race conditions on "insert" operation.
>
> Sample: we Have table (id, name, password, deleted) we would like to put in
> column "name" only unique value.
>
> Approach without UC: if count(`select  where name = name`) == 0:
> insert(...)
> (race cause we are able to add new record between )
>
> Approach with UC: try: insert(...) except Duplicate: ...
>
> So to add UC we have to add them on (name, deleted). (to be able to make
> insert/delete/insert with same name)
>
> As well it produce performance issues, because we have to use Complex unique
> constraints on 2  or more columns. + extra code & complexity in db
> migrations.
>
> Issue 3: Garbage collector
>
> It is really hard to make garbage collector that will have good performance
> and be enough common to work in any case for any project.
> Without garbage collector DevOps have to cleanup records by hand, (risk to
> break something). If they don't cleanup DB they will get very soon
> performance issue.
>
> To put in a nutshell most important issues:
> 1) Extra complexity to each select query & extra column in each index
> 2) Extra column in each Unique Constraint (worse performance)
> 3) 2 Extra column in each table: (deleted, deleted_at)
> 4) Common garbage collector is required
>
>
> To resolve all these issues we should just remove soft deletion.
>
> One of approaches that I see is in step by step removing "deleted" column
> from every table with probably code refactoring.  Actually we have 3
> different cases:
>
> 1) We don't use soft deleted records:
> 1.1) Do .delete() instead of .soft_delete()
> 1.2) Change query to avoid adding extra "deleted == 0" to each query
> 1.3) Drop "deleted" and "deleted_at" columns
>
> 2) We use soft deleted records for internal stuff "e.g. periodic tasks"
> 2.1) Refactor code somehow: E.g. store all required data by periodic task in
> some special table that has: (id, type, json_data) columns
> 2.2) On delete add record to this table
> 2.3-5) similar to 1.1, 1.2, 13
>
> 3) We use soft deleted records in API
> 3.1) Deprecated API call if it is possible
> 3.2) Make proxy call to ceilometer from API
> 3.3) On .delete() store info about records in (ceilometer, or somewhere
> else)
> 3.4-6) similar to 1.1, 1.2, 1.3
>
> This is not ready RoadMap, just base thoughts to start the constructive
> discussion in the mailing list, so %stacker% your opinion is very important!
>
>
> Best regards,
> Boris Pavlovic
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
>

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Jay Pipes
On Tue, 2014-03-11 at 01:29 +0400, Boris Pavlovic wrote:
> 
> To put in a nutshell most important issues:
> 1) Extra complexity to each select query & extra column in each index
> 2) Extra column in each Unique Constraint (worse performance)
> 3) 2 Extra column in each table: (deleted, deleted_at)
> 4) Common garbage collector is required

Nice summary of the problems related to soft deletion, Boris.

> To resolve all these issues we should just remove soft deletion.
> 
> One of approaches that I see is in step by step removing "deleted"
> column from every table with probably code refactoring.  Actually we
> have 3 different cases:
> 
> 1) We don't use soft deleted records: 
> 1.1) Do .delete() instead of .soft_delete()
> 1.2) Change query to avoid adding extra "deleted == 0" to each query
> 1.3) Drop "deleted" and "deleted_at" columns
> 
> 2) We use soft deleted records for internal stuff "e.g. periodic
> tasks"
> 2.1) Refactor code somehow: E.g. store all required data by periodic
> task in some special table that has: (id, type, json_data) columns
> 2.2) On delete add record to this table 
> 2.3-5) similar to 1.1, 1.2, 13
> 
> 3) We use soft deleted records in API 
> 3.1) Deprecated API call if it is possible 
> 3.2) Make proxy call to ceilometer from API 
> 3.3) On .delete() store info about records in (ceilometer, or
> somewhere else) 
> 3.4-6) similar to 1.1, 1.2, 1.3

I would actually prefer this "solution", at least for server instances:

1. Remove any contractual obligation in the API to allow servers with
the same "name" to exist, as long as only one of those servers is not
deleted. As I've mentioned before, I think this is exceedingly silly to
slow down the operation of Nova just to allow a user to create a server,
delete it, and immediately create a server with the same name.

2. Make the unique constraint for the server name be on (project_id,
name) and be done with it.

3. Remove deleted and deleted_at from the instances table.

4. Don't allow any delete() operation on the
nova.objects.instance object at all.

5. Hard delete records from the instances table on a periodic basis
using an external archiver that either just deletes the records in
instances that are in ERROR or TERMINATED vm_state (as is possible if
ceilometer is providing your bookkeeping) or move those records into an
archival table (as would be necessary if you are not running Ceilometer
and need some history of these things).
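
A rough sketch of such an archiver (table, column and state names here are
purely illustrative, not what Nova actually uses):

from sqlalchemy import create_engine, text

engine = create_engine('mysql://nova:secret@localhost/nova')  # hypothetical DSN

def archive_dead_instances():
    # Copy terminal-state rows to a shadow table, then hard-delete them,
    # all in one transaction. Run periodically (e.g. from cron).
    with engine.begin() as conn:
        conn.execute(text(
            "INSERT INTO shadow_instances "
            "SELECT * FROM instances WHERE vm_state IN ('error', 'terminated')"))
        conn.execute(text(
            "DELETE FROM instances WHERE vm_state IN ('error', 'terminated')"))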

For other objects in the system, I think your solution #1 would work
fine.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova client] gate-python-novaclient-pypy

2014-03-10 Thread Jeremy Stanley
On 2014-03-10 18:33:14 + (+), Jeremy Stanley wrote:
[...]
> Monty originally proposed an improvement to the way we bootstrap
> setuptools on our job workers which ought to help with this, so
> I'll see whether it's in shape and still resolves the issue. If
> so, I'll make sure we prioritize merging it ASAP to get client
> gating unwedged and follow up to this thread with relevant
> updates.

That patch turned out not to help with this issue anyway. I've
determined that (for reasons as of yet unknown) this is only
happening in one of our two providers, so I'm temporarily removing
the problem nodes while I continue to debug this. Hopefully that
un-breaks gating for clients in the near term.

I've opened https://launchpad.net/bugs/1290562 for now which these
failures can be rechecked against while I work out the underlying
problem.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Python 3.3 patches (using six)

2014-03-10 Thread David Kranz
There are a number of patches up for review that make various changes to 
use "six" apis instead of Python 2 constructs. While I understand the 
desire to get a head start on getting Tempest to run in Python 3, I'm 
not sure it makes sense to do this work piecemeal until we are near 
ready to introduce a py3 gate job. Many contributors will not be aware 
of what all the differences are and py2-isms will creep back in 
resulting in more overall time spent making these changes and reviewing. 
Also, the core review team is busy trying to do stuff important to the 
icehouse release which is barely more than 5 weeks away. IMO we should 
hold off on various kinds of "cleanup" patches for now.
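
For context, the changes in question are mostly mechanical swaps of Python 2
idioms for their six equivalents, roughly like this (illustrative only):

import six

d = {'a': 1, 'b': 2}

# Python 2 only:
#     for k, v in d.iteritems(): ...
#     isinstance(x, basestring)
# Portable via six:
for k, v in six.iteritems(d):
    print('%s=%s' % (k, v))
print(isinstance('x', six.string_types))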


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Chris Friesen

On 03/10/2014 02:58 PM, Jay Pipes wrote:

On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:

While I understand the general argument about pets versus cattle. The
question is, would you be willing to poke a few holes in the strict
"cattle" abstraction for the sake of pragmatism. Few shops are going
to make the direct transition in one move. Poking a hole in the cattle
abstraction allowing them to keep a pet VM might be very valuable to
some shops making a migration.


Poking holes in cattle aside, my experience with shops that prefer the
pets approach is that they are either:

  * Not managing their servers themselves at all and just relying on some
IT operations organization to manage everything for them, including all
aspects of backing up their data as well as failing over and balancing
servers, or,
  * Hiding behind rationales of "needing to be secure" or "needing 100%
uptime" or "needing no customer disruption" in order to avoid any change
to the status quo. This is because the incentives inside legacy IT
application development and IT operations groups are typically towards
not rocking the boat in order to satisfy unrealistic expectations and
outdated interface agreements that are forced upon them by management
chains that haven't crawled out of the waterfall project management funk
of the 1980s.

Adding pet-based features to Nova would, IMO, just perpetuate the above
scenarios and incentives.


What about the cases where it's not a "preference" but rather just the 
inertia of pre-existing systems and procedures?


If we can get them in the door with enough support for legacy stuff, 
then they might be easier to convince to do things the "cloud" way in 
the future.


If we stick with the hard-line cattle-only approach we run the risk of 
alienating them completely since redoing everything at once is generally 
not feasible.


Chris





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-10 Thread Dan Smith
> https://bugs.launchpad.net/nova/+bug/1290568

Thanks. Note that the objects work doesn't really imply that the service
doesn't hit the database. In fact, nova-compute stopped hitting the
database before we started on the objects work.

Anyway, looks like there are still some direct-to-database things
lingering in the linux_net module. I'm not sure those will get resolved
before icehouse at this point...

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Joshua Harlow
Sounds like a good idea to me.

I've never understood why we treat the DB as a LOG (keeping deleted == 0 
records around) when we should just use a LOG (or similar system) to begin with 
instead.

Does anyone use the feature of switching deleted == 1 back to deleted = 0? Has 
this worked out for u?

Seems like some of the feedback on 
https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests that 
this has been an operational pain-point for folks (Tool to delete things 
properly suggestions and such…).

From: Boris Pavlovic <bpavlo...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, March 10, 2014 at 1:29 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>, 
Victor Sergeyev <vserge...@mirantis.com>
Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)

Hi stackers,

(It's proposal for Juno.)

Intro:

Soft deletion means that records from DB are not actually deleted, they are 
just marked as a "deleted". To mark record as a "deleted" we put in special 
table's column "deleted" record's ID value.

Issue 1: Indexes & Queries
We have to add in every query "AND deleted == 0" to get non-deleted records.
It produce performance issue, cause we should add it in any index one "extra" 
column.
As well it produce extra complexity in db migrations and building queries.

Issue 2: Unique constraints
Why we store ID in deleted and not True/False?
The reason is that we would like to be able to create real DB unique 
constraints and avoid race conditions on "insert" operation.

Sample: we Have table (id, name, password, deleted) we would like to put in 
column "name" only unique value.

Approach without UC: if count(`select  where name = name`) == 0: insert(...)
(race cause we are able to add new record between )

Approach with UC: try: insert(...) except Duplicate: ...

So to add UC we have to add them on (name, deleted). (to be able to make 
insert/delete/insert with same name)

As well it produce performance issues, because we have to use Complex unique 
constraints on 2  or more columns. + extra code & complexity in db migrations.

Issue 3: Garbage collector

It is really hard to make garbage collector that will have good performance and 
be enough common to work in any case for any project.
Without garbage collector DevOps have to cleanup records by hand, (risk to 
break something). If they don't cleanup DB they will get very soon performance 
issue.

To put in a nutshell most important issues:
1) Extra complexity to each select query & extra column in each index
2) Extra column in each Unique Constraint (worse performance)
3) 2 Extra column in each table: (deleted, deleted_at)
4) Common garbage collector is required


To resolve all these issues we should just remove soft deletion.

One of approaches that I see is in step by step removing "deleted" column from 
every table with probably code refactoring.  Actually we have 3 different cases:

1) We don't use soft deleted records:
1.1) Do .delete() instead of .soft_delete()
1.2) Change query to avoid adding extra "deleted == 0" to each query
1.3) Drop "deleted" and "deleted_at" columns

2) We use soft deleted records for internal stuff "e.g. periodic tasks"
2.1) Refactor code somehow: E.g. store all required data by periodic task in 
some special table that has: (id, type, json_data) columns
2.2) On delete add record to this table
2.3-5) similar to 1.1, 1.2, 13

3) We use soft deleted records in API
3.1) Deprecated API call if it is possible
3.2) Make proxy call to ceilometer from API
3.3) On .delete() store info about records in (ceilometer, or somewhere else)
3.4-6) similar to 1.1, 1.2, 1.3

This is not ready RoadMap, just base thoughts to start the constructive 
discussion in the mailing list, so %stacker% your opinion is very important!


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Docs for new plugins

2014-03-10 Thread Mohammad Banikazemi
Would like to know what to do for adding documentation for a new plugin.
Can someone point me to the right place/process please.

Thanks,

Mohammad
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-03-10 Thread Maru Newby
+1

I think there should be a naming standard for all reviews (e.g. [test] Jenkins, 
[test-external] VMware) so that gerrit css can colorize automatic review 
comments no matter the project.


m.

On Mar 7, 2014, at 2:08 AM, Chmouel Boudjnah  wrote:

> if peoples like this why don't we have it directly on the reviews?
> 
> Chmouel.
> 
> 
> On Tue, Mar 4, 2014 at 10:00 PM, Carl Baldwin  wrote:
> Nachi,
> 
> Great!  I'd been meaning to do something like this.  I took yours and
> tweaked it a bit to highlight failed Jenkins builds in red and grey
> other Jenkins messages.  Human reviews are left in blue.
> 
> javascript:(function(){
> list = document.querySelectorAll('td.GJEA35ODGC');
> for(i in list) {
> title = list[i];
> if(! title.innerHTML) { continue; }
> text = title.nextSibling;
> if (text.innerHTML.search('Build failed') > 0) {
> title.style.color='red'
> } else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) 
> {
> title.style.color='#66'
> } else {
> title.style.color='blue'
> }
> }
> })()
> 
> Carl
> 
> On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
> > Hi folks
> >
> > I wrote an bookmarklet for neutron gerrit review.
> > This bookmarklet make the comment title for 3rd party ci as gray.
> >
> > javascript:(function(){list =
> > document.querySelectorAll('td.GJEA35ODGC'); for(i in
> > list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
> > list[i].innerHTML.search('CI|Ryu|Testing|Mine') >
> > 0){list[i].style.color='#66'}else{list[i].style.color='red'}};})()
> >
> > enjoy :)
> > Nachi
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-10 Thread Matt Kassawara
Done.

https://bugs.launchpad.net/nova/+bug/1290568


On Mon, Mar 10, 2014 at 1:37 PM, Dan Smith  wrote:

> > However, when attempting to boot an instance, the Nova network service
> > fails to retrieve network information from the controller. Adding the
> > the database keys resolves the problem. I'm using
> > the 2014.1~b3-0ubuntu1~cloud0 packages on Ubuntu 12.04.
>
> Can you file a bug with details from the logs?
>
> --Dan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Boris Pavlovic
Hi stackers,

(It's proposal for Juno.)

Intro:

Soft deletion means that records are not actually deleted from the DB; they are
just marked as "deleted". To mark a record as "deleted" we put the record's ID
value into a special "deleted" column of the table.

Issue 1: Indexes & Queries
We have to add "AND deleted == 0" to every query to get non-deleted records.
This produces a performance issue, because we have to add one "extra" column
to every index. It also adds extra complexity to db migrations and to building
queries.

Issue 2: Unique constraints
Why do we store the ID in "deleted" and not True/False?
The reason is that we would like to be able to create real DB unique
constraints and avoid race conditions on the "insert" operation.

Sample: we have a table (id, name, password, deleted) and we would like to
allow only unique values in the column "name".

Approach without UC: if count(`select  where name = name`) == 0:
insert(...)
(race, because another record with the same name can be added between the
select and the insert)

Approach with UC: try: insert(...) except Duplicate: ...

So to add a UC we have to add it on (name, deleted) (to be able to do
insert/delete/insert with the same name).

This also produces performance issues, because we have to use complex
unique constraints on 2 or more columns, plus extra code & complexity in db
migrations.
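
To make issues 1 and 2 concrete, here is a small self-contained SQLAlchemy
sketch (table, column and constraint names are made up for illustration; this
is not code from any OpenStack project):

# Illustrative only -- made-up table/constraint names, not real project code.
from sqlalchemy import (Column, Integer, String, UniqueConstraint,
                        create_engine)
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    __table_args__ = (
        # Issue 2: the UC must include "deleted" so that
        # insert / soft-delete / insert of the same name keeps working.
        UniqueConstraint('name', 'deleted', name='uniq_users0name0deleted'),
    )
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    deleted = Column(Integer, default=0)  # holds the row's own id once deleted


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Issue 1: every "live" lookup carries the extra predicate, and every index
# meant to serve it needs the extra column.
session.query(User).filter(User.name == 'boris', User.deleted == 0).all()

# With the compound UC in place, a racing duplicate insert fails atomically
# in the DB instead of needing a select-then-insert check.
session.add(User(name='boris'))
session.commit()
try:
    session.add(User(name='boris'))
    session.commit()
except IntegrityError:
    session.rollback()

Dropping soft deletion would collapse this back to a plain UC on "name" and a
filter-free query.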

Issue 3: Garbage collector

It is really hard to make a garbage collector that has good performance and
is generic enough to work in every case for every project.
Without a garbage collector, DevOps have to clean up records by hand (with
the risk of breaking something). If they don't clean up the DB they will hit
performance issues very soon.

To put the most important issues in a nutshell:
1) Extra complexity in each select query & an extra column in each index
2) An extra column in each unique constraint (worse performance)
3) 2 extra columns in each table: (deleted, deleted_at)
4) A common garbage collector is required


To resolve all these issues we should just remove soft deletion.

One approach that I see is removing the "deleted" column from every table
step by step, probably together with some code refactoring.  Actually we have
3 different cases:

1) We don't use soft deleted records:
1.1) Do .delete() instead of .soft_delete()
1.2) Change query to avoid adding extra "deleted == 0" to each query
1.3) Drop "deleted" and "deleted_at" columns

2) We use soft deleted records for internal stuff "e.g. periodic tasks"
2.1) Refactor code somehow: E.g. store all required data by periodic task
in some special table that has: (id, type, json_data) columns
2.2) On delete add record to this table
2.3-5) similar to 1.1, 1.2, 1.3

3) We use soft deleted records in API
3.1) Deprecated API call if it is possible
3.2) Make proxy call to ceilometer from API
3.3) On .delete() store info about records in (ceilometer, or somewhere
else)
3.4-6) similar to 1.1, 1.2, 1.3

This is not a ready roadmap, just base thoughts to start a constructive
discussion in the mailing list, so %stacker% your opinion is very
important!


Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Taskflow remote worker model ftw!

2014-03-10 Thread Joshua Harlow
Hi all,


I'd just like to let everyone know about a new feature in taskflow that I think 
will be beneficial to various projects (reducing the duplication of similar code 
in various projects that accomplish the same feature set). The new feature is 
the ability to run tasks in remote workers (the task transitions and state 
persistence are still done in an 'orchestrating' engine). This means that the 
engine no longer has to run tasks locally or in threads (or greenthreads) but 
can run tasks on remote machines (anything that can be connected to a MQ via 
kombu; TBD when this becomes oslo.messaging).


A simple example that might show how this works better for folks that have some 
time to try it out.


-

Pre-setup: git clone the taskflow repo and install it (in a venv or elsewhere), 
install a mq server (rabbitmq for example).

-


Let's now create two basic tasks (one that says hello and one that says goodbye).


class HelloWorldTask(task.Task):
    default_provides = "hi_happened"

    def execute(self):
        LOG.info('hello world')
        return time.time()


class GoodbyeWorldTask(task.Task):
    default_provides = "goodbye_happened"

    def execute(self, hi_happened):
        LOG.info('goodbye world (hi said on %s)', hi_happened)
        return time.time()


* Notice how the GoodbyeWorldTask requires an input 'hi_happened' (which is 
produced by the HelloWorldTask).


Now lets create a workflow that combines these two together.


f = linear_flow.Flow("hi-bye")
f.add(HelloWorldTask())
f.add(GoodbyeWorldTask())


Notice here that we have specified a linear runtime order (that is hello will 
be said before goodbye, this is also inherent in the dependency ordering since 
the goodbye task requires 'hi_happened' to run, and the only way to satisfy 
that dependency is to run the helloworld task before the goodbye task).


*  If you are wondering what the heck this is (or why it is useful to have 
these little task and flow classes) check out 
https://wiki.openstack.org/wiki/TaskFlow#Structure


Now the fun begins!


We need a worker to accept requests to run tasks on so lets create a small 
function that just does that.


def run_worker():
    worker_conf = dict(MQ_CONF)
    worker_conf.update({
        # These are the available tasks that this worker has access to execute.
        'tasks': [
            HelloWorldTask,
            GoodbyeWorldTask,
        ],
    })

    # Start this up and stop it on ctrl-c
    worker = remote_worker.Worker(**worker_conf)
    runner = threading.Thread(target=worker.run)
    runner.start()
    worker.wait()

    while True:
        try:
            time.sleep(1)
        except KeyboardInterrupt:
            LOG.info("Dying...")
            worker.stop()
            runner.join()
            break


And of course we need a function that will perform the orchestration of the 
remote (or local tasks), this function starts the whole execution flow by 
taking the workflow defined above and combining that workflow with an engine 
that will run the individual tasks (and transfer data between those tasks as 
needed).


* For those still wondering what an engine is (or what it offers) check out 
https://wiki.openstack.org/wiki/TaskFlow#Engines and 
https://wiki.openstack.org/wiki/TaskFlow/Patterns_and_Engines/Persistence#Big_Picture
 (which hopefully will make it easier to understand why the concept exists in 
the first place).


def run_engine():
    # Make some remote tasks happen
    f = lf.Flow("test")
    f.add(HelloWorldTask())
    f.add(GoodbyeWorldTask())

    # Create an in-memory storage area where intermediate results will be
    # saved (you can change this to a persistent backend if desired).
    backend = impl_memory.MemoryBackend({})
    _logbook, flowdetail = pu.temporary_flow_detail(backend)

    engine_conf = dict(MQ_CONF)
    engine_conf.update({
        # This identifies what workers are accessible via what queues; this
        # will be made better soon with reviews like
        # https://review.openstack.org/#/c/75094/ or similar.
        'workers_info': {
            'work': [
                HelloWorldTask().name,
                GoodbyeWorldTask().name,
            ],
        }
    })

    LOG.info("Running workflow %s", f)

    # Now run the engine.
    e = engine.WorkerBasedActionEngine(f, flowdetail, backend, engine_conf)
    with logging_listener.LoggingListener(e, level=logging.INFO):
        e.run()

    # See the results received.
    print("Final results: %s" % (e.storage.fetch_all()))


Now once we have these two methods created we can actually start the worker and 
the engine and watch the action happen. To try this without having to add a 
little more boilerplate (imports and such), the full code above can be found at 
http://paste.openstack.org/show/73071/.


To start a worker just do the following. Download the above paste to a file 
named 'test.py' (and then modify the MQ_SERVER glob

Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Roman Podoliaka
Hi all,

>>> It sounds like for the near future my best bet would be to just set the 
>>> install scripts to configure postgres (which is used only for openstack) to 
>>> default to utf-8.  Is that a fair summation?

Yes. UTF-8 is a reasonable default value.

Thanks,
Roman

On Mon, Mar 10, 2014 at 1:36 PM, Chris Friesen
 wrote:
> On 03/10/2014 02:02 PM, Ben Nemec wrote:
>
>> We just had a discussion about this in #openstack-oslo too.  See the
>> discussion starting at 2014-03-10T16:32:26
>>
>> http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log
>
>
> In that discussion dhellmann said, "I wonder if we make any assumptions
> elsewhere that we are using utf8 in the database"
>
> For what it's worth I came across
> "https://wiki.openstack.org/wiki/Encoding";, which proposed a rule:
>
> "All external text that is not explicitly encoded (database storage,
> commandline arguments, etc.) should be presumed to be encoded as utf-8."
>
>
>> While it seems Heat does require utf8 (or at least matching character
>> sets) across all tables, I'm not sure the current solution is good.  It
>> seems like we may want a migration to help with this for anyone who
>> might already have mismatched tables.  There's a lot of overlap between
>> that discussion and how to handle Postgres with this, I think.
>
>
> I'm lucky enough to be able to fix this now, I don't have any real
> deployments yet.
>
> It sounds like for the near future my best bet would be to just set the
> install scripts to configure postgres (which is used only for openstack) to
> default to utf-8.  Is that a fair summation?
>
> Chris
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-10 Thread Carl Baldwin
All,

I was writing down a summary of all of this and decided to just do it
on an etherpad.  Will you help me capture the big picture there?  I'd
like to come up with some actions this week to try to address at least
part of the problem before Icehouse releases.

https://etherpad.openstack.org/p/neutron-agent-exec-performance
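
As a starting point for the etherpad, here is a very rough sketch of the kind
of local UNIX-socket daemon being discussed in the quoted thread below (purely
illustrative: the socket path, JSON wire format and allowed-command list are
made-up assumptions, not the real rootwrap code or its filter format):

import json
import os
import socket
import subprocess

SOCKET_PATH = '/var/run/rootwrap-demo.sock'   # made-up path
ALLOWED = set(['ip', 'ovs-vsctl'])            # stand-in for real filters


def serve():
    # The daemon runs as root; access control is delegated to filesystem
    # permissions/group ownership on the socket (one of the open questions).
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    os.chmod(SOCKET_PATH, 0o660)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        request = json.loads(conn.makefile().readline())
        cmd = request['cmd']
        if cmd[0] not in ALLOWED:
            reply = {'rc': 126, 'stdout': '', 'stderr': 'command not allowed'}
        else:
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            reply = {'rc': proc.returncode,
                     'stdout': out.decode(),
                     'stderr': err.decode()}
        conn.sendall((json.dumps(reply) + '\n').encode())
        conn.close()


def run_as_root(cmd):
    # Client side: one connect, one message sent, one message received.
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(SOCKET_PATH)
    client.sendall((json.dumps({'cmd': cmd}) + '\n').encode())
    return json.loads(client.makefile().readline())

Proper access control, eventlet-friendly accept handling and reuse of the
existing rootwrap filter matching are exactly the open items to capture on the
etherpad.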

Carl

On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo  wrote:
> Hi Yuri & Stephen, thanks a lot for the clarification.
>
> I'm not familiar with unix domain sockets at a low level, but I wonder
> if authentication could be achieved just with permissions (only users in
> group "neutron" or group "rootwrap" accessing this service).
>
> I find it an interesting alternative, to the other proposed solutions, but
> there are some challenges associated with this solution, which could make it
> more complicated:
>
> 1) Access control, file system permission based or token based,
>
> 2) stdout/stderr/return encapsulation/forwarding to the caller,
>if we have a simple/fast RPC mechanism we can use, it's a matter
>of serializing a dictionary.
>
> 3) client side implementation for 1 + 2.
>
> 4) It would need to accept new domain socket connections in green threads to
> avoid spawning a new process to handle a new connection.
>
> The advantages:
>* we wouldn't need to break the only-python-rule.
>* we don't need to rewrite/translate rootwrap.
>
> The disadvantages:
>   * it needs changes on the client side (neutron + other projects),
>
>
> Cheers,
> Miguel Ángel.
>
>
>
> On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>>
>> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> mailto:stephen.g...@theguardian.com>>
>> wrote:
>>
>> Hi,
>>
>> Given that Yuriy says explicitly 'unix socket', I dont think he
>> means 'MQ' when he says 'RPC'.  I think he just means a daemon
>> listening on a unix socket for execution requests.  This seems like
>> a reasonably sensible idea to me.
>>
>>
>> Yes, you're right.
>>
>> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>>
>>
>> I thought of this option, but didn't consider it, as It's somehow
>> risky to expose an RPC end executing priviledged (even filtered)
>> commands.
>>
>>
>> The subprocess module has some means to do RPC securely over UNIX sockets.
>> It does this by passing some token along with messages. It should be
>> secure because with UNIX sockets we don't need anything stronger since
>> MITM attacks are not possible.
>>
>> If I'm not wrong, once you have credentials for messaging, you can
>> send messages to any end, even filtered, I somehow see this as a
>> higher
>> risk option.
>>
>>
>> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> local UNIX socket with very simple RPC over it.
>>
>> And btw, if we add RPC in the middle, it's possible that all those
>> system call delays increase, or don't decrease all it'll be
>> desirable.
>>
>>
>> Every call to rootwrap would require the following.
>>
>> Client side:
>> - new client socket;
>> - one message sent;
>> - one message received.
>>
>> Server side:
>> - accepting new connection;
>> - one message received;
>> - one fork-exec;
>> - one message sent.
>>
>> This looks like way simpler than passing through sudo and rootwrap that
>> requires three exec's and whole lot of configuration files opened and
>> parsed.
>>
>> --
>>
>> Kind regards, Yuriy.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Jay Pipes
On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
> While I understand the general argument about pets versus cattle. The
> question is, would you be willing to poke a few holes in the strict
> "cattle" abstraction for the sake of pragmatism. Few shops are going
> to make the direct transition in one move. Poking a hole in the cattle
> abstraction allowing them to keep a pet VM might be very valuable to
> some shops making a migration.

Poking holes in cattle aside, my experience with shops that prefer the
pets approach is that they are either:

 * Not managing their servers themselves at all and just relying on some
IT operations organization to manage everything for them, including all
aspects of backing up their data as well as failing over and balancing
servers, or,
 * Hiding behind rationales of "needing to be secure" or "needing 100%
uptime" or "needing no customer disruption" in order to avoid any change
to the status quo. This is because the incentives inside legacy IT
application development and IT operations groups are typically towards
not rocking the boat in order to satisfy unrealistic expectations and
outdated interface agreements that are forced upon them by management
chains that haven't crawled out of the waterfall project management funk
of the 1980s.

Adding pet-based features to Nova would, IMO, just perpetuate the above
scenarios and incentives.

> To be fair, the places where I've heard interest are looking to "walk
> not run" from a pets approach to a cattle approach. This conceit in
> the abstraction might be valuable enough to them to spell the
> difference between cautiously adopting and tacitly rejecting
> OpenStack.
> 
> That's how I would argue *for* the change anyhow, it's not an argument
> based on ideology but on pragmatism.

Yeah, I understand your point, and I do see how it might lead some folks
towards OpenStack.

But then again, I wouldn't port a bunch of Prolog language constructs
and quirks to Python just to lure Prolog and Erlang developers.

In the end, I know I'm probably just over-zealous on this stuff, but I
just can't help it :)

Best,
-jay

> On Mon, Mar 10, 2014 at 3:19 PM, Jay Pipes  wrote:
> > On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> >> We have very strong interest in pursing this feature in the VMware
> >> driver as well. I would like to see the revert instance feature
> >> implemented at least.
> >>
> >> When I used to work in multi-discipline roles involving operations it
> >> would be common for us to snapshot a vm, run through an upgrade
> >> process, then revert if something did not upgrade smoothly. This
> >> ability alone can be exceedingly valuable in long-lived virtual
> >> machines.
> >>
> >> I also have some comments from parties interested in refactoring how
> >> the VMware drivers handle snapshots but I'm not certain how much that
> >> plays into this "live snapshot" discussion.
> >
> > I think the reason that there isn't much interest in doing this kind of
> > thing is because the worldview that VMs are pets is antithetical to the
> > worldview that VMs are cattle, and Nova tends to favor the latter (where
> > DRS/DPM on vSphere tends to favor the former).
> >
> > There's nothing about your scenario above of being able to "revert" an
> > instance to a particular state that isn't possible with today's Nova.
> > Snapshotting an instance, doing an upgrade of software on the instance,
> > and then restoring from the snapshot if something went wrong (reverting)
> > is already fully possible to do with the regular Nova snapshot and
> > restore operations. The only difference is that the "live-snapshot"
> > stuff would include saving the memory view of a VM in addition to its
> > disk state. And that, at least in my opinion, is only needed when you
> > are treating VMs like pets and not cattle.
> >
> > Best,
> > -jay
> >
> >> On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky)  wrote:
> >> >> -Original Message-
> >> >> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
> >> >> Sent: Sunday, March 09, 2014 10:04 PM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [nova] a question about instance snapshot
> >> >>
> >> >> Hi, Jeremy, the discussion at here
> >> >> http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
> >> >>
> >> >
> >> > I have a great interest in the topic too.
> >> > I read the link you provided, and there is a little confusion for me.
> >> > I agree with the security consideration in the discussion and memory 
> >> > snapshot can't be used for the cloning instance easily.
> >> >
> >> > But I think it's safe for the using for Instance revert.
> >> > And revert the instance to a checkpoint is valuable for the user.
> >> > Why we didn't use it for instance revert in the first step?
> >> >
> >> > Best regards to you.
> >> > Ricky
> >> >
> >> >> Thanks
> >> >> Alex
> >> On 2014-03-07 10:29, Liuji (Jeremy) wrote:
> >> >> > Hi, all

Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-10 Thread Paul Voccio
I get the idea that we should be open and discuss things transparently, but I’m 
not quite following the reasoning that there shouldn’t be mini-summits just 
because not everyone would be able to attend.   If people have the will and the 
means to meet to tackle problems, shouldn’t we encourage that? The discussions 
and topics that are on the table shouldn’t be a secret. It should be easy to 
use the ML to post etherpads of topics and ideas. Nothing would be binding in 
the mini-summit.  Isn’t it be categorically the same as a meetup?

This thinking is a bit confusing after coming out of the operators mini summit 
last week in San Jose. It was a group of people gathering to hone in on what 
they want to focus on at the summit and issues that we know need to be 
addressed. ML, IRC and other mediums should and are used to do the same.

Thanks,
~pvo


From: Edgar Magana mailto:emag...@plumgrid.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 10, 2014 at 3:10 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Eugene,

I have a few arguments why I believe this is not 100% inclusive

  *   Is the foundation involved in this process? How? What is the budget? Who 
is responsible from the foundation side?
  *   If somebody already made travel arrangements, it won't be possible to 
make changes at no cost.
  *   Staying extra days in a different city could impact anyone's budget
  *   As an OpenStack developer, I want to understand why the summit is not 
enough for deciding the next steps for each project. If that is the case, I 
would prefer to make changes to the organization of the summit instead of 
creating mini-summits all around!

I could continue but I think these are good enough.

I can agree with your point about previous summits being distracting for 
developers; this is why this time the OpenStack foundation is trying very hard 
to allocate specific days for the conference and specific days for the summit.
The point where I totally agree with you is that we SHOULD NOT have sessions 
about work that will be done no matter what!  Those are just a waste of good 
time that could be invested in very interesting discussions about topics that 
are still not clear.
I would recommend that you express this opinion to Mark. He is the right guy to 
decide which sessions will bring interesting discussions and which ones will be 
just a declaration of intents.

Thanks,

Edgar

From: Eugene Nikanorov mailto:enikano...@mirantis.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 10, 2014 10:32 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Hi Edgar,

I'm neutral to the suggestion of mini summit at this point.
Why do you think it will exclude developers?
If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
that would allow anyone who joins OS Summit to save on extra travelling.
The OS Summit itself is too distracting to have really productive discussions, 
unless you're missing the sessions and spending the time discussing.
For instance, design sessions are basically only good for declarations of intent, 
but not for real discussion of a complex topic at a meaningful level of detail.

What would be your suggestions to make this more inclusive?
I think the time and place is the key here - hence Atlanta and few days prior 
OS summit.

Thanks,
Eugene.



On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana 
mailto:emag...@plumgrid.com>> wrote:
Team,

I found that having a mini-summit with a very short notice means excluding
a lot of developers of such an interesting topic for Neutron.
The OpenStack summit is the opportunity for all developers to come
together and discuss the next steps, there are many developers that CAN
NOT afford another trip for a "special" summit. I am personally against
that and I do support Mark's proposal of having all the conversation over
IRC and mailing list.

Please, do not start excluding people that won't be able to attend another
face-to-face meeting besides the summit. I believe that these are the
little things that make an open source community weak if we do not control
it.

Thanks,

Edgar


On 3/6/14 9:51 PM, "Mark McClain" 
mailto:mmccl...@yahoo-inc.com>> wrote:

>
>On Mar 6, 2014, at 4:31 PM, Jay Pipes 
>mailto:jaypi...@gmail.com>> wrote:
>
>> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>>> +1
>>>
>>> I think if we can have it before the Juno summit, we can take
>>> concrete, well thought-out proposals to the community at the summit.
>>
>> Unless something has changed starting at the Hong Kong design summit
>> (which unfortunately I was not able to attend), the design summits have
>> always been a place to gather to *discuss* and *debate* proposed
>> blueprints and des

Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Joe Gordon
On Mar 10, 2014 12:29 PM, "Jay Pipes"  wrote:
>
> On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> > We have very strong interest in pursing this feature in the VMware
> > driver as well. I would like to see the revert instance feature
> > implemented at least.
> >
> > When I used to work in multi-discipline roles involving operations it
> > would be common for us to snapshot a vm, run through an upgrade
> > process, then revert if something did not upgrade smoothly. This
> > ability alone can be exceedingly valuable in long-lived virtual
> > machines.
> >
> > I also have some comments from parties interested in refactoring how
> > the VMware drivers handle snapshots but I'm not certain how much that
> > plays into this "live snapshot" discussion.
>
> I think the reason that there isn't much interest in doing this kind of
> thing is because the worldview that VMs are pets is antithetical to the
> worldview that VMs are cattle, and Nova tends to favor the latter (where
> DRS/DPM on vSphere tends to favor the former).
>
> There's nothing about your scenario above of being able to "revert" an
> instance to a particular state that isn't possible with today's Nova.
> Snapshotting an instance, doing an upgrade of software on the instance,
> and then restoring from the snapshot if something went wrong (reverting)
> is already fully possible to do with the regular Nova snapshot and
> restore operations. The only difference is that the "live-snapshot"
> stuff would include saving the memory view of a VM in addition to its
> disk state. And that, at least in my opinion, is only needed when you
> are treating VMs like pets and not cattle.

++, well said.

>
> Best,
> -jay
>
> > On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky) 
wrote:
> > >> -Original Message-
> > >> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
> > >> Sent: Sunday, March 09, 2014 10:04 PM
> > >> To: OpenStack Development Mailing List (not for usage questions)
> > >> Subject: Re: [openstack-dev] [nova] a question about instance
snapshot
> > >>
> > >> Hi, Jeremy, the discussion at here
> > >>
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
> > >>
> > >
> > > I have a great interest in the topic too.
> > > I read the link you provided, and there is a little confusion for me.
> > > I agree with the security consideration in the discussion and memory
snapshot can't be used for the cloning instance easily.
> > >
> > > But I think it's safe for the using for Instance revert.
> > > And revert the instance to a checkpoint is valuable for the user.
> > > Why we didn't use it for instance revert in the first step?
> > >
> > > Best regards to you.
> > > Ricky
> > >
> > >> Thanks
> > >> Alex
> > >> On 2014-03-07 10:29, Liuji (Jeremy) wrote:
> > >> > Hi, all
> > >> >
> > >> > Current openstack seems not support to snapshot instance with
memory and
> > >> dev states.
> > >> > I searched the blueprint and found two relational blueprint like
below.
> > >> > But these blueprint failed to get in the branch.
> > >> >
> > >> > [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
> > >> > [2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
> > >> >
> > >> > In the blueprint[1], there is a comment,"
> > >> > We discussed this pretty extensively on the mailing list and in a
design
> > >> summit session.
> > >> > The consensus is that this is not a feature we would like to have
in nova.
> > >> --russellb "
> > >> > But I can't find the discuss mail about it. I hope to know why we
think so ?
> > >> > Without memory snapshot, we can't to provide the feature for user
to revert
> > >> a instance to a checkpoint.
> > >> >
> > >> > Anyone who knows the history can help me or give me a hint how to
find the
> > >> discuss mail?
> > >> >
> > >> > I am a newbie for openstack and I apologize if I am missing
something very
> > >> obvious.
> > >> >
> > >> >
> > >> > Thanks,
> > >> > Jeremy Liu
> > >> >
> > >> >
> > >> > ___
> > >> > OpenStack-dev mailing list
> > >> > OpenStack-dev@lists.openstack.org
> > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >> >
> > >> >
> > >> >
> > >>
> > >>
> > >> ___
> > >> OpenStack-dev mailing list
> > >> OpenStack-dev@lists.openstack.org
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-10 Thread James Slagle
On Mon, Mar 10, 2014 at 6:10 AM, Jiří Stránský  wrote:
> On 7.3.2014 14:50, Imre Farkas wrote:
>>
>> On 03/07/2014 10:30 AM, Jiří Stránský wrote:
>>>
>>> Hi,
>>>
>>> there's one step in cloud initialization that is performed over SSH --
>>> calling "keystone-manage pki_setup". Here's the relevant code in
>>> keystone-init [1], here's a review for moving the functionality to
>>> os-cloud-config [2].
>>>
>>> The consequence of this is that Tuskar will need passwordless ssh key to
>>> access overcloud controller. I consider this suboptimal for two reasons:
>>>
>>> * It creates another security concern.
>>>
>>> * AFAIK nova is only capable of injecting one public SSH key into
>>> authorized_keys on the deployed machine, which means we can either give
>>> it Tuskar's public key and allow Tuskar to initialize overcloud, or we
>>> can give it admin's custom public key and allow admin to ssh into
>>> overcloud, but not both. (Please correct me if i'm mistaken.) We could
>>> probably work around this issue by having Tuskar do the user key
>>> injection as part of os-cloud-config, but it's a bit clumsy.
>>>
>>>
>>> This goes outside the scope of my current knowledge, i'm hoping someone
>>> knows the answer: Could pki_setup be run by combining powers of Heat and
>>> os-config-refresh? (I presume there's some reason why we're not doing
>>> this already.) I think it would help us a good bit if we could avoid
>>> having to SSH from Tuskar to overcloud.
>>
>>
>> Yeah, it came up a couple times on the list. The current solution is
>> because if you have an HA setup, the nodes can't decide on their own
>> which one should run pki_setup.
>> Robert described this topic and why it needs to be initialized
>> externally during a weekly meeting in last December. Check the topic
>> 'After heat stack-create init operations (lsmola)':
>>
>> http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html
>
>
> Thanks for the reply Imre. Yeah i vaguely remember that meeting :)
>
> I guess to do HA init we'd need to pick one of the controllers and run the
> init just there (set some parameter that would then be recognized by
> os-refresh-config). I couldn't find if Heat can do something like this on
> it's own, probably we'd need to deploy one of the controller nodes with
> different parameter set, which feels a bit weird.
>
> Hmm so unless someone comes up with something groundbreaking, we'll probably
> keep doing what we're doing.

Agreed,  I think what you've done here is fine.

As you keep churning through init-keystone, keep in mind that there
are some recent changes in review[1] that switch that script over to
use openstack-client instead of keystoneclient. That was needed due to
the required use of the keystone v3 api to create a domain for the
heat stack user. A fallback backwards compatibility was added to Heat
to allow the existing behavior to still work, but I don't see a reason
for you to reimplement the old way of doings things in
os-cloud-config. There is a helper script[2] in Heat that shows how
the domain should be created.

[1] https://review.openstack.org/#/c/78020/
[2] http://git.openstack.org/cgit/openstack/heat/tree/tools/create_heat_domain
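
For anyone following along, the domain bootstrap that the helper script does
boils down to roughly the following (a hedged sketch with python-keystoneclient;
the credential values, endpoint and names here are assumptions, not necessarily
what os-cloud-config will end up doing):

from keystoneclient.v3 import client

ks = client.Client(username='admin',
                   password='ADMIN_PASSWORD',              # assumption
                   project_name='admin',                   # assumption
                   auth_url='http://192.0.2.1:5000/v3')    # assumption

# Dedicated domain that heat stack users will be created in.
heat_domain = ks.domains.create(
    name='heat', description='Owns users and projects created by heat')

# Domain admin user that heat authenticates as to manage those users.
domain_admin = ks.users.create(name='heat_domain_admin',
                               domain=heat_domain,
                               password='DOMAIN_ADMIN_PASSWORD')  # assumption

admin_role = ks.roles.find(name='admin')
ks.roles.grant(role=admin_role, user=domain_admin, domain=heat_domain)

The real logic lives in the helper script referenced above; this is only meant
to show the shape of the v3 calls involved.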

> Having the ability to inject multiple keys to
> instances [1] would help us get rid of the Tuskar vs. admin key issue i
> mentioned in the initial e-mail. We might try asking a fellow Nova developer
> to help us out here.
>
>
> Jirka
>
> [1] https://bugs.launchpad.net/nova/+bug/917850
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Chris Friesen

On 03/10/2014 02:02 PM, Ben Nemec wrote:


We just had a discussion about this in #openstack-oslo too.  See the
discussion starting at 2014-03-10T16:32:26
http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log


In that discussion dhellmann said, "I wonder if we make any assumptions 
elsewhere that we are using utf8 in the database"


For what it's worth I came across 
"https://wiki.openstack.org/wiki/Encoding";, which proposed a rule:


"All external text that is not explicitly encoded (database storage, 
commandline arguments, etc.) should be presumed to be encoded as utf-8."



While it seems Heat does require utf8 (or at least matching character
sets) across all tables, I'm not sure the current solution is good.  It
seems like we may want a migration to help with this for anyone who
might already have mismatched tables.  There's a lot of overlap between
that discussion and how to handle Postgres with this, I think.


I'm lucky enough to be able to fix this now, I don't have any real 
deployments yet.


It sounds like for the near future my best bet would be to just set the 
install scripts to configure postgres (which is used only for openstack) 
to default to utf-8.  Is that a fair summation?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Shawn Hartsock
While I understand the general argument about pets versus cattle. The
question is, would you be willing to poke a few holes in the strict
"cattle" abstraction for the sake of pragmatism. Few shops are going
to make the direct transition in one move. Poking a hole in the cattle
abstraction allowing them to keep a pet VM might be very valuable to
some shops making a migration.

To be fair, the places where I've heard interest are looking to "walk
not run" from a pets approach to a cattle approach. This conceit in
the abstraction might be valuable enough to them to spell the
difference between cautiously adopting and tacitly rejecting
OpenStack.

That's how I would argue *for* the change anyhow, it's not an argument
based on ideology but on pragmatism.

On Mon, Mar 10, 2014 at 3:19 PM, Jay Pipes  wrote:
> On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
>> We have very strong interest in pursing this feature in the VMware
>> driver as well. I would like to see the revert instance feature
>> implemented at least.
>>
>> When I used to work in multi-discipline roles involving operations it
>> would be common for us to snapshot a vm, run through an upgrade
>> process, then revert if something did not upgrade smoothly. This
>> ability alone can be exceedingly valuable in long-lived virtual
>> machines.
>>
>> I also have some comments from parties interested in refactoring how
>> the VMware drivers handle snapshots but I'm not certain how much that
>> plays into this "live snapshot" discussion.
>
> I think the reason that there isn't much interest in doing this kind of
> thing is because the worldview that VMs are pets is antithetical to the
> worldview that VMs are cattle, and Nova tends to favor the latter (where
> DRS/DPM on vSphere tends to favor the former).
>
> There's nothing about your scenario above of being able to "revert" an
> instance to a particular state that isn't possible with today's Nova.
> Snapshotting an instance, doing an upgrade of software on the instance,
> and then restoring from the snapshot if something went wrong (reverting)
> is already fully possible to do with the regular Nova snapshot and
> restore operations. The only difference is that the "live-snapshot"
> stuff would include saving the memory view of a VM in addition to its
> disk state. And that, at least in my opinion, is only needed when you
> are treating VMs like pets and not cattle.
>
> Best,
> -jay
>
>> On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky)  wrote:
>> >> -Original Message-
>> >> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
>> >> Sent: Sunday, March 09, 2014 10:04 PM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [nova] a question about instance snapshot
>> >>
>> >> Hi, Jeremy, the discussion at here
>> >> http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
>> >>
>> >
>> > I have a great interest in the topic too.
>> > I read the link you provided, and there is a little confusion for me.
>> > I agree with the security consideration in the discussion and memory 
>> > snapshot can't be used for the cloning instance easily.
>> >
>> > But I think it's safe for the using for Instance revert.
>> > And revert the instance to a checkpoint is valuable for the user.
>> > Why we didn't use it for instance revert in the first step?
>> >
>> > Best regards to you.
>> > Ricky
>> >
>> >> Thanks
>> >> Alex
>> >> On 2014-03-07 10:29, Liuji (Jeremy) wrote:
>> >> > Hi, all
>> >> >
>> >> > Current openstack seems not support to snapshot instance with memory and
>> >> dev states.
>> >> > I searched the blueprint and found two relational blueprint like below.
>> >> > But these blueprint failed to get in the branch.
>> >> >
>> >> > [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
>> >> > [2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
>> >> >
>> >> > In the blueprint[1], there is a comment,"
>> >> > We discussed this pretty extensively on the mailing list and in a design
>> >> summit session.
>> >> > The consensus is that this is not a feature we would like to have in 
>> >> > nova.
>> >> --russellb "
>> >> > But I can't find the discuss mail about it. I hope to know why we think 
>> >> > so ?
>> >> > Without memory snapshot, we can't to provide the feature for user to 
>> >> > revert
>> >> a instance to a checkpoint.
>> >> >
>> >> > Anyone who knows the history can help me or give me a hint how to find 
>> >> > the
>> >> discuss mail?
>> >> >
>> >> > I am a newbie for openstack and I apologize if I am missing something 
>> >> > very
>> >> obvious.
>> >> >
>> >> >
>> >> > Thanks,
>> >> > Jeremy Liu
>> >> >
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >
>> >> >
>> >> >
>> >>
>> >>
>> >> __

Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Ben Nemec

On 2014-03-10 12:24, Chris Friesen wrote:

Hi,

I'm using havana and recent we ran into an issue with heat related to
character sets.

In heat/db/sqlalchemy/api.py in user_creds_get() we call
_decrypt() on an encrypted password stored in the database and then
try to convert the result to unicode.  Today we hit a case where this
errored out with the following message:

UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
invalid continuation byte

We're using postgres and currently all the databases are using
SQL_ASCII as the charset.

I see that in icehouse heat will complain if you're using mysql and
not using UTF-8.  There doesn't seem to be any checks for other
databases though.

It looks like devstack creates most databases as UTF-8 but uses latin1
for nova/nova_bm/nova_cell.  I assume this is because nova expects to
migrate the db to UTF-8 later.  Given that those migrations specify a
character set only for mysql, when using postgres should we explicitly
default to UTF-8 for everything?

Thanks,
Chris


We just had a discussion about this in #openstack-oslo too.  See the 
discussion starting at 2014-03-10T16:32:26 
http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log


While it seems Heat does require utf8 (or at least matching character 
sets) across all tables, I'm not sure the current solution is good.  It 
seems like we may want a migration to help with this for anyone who 
might already have mismatched tables.  There's a lot of overlap between 
that discussion and how to handle Postgres with this, I think.


I don't have a definite answer for any of this yet but I think it is 
something we need to figure out, so hopefully we can get some input from 
people who know more about the encoding requirements of the Heat and 
other projects' databases.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-10 Thread Edgar Magana
Eugene,

I have a few arguments why I believe this is not 100% inclusive
* Is the foundation involved in this process? How? What is the budget? Who
is responsible from the foundation side?
* If somebody already made travel arrangements, it won't be possible to make
changes at no cost.
* Staying extra days in a different city could impact anyone's budget
* As an OpenStack developer, I want to understand why the summit is not
enough for deciding the next steps for each project. If that is the case, I
would prefer to make changes to the organization of the summit instead of
creating mini-summits all around!
I could continue but I think these are good enough.

I can agree with your point about previous summits being distracting for
developers; this is why this time the OpenStack foundation is trying very
hard to allocate specific days for the conference and specific days for the
summit.
The point where I totally agree with you is that we SHOULD NOT have
sessions about work that will be done no matter what!  Those are just a waste
of good time that could be invested in very interesting discussions about
topics that are still not clear.
I would recommend that you express this opinion to Mark. He is the right guy
to decide which sessions will bring interesting discussions and which ones
will be just a declaration of intents.

Thanks,

Edgar

From:  Eugene Nikanorov 
Reply-To:  OpenStack List 
Date:  Monday, March 10, 2014 10:32 AM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

Hi Edgar,

I'm neutral to the suggestion of mini summit at this point.
Why do you think it will exclude developers?
If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city)
that would allow anyone who joins OS Summit to save on extra travelling.
The OS Summit itself is too distracting to have really productive discussions,
unless you're missing the sessions and spending the time discussing.
For instance, design sessions are basically only good for declarations of intent,
but not for real discussion of a complex topic at a meaningful level of detail.

What would be your suggestions to make this more inclusive?
I think the time and place is the key here - hence Atlanta and few days
prior OS summit.

Thanks,
Eugene.



On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana  wrote:
> Team,
> 
> I found that having a mini-summit with a very short notice means excluding
> a lot of developers of such an interesting topic for Neutron.
> The OpenStack summit is the opportunity for all developers to come
> together and discuss the next steps, there are many developers that CAN
> NOT afford another trip for a "special" summit. I am personally against
> that and I do support Mark's proposal of having all the conversation over
> IRC and mailing list.
> 
> Please, do not start excluding people that won't be able to attend another
> face-to-face meeting besides the summit. I believe that these are the
> little things that make an open source community weak if we do not control
> it.
> 
> Thanks,
> 
> Edgar
> 
> 
> On 3/6/14 9:51 PM, "Mark McClain"  wrote:
> 
>> >
>> >On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
>> >
>>> >> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
 >>> +1
 >>>
 >>> I think if we can have it before the Juno summit, we can take
 >>> concrete, well thought-out proposals to the community at the summit.
>>> >>
>>> >> Unless something has changed starting at the Hong Kong design summit
>>> >> (which unfortunately I was not able to attend), the design summits have
>>> >> always been a place to gather to *discuss* and *debate* proposed
>>> >> blueprints and design specs. It has never been about a gathering to
>>> >> rubber-stamp proposals that have already been hashed out in private
>>> >> somewhere else.
>> >
>> >You are correct that is the goal of the design summit.  While I do think
>> >it is wise to discuss the next steps with LBaaS at this point in time, I
>> >am not a proponent of in person mini-design summits.  Many contributors
>> >to LBaaS are distributed all over the global, and scheduling a mini
>> >summit with short notice will exclude valuable contributors to the team.
>> >I¹d prefer to see an open process with discussions on the mailing list
>> >and specially scheduled IRC meetings to discuss the ideas.
>> >
>> >mark
>> >
>> >
>> >___
>> >OpenStack-dev mailing list
>> >OpenStack-dev@lists.openstack.org
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-de

Re: [openstack-dev] testr help

2014-03-10 Thread Clark Boylan
On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter  wrote:
> Thanks Clark for this great write-up. However, I think the solution to the
> problem in question is richer commands and better output formatting, not
> discarding information.
>
>
> On 07/03/14 16:30, Clark Boylan wrote:
>>
>> But running tests in parallel introduces some fun problems. Like where
>> do you send logging and stdout output. If you send it to the console
>> it will be interleaved and essentially useless. The solution to this
>> problem (for which I am probably to blame) is to have each test
>> collect the logging, stdout, and stderr associated to that test and
>> attach it to that tests subunit reporting. This way you get all of the
>> output associated with a single test attached to that test and don't
>> have crazy interleaving that needs to be demuxed. The capturing of
>
>
> This is not really a problem unique to parallel test runners. Printing to
> the console is just not a great way to handle stdout/stderr in general
> because it messes up the output of the test runner, and nose does exactly
> the same thing as testr in collecting them - except that nose combines
> messages from the 3 sources and prints the output for human consumption,
> rather than in separate groups surrounded by lots of {{{random braces}}}.
>
Except nose can make them all the same file descriptor and let
everything multiplex together. Nose isn't demuxing arbitrary numbers
of file descriptors from arbitrary numbers of processes.
>
>> this data is toggleable in the test suite using environment variables
>> and is off by default so that when you are not using testr you don't
>> get this behavior [0]. However we seem to have neglected log capture
>> toggles.
>
>
> Oh wow, there is actually a way to get the stdout and stderr? Fantastic! Why
> on earth are these disabled?
>
See above, testr has to deal with multiple writers to stdout and
stderr, you really don't want them all going to the same place when
using testr (which is why stdout and stderr are captured when running
testr but not otherwise).
>
> Please, please, please don't turn off the logging too. That's the only tool
> left for debugging now that stdout goes into a black hole.
>
Logging goes into the same "black hole" today, I am suggesting that we
make this toggleable like we have made stdout and stderr capturing
toggleable. FWIW this isn't a black hole it is all captured on disk
and you can refer back to it at any time (the UI around doing this
could definitely be better though).
>
>> Now onto indirectly answering some of the questions you have. If a
>> single unittest is producing thousands of lines of log output that is
>> probably a bug. The test scope is too large or the logging is too
>> verbose or both. To work around this we probably need to make the
>> fakeLogger fixture toggleable with configurable log level. Then you
>> could do something like `OS_LOG_LEVEL=error tox` and avoid getting all
>> of the debug level logs.
>
>
> Fewer logs is hardly ever what you want when debugging a unit test.
>
I agree, but in general Openstack does a terrible job at logging
particularly when you look at the DEBUG level. There is too much noise
and not enough consistency. Being able to set the level you want to
capture at or turn off capturing is a good thing.
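
For illustration, a rough sketch of what a level-configurable capture could
look like (the env var names and the wiring into a base test class are
illustrative assumptions, not what the projects' test bases do today):

import logging
import os

import fixtures
import testtools


class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        # Capture log output per-test so parallel runs don't interleave,
        # but let the developer pick the level (or disable capture).
        if os.environ.get('OS_LOG_CAPTURE', '1') == '1':
            level_name = os.environ.get('OS_LOG_LEVEL', 'DEBUG')
            level = getattr(logging, level_name.upper(), logging.DEBUG)
            self.useFixture(fixtures.FakeLogger(level=level))

With something like that in place, `OS_LOG_LEVEL=ERROR tox` (or
OS_LOG_CAPTURE=0) would cut the noise without touching individual tests.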
>
> I think what John is looking for is a report at the end of each test run
> that just lists the tests that failed instead of all the details (like
> `testr failing --list`), or perhaps the complete list of tests with the
> coloured pass/fail like IIRC nose does. Since testr's killer feature is to
> helpfully store all of the results for later, maybe this output is all you
> need in the first instance (along with a message telling the user what
> command to run to see the full output, of course), or at least that could be
> an option.
>
This is a good idea. Could be done by having testr output info without
any subunit attachments.
>
>> For examining test results you can `testr load $SUBUNIT_LOG_FILE` then
>> run commands like `testr last`, `testr failing`, `testr slowest`, and
>> `testr stats` against that run (if you want details on the last run
>> you don't need an explicit load). There are also a bunch of tools that
>> come with python-subunit like subunit-filter, subunit2pyunit,
>> subunit-stats, and so on that you can use to do additional processing.
>
>
> It sounds like what John wants to do is pass a filter to something like
> `testr failing` or `testr last` to only report a subset of the results, in
> much the same way as it's possible to pass a filter to `testr` to only run a
> subset of the tests.
>
> BTW, since I'm on the subject, testr would be a lot more
> confidence-inspiring if running `testr failing` immediately after running
> `testr` reported the same number of failures, or indeed if running `testr`
> twice in a row was guaranteed to report the same number of failures
> (assuming that the tests themselves are deterministic). I can't imagine 

Re: [openstack-dev] testr help

2014-03-10 Thread Clark Boylan
On Mon, Mar 10, 2014 at 12:21 PM, John Dennis  wrote:
> On 03/10/2014 02:31 PM, Zane Bitter wrote:
>> Fewer logs is hardly ever what you want when debugging a unit test.
>>
>> I think what John is looking for is a report at the end of each test run
>> that just lists the tests that failed instead of all the details (like
>> `testr failing --list`), or perhaps the complete list of tests with the
>> coloured pass/fail like IIRC nose does. Since testr's killer feature is
>> to helpfully store all of the results for later, maybe this output is
>> all you need in the first instance (along with a message telling the
>> user what command to run to see the full output, of course), or at least
>> that could be an option.
>
>> It sounds like what John wants to do is pass a filter to something like
>> `testr failing` or `testr last` to only report a subset of the results,
>> in much the same way as it's possible to pass a filter to `testr` to
>> only run a subset of the tests.
>
> Common vocabulary is essential to discuss this. A test result as emitted
> by subunit is a metadata collection indexed by keys. To get the list of
> failing tests one iterates over the set of results and looks for the
> absence of the "successful" key in the result. That's the set of test
> failures, as such it's a filter on the test results.
>
> Therefore I see filtering as the act of producing a subset of the test
> results (i.e, only those failing, or only those whose names match a
> regexp, or the intersection of those). That is a filtered result set.
> The filtering is performed by examining the key/value in each result
> metadata to yield a subset of the results.
>
> Next you have to display that filtered result set. When each result is
> displayed one should be able to specify which pieces of metadata get
> displayed. In my mind that's not filtering, it's a display option. One
> common display option would be to emit only the test name. Another might
> be to display the test name and the captured log data. As it stands now
> it seems to display every piece of metadata in the result which is what
> is producing the excessive verbosity.
>
> Hopefully I'm making sense, yes/no?
>
Makes sense to me. We should poke lifeless and get him to chime in.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QA] Neutron API tests requiring a last push to merge

2014-03-10 Thread Miguel Lavalle
Dear Tempest Core Reviewers,

We are making very good progress merging Neutron API tests into Tempest.
Over the past 2 weeks you have helped us merge 9 tests. Thank you very
much :-)!

As of today (3/10) we have the following tests that require just one more
+2 to merge. Please help us to move them forward:

https://review.openstack.org/#/c/71251
https://review.openstack.org/#/c/64271
https://review.openstack.org/#/c/66454
https://review.openstack.org/#/c/61118
https://review.openstack.org/#/c/66796
https://review.openstack.org/#/c/68626

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Keystone] Where are the strings in the keystone API's defined?

2014-03-10 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2014-03-10 12:43:52 -0700:
> For posterity, I assume this thread is related to:
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/028125.html
> 
> Anyway, keystone itself has issued 36-char tenant ID's in the past (diablo,
> I believe, if not essex as well). Something like this:
> 
>   $ python -c "import uuid; s = str(uuid.uuid4()); print s; print len(s);"
>   1b54024b-7c62-494e-9a34-6138e04e3dc7
>   36
> 
> Migrations to essex also pulled in existing tenant ID's from other
> pre-existing data sources. Most importantly, keystone is able to read
> tenants from backends such as LDAP, and you're welcome to write your own
> driver and return whatever data you want as an ID.
> 
> From an API perspective, keystone assumes that tenant ID's are URL-safe and
> globally unique, but does nothing to enforce either of those. Perhaps
> that's somewhere keystone could start (emit a warning if that's not the
> case?), before other services make stricter assumptions about the
> constraints around tenant ID's.

So, are you saying varchar(256) may be _too small_ ?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-10 Thread Steven Dake

On 03/10/2014 11:51 AM, Randall Burt wrote:

On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov 

  wrote:


Hi,

Thomas and Zane initiated a good discussion about Murano DSL and TOSCA 
initiatives in Heat. I think it will be beneficial for both teams to contribute 
to TOSCA.

Wasn't TOSCA developing a "simplified" version in order to converge with HOT?


While Mirantis is working on the organizational part with OASIS, I would like to 
understand the current view on the relationship between TOSCA and HOT.
It looks like TOSCA can cover all aspects: the declarative components (HOT 
templates) as well as the imperative workflows which can be covered by Murano. 
What do you think about that?

Aren't workflows covered by Mistral? How would this be different than including 
mistral support in Heat?

Randall,

There is a difference between adding Mistral resources in Heat, vs 
adding Mistral (or Murano) workflow logic to the Orchestration program.


I believe what is proposed is to add the Murano workflow to the 
Orchestration program, but this needs to be weighed against Mistral as a 
possible first source for the population of the repository.  In a 
perfect world, the workflow communities of Mistral and Murano would 
merge in some way including the code base.


Regards
-steve


I think the TOSCA format can be used as a description of Applications, and 
heat-translator can actually convert TOSCA descriptions to both HOT and Murano 
files which can then be used for actual Application deployment. Both Heat and 
Murano workflows can coexist in the Orchestration program and cover both 
declarative template and imperative workflow use cases.

--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Keystone] Where are the strings in the keystone API's defined?

2014-03-10 Thread Dolph Mathews
For posterity, I assume this thread is related to:


http://lists.openstack.org/pipermail/openstack-dev/2014-February/028125.html

Anyway, keystone itself has issued 36-char tenant ID's in the past (diablo,
I believe, if not essex as well). Something like this:

  $ python -c "import uuid; s = str(uuid.uuid4()); print s; print len(s);"
  1b54024b-7c62-494e-9a34-6138e04e3dc7
  36

Migrations to essex also pulled in existing tenant ID's from other
pre-existing data sources. Most importantly, keystone is able to read
tenants from backends such as LDAP, and you're welcome to write your own
driver and return whatever data you want as an ID.

>From an API perspective, keystone assumes that tenant ID's are URL-safe and
globally unique, but does nothing to enforce either of those. Perhaps
that's somewhere keystone could start (emit a warning if that's not the
case?), before other services make stricter assumptions about the
constraints around tenant ID's.

Note: all of the above applies equally to user ID's, as well!
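
To illustrate the kind of warning I mean, something like the sketch below
(the helper and the particular definition of "URL-safe" are mine for the
example, not anything keystone does today):

  import logging
  import re

  LOG = logging.getLogger(__name__)

  # Unreserved URI characters (RFC 3986); anything outside this set is at
  # least suspicious for an ID that other services will embed in URLs.
  _URL_SAFE = re.compile(r'^[A-Za-z0-9._~-]+$')

  def warn_if_unsafe_id(kind, value):
      """Log a warning when a driver hands back a non-URL-safe ID."""
      if not _URL_SAFE.match(value):
          LOG.warning('%s ID %r is not URL-safe; other services may make '
                      'stricter assumptions about its format', kind, value)

That would at least surface the problem in the logs without breaking any
existing deployment.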


On Mon, Mar 10, 2014 at 12:53 PM, Clint Byrum  wrote:

> While looking into this bug:
>
> https://bugs.launchpad.net/heat/+bug/1290274
>
> I was trying to find out why the original developers felt that tenant_ids
> should be a 'varchar(256)'. In addition to moving from a regular varchar
> into a text field (varchar(256) in MySQL will become a tinytext, which
> causes all sorts of performance issues), this just seems a _lot_ bigger
> than anything I've ever seen shown as a tenant/project id. I'd expect at
> worst varchar(64). Also it is probably safe to store as varbinary since
> we don't ever sort on it or store case-insensitive equivalents to it.
>
> So I was wondering where users can go to find out what to expect in
> that field. I dug through the API documentation for Keystone and I see
> nothing that really would constitute a format or even length. But maybe
> I'm just not looking in the right place. Thanks!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-10 Thread Dan Smith
> However, when attempting to boot an instance, the Nova network service
> fails to retrieve network information from the controller. Adding the
> database keys resolves the problem. I'm using
> the 2014.1~b3-0ubuntu1~cloud0 packages on Ubuntu 12.04.

Can you file a bug with details from the logs?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2014-03-10 Thread Joe Gordon
On Mon, Mar 10, 2014 at 12:12 PM, Boris Pavlovic  wrote:
> Joe,
>
> Fully agree. We should just make a blueprint "Get rid of soft deletion". So
> we will get much better DB performance + cleaner code, and avoid
> things like shadow tables and DB purge scripts.

++

>
> Probably we should start some other thread to cover this topic?

++

>
>
> Best regards,
> Boris Pavlovic
>
>
> On Mon, Mar 10, 2014 at 10:57 PM, Joe Gordon  wrote:
>>
>> On Mon, Mar 10, 2014 at 7:11 AM, Matt Riedemann
>>  wrote:
>> >
>> >
>> > On 3/9/2014 9:18 PM, Jay Pipes wrote:
>> >>
>> >> On Mon, 2014-03-10 at 10:05 +0800, ChangBo Guo wrote:
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> 2014-03-10 4:47 GMT+08:00 Jay Pipes :
>> >>>
>> >>>
>> >>>  > 3. This would make the instances and shadow_instances
>> >>> tables
>> >>>  have
>> >>>  > different schemas, i.e. instances.uuid would be
>> >>>  nullable=False in
>> >>>  > instances but nullable=True in shadow_instances.  Maybe
>> >>> this
>> >>>  doesn't matter.
>> >>>
>> >>>
>> >>>  No, I don't think this matters much, to be honest. I'm not
>> >>>  entirely sure
>> >>>  what the long-term purpose of the shadow tables are in Nova
>> >>> --
>> >>>  perhaps
>> >>>  someone could clue me in to whether the plan is to keep them
>> >>>  around?
>> >>>
>> >>>
>> >>> As I know the tables shadow_*  are used  by command ' nova-manage db
>> >>> archive_deleted_rows' , which moves  records with "deleted=True" to
>> >>> table shadow_* . That means these tables are used by other  process,
>> >>> So, I think we need other tables to store the old records in your
>> >>> migration.
>> >>
>> >>
>> >> Yeah, that's what I understood the shadow tables were used for, I just
>> >> didn't know what the long-term future of these tables was... curious if
>> >> there's been any discussion about that.
>> >>
>> >> Best,
>> >> -jay
>> >>
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> > I think Joe Gordon was working on something in the hopes of eventually
>> > killing the shadow tables but I can't remember exactly what that was
>> > now.
>>
>> I haven't been working on this but I do have a plan.
>>
>> Originally we couldn't hard delete anything in the nova DB, because people
>> wanted to keep the records around for, well, record keeping. The long-term
>> solution is to make nova support (although not default to) hard
>> delete. This means we need another place to store these records
>> (ceilometer). Until then, we have shadow tables as a short-term
>> solution if you want to keep records around but don't want them in
>> your main nova DB.
>>
>> On a related note, nothing in nova should actually be using soft
>> deleted data or shadow tables; any cases that do should be treated as bugs.
>>
>>
>>
>> >
>> > --
>> >
>> > Thanks,
>> >
>> > Matt Riedemann
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Conductor support for networking in Icehouse

2014-03-10 Thread Matt Kassawara
Hi,

I'm updating the installation guide for Icehouse. Based on the following
blueprint, I removed the database configuration keys from nova.conf on the
compute node in my test environment.

https://blueprints.launchpad.net/nova/+spec/nova-network-objects

However, when attempting to boot an instance, the Nova network service
fails to retrieve network information from the controller. Adding the
database keys resolves the problem. I'm using the 2014.1~b3-0ubuntu1~cloud0
packages on Ubuntu 12.04.

Matt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-10 Thread Georgy Okrokvertskhov
Hi Randall,

I have only seen the original XML-based TOSCA 1.0 standard; I heard that there is
a new YAML-based version, but I have not seen it.

The original TOSCA standard covered all aspects with TOSCA topology
templates (Heat templates) and TOSCA Plans (workflows). I hope they will
still use this approach. Mirantis is going to join OASIS and participate in
TOSCA standard development too.

As for Mistral, it will have its own format for defining tasks and
flows. Murano can perfectly coexist with Mistral, as it provides
application-specific workflows. It is possible to define different actions
for an Application, like Application.backup, and this action can be called by
Mistral to run a backup on a schedule.


Thanks,
Georgy




On Mon, Mar 10, 2014 at 11:51 AM, Randall Burt wrote:

>
> On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov <
> gokrokvertsk...@mirantis.com>
>  wrote:
>
> > Hi,
> >
> > Thomas and Zane initiated a good discussion about Murano DSL and TOSCA
> > initiatives in Heat. I think it will be beneficial for both teams to
> > contribute to TOSCA.
>
> Wasn't TOSCA developing a "simplified" version in order to converge with
> HOT?
>
> > While Mirantis is working on the organizational part for OASIS, I would like
> > to understand the current view on the relationship between TOSCA and HOT.
> > It looks like TOSCA can cover both the declarative components of HOT
> > templates and the imperative workflows which can be covered by Murano. What do
> > you think about that?
>
> Aren't workflows covered by Mistral? How would this be different than
> including mistral support in Heat?
>
> > I think the TOSCA format can be used as a description of Applications and
> > heat-translator can actually convert TOSCA descriptions to both HOT and
> > Murano files which can then be used for actual Application deployment. Both
> > Heat and Murano workflows can coexist in the Orchestration program and cover
> > both declarative template and imperative workflow use cases.
> >
> > --
> > Georgy Okrokvertskhov
> > Architect,
> > OpenStack Platform Products,
> > Mirantis
> > http://www.mirantis.com
> > Tel. +1 650 963 9828
> > Mob. +1 650 996 3284
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Prefix for -dev mailing list [savanna]

2014-03-10 Thread Sergey Lukjanov
NOTE.

We have a mailman topic for Sahara that accepts both sahara and savanna,
so people interested only in receiving messages with "[sahara]" and/or
"[savanna]" in the subject line can subscribe to the topic only.

How to use Mailman topics:
http://www.gnu.org/software/mailman/mailman-member/node31.html

http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev/YOUR-EMAIL
to edit your topic subscriptions.

Thanks to Stef for updating the mailman topic.

On Sun, Mar 9, 2014 at 3:05 PM, Sergey Lukjanov  wrote:
> Hey folks,
>
> please, note, that we should prepend mail subject with "[sahara]" and
> append "[savanna]" to the end for transition period.
>
> Thanks.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-10 Thread John Dennis
On 03/10/2014 02:31 PM, Zane Bitter wrote:
> Fewer logs is hardly ever what you want when debugging a unit test.
> 
> I think what John is looking for is a report at the end of each test run 
> that just lists the tests that failed instead of all the details (like 
> `testr failing --list`), or perhaps the complete list of tests with the 
> coloured pass/fail like IIRC nose does. Since testr's killer feature is 
> to helpfully store all of the results for later, maybe this output is 
> all you need in the first instance (along with a message telling the 
> user what command to run to see the full output, of course), or at least 
> that could be an option.

> It sounds like what John wants to do is pass a filter to something like 
> `testr failing` or `testr last` to only report a subset of the results, 
> in much the same way as it's possible to pass a filter to `testr` to 
> only run a subset of the tests.

Common vocabulary is essential to discuss this. A test result as emitted
by subunit is a metadata collection indexed by keys. To get the list of
failing tests, one iterates over the set of results and looks for the
absence of the "successful" key in the result. That's the set of test
failures; as such, it's a filter on the test results.

Therefore I see filtering as the act of producing a subset of the test
results (i.e., only those failing, or only those whose names match a
regexp, or the intersection of those). That is a filtered result set.
The filtering is performed by examining the key/value pairs in each result's
metadata to yield a subset of the results.

Next you have to display that filtered result set. When each result is
displayed one should be able to specify which pieces of metadata get
displayed. In my mind that's not filtering, it's a display option. One
common display option would be to emit only the test name. Another might
be to display the test name and the captured log data. As it stands now
it seems to display every piece of metadata in the result which is what
is producing the excessive verbosity.
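
A toy illustration of that split, treating each result as nothing more than
a dict of metadata (purely a sketch of the concept, not the real subunit
API):

  # Each result is a metadata collection indexed by keys; in this toy model
  # a missing "successful" key marks a failure.
  results = [
      {'id': 'test_ok', 'successful': True, 'log': '...'},
      {'id': 'test_broken', 'log': 'thousands of lines of captured log data'},
  ]

  # Filtering: produce a subset of the results.
  failures = [r for r in results if 'successful' not in r]

  # Display option: choose which pieces of metadata to emit for each result.
  for result in failures:
      print(result['id'])           # e.g. emit only the test name
      # print(result.get('log'))    # or also show the captured log data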

Hopefully I'm making sense, yes/no?

-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Jay Pipes
On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> We have very strong interest in pursing this feature in the VMware
> driver as well. I would like to see the revert instance feature
> implemented at least.
> 
> When I used to work in multi-discipline roles involving operations it
> would be common for us to snapshot a vm, run through an upgrade
> process, then revert if something did not upgrade smoothly. This
> ability alone can be exceedingly valuable in long-lived virtual
> machines.
> 
> I also have some comments from parties interested in refactoring how
> the VMware drivers handle snapshots but I'm not certain how much that
> plays into this "live snapshot" discussion.

I think the reason that there isn't much interest in doing this kind of
thing is because the worldview that VMs are pets is antithetical to the
worldview that VMs are cattle, and Nova tends to favor the latter (where
DRS/DPM on vSphere tends to favor the former).

There's nothing about your scenario above of being able to "revert" an
instance to a particular state that isn't possible with today's Nova.
Snapshotting an instance, doing an upgrade of software on the instance,
and then restoring from the snapshot if something went wrong (reverting)
is already fully possible to do with the regular Nova snapshot and
restore operations. The only difference is that the "live-snapshot"
stuff would include saving the memory view of a VM in addition to its
disk state. And that, at least in my opinion, is only needed when you
are treating VMs like pets and not cattle.

Best,
-jay

> On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky)  wrote:
> >> -Original Message-
> >> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
> >> Sent: Sunday, March 09, 2014 10:04 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [nova] a question about instance snapshot
> >>
> >> Hi, Jeremy, the discussion at here
> >> http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
> >>
> >
> > I have a great interest in the topic too.
> > I read the link you provided, and there is a little confusion for me.
> > I agree with the security consideration in the discussion and memory 
> > snapshot can't be used for the cloning instance easily.
> >
> > But I think it's safe for the using for Instance revert.
> > And revert the instance to a checkpoint is valuable for the user.
> > Why we didn't use it for instance revert in the first step?
> >
> > Best regards to you.
> > Ricky
> >
> >> Thanks
> >> Alex
> >> On 2014-03-07 10:29, Liuji (Jeremy) wrote:
> >> > Hi, all
> >> >
> >> > Current openstack seems not support to snapshot instance with memory and
> >> dev states.
> >> > I searched the blueprint and found two relational blueprint like below.
> >> > But these blueprint failed to get in the branch.
> >> >
> >> > [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
> >> > [2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
> >> >
> >> > In the blueprint[1], there is a comment,"
> >> > We discussed this pretty extensively on the mailing list and in a design
> >> summit session.
> >> > The consensus is that this is not a feature we would like to have in 
> >> > nova.
> >> --russellb "
> >> > But I can't find the discuss mail about it. I hope to know why we think 
> >> > so ?
> >> > Without memory snapshot, we can't to provide the feature for user to 
> >> > revert
> >> a instance to a checkpoint.
> >> >
> >> > Anyone who knows the history can help me or give me a hint how to find 
> >> > the
> >> discuss mail?
> >> >
> >> > I am a newbie for openstack and I apologize if I am missing something 
> >> > very
> >> obvious.
> >> >
> >> >
> >> > Thanks,
> >> > Jeremy Liu
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >> >
> >> >
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Rename Hortonworks hosted resources [savanna]

2014-03-10 Thread Sergey Lukjanov
Hey Erik, John,

Due to the project renaming, could you, please, rename all resources
hosted on Hortonworks CDN?

Thanks.

P.S. Looks like we have such urls in main savanna repo and dib
elements at least.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2014-03-10 Thread Boris Pavlovic
Joe,

Fully agree. We should just make a blueprint "Get rid of soft deletion". So
we will get much better DB performance + cleaner code, and avoid
things like shadow tables and DB purge scripts.

Probably we should start some other thread to cover this topic?


Best regards,
Boris Pavlovic


On Mon, Mar 10, 2014 at 10:57 PM, Joe Gordon  wrote:

> On Mon, Mar 10, 2014 at 7:11 AM, Matt Riedemann
>  wrote:
> >
> >
> > On 3/9/2014 9:18 PM, Jay Pipes wrote:
> >>
> >> On Mon, 2014-03-10 at 10:05 +0800, ChangBo Guo wrote:
> >>>
> >>>
> >>>
> >>>
> >>> 2014-03-10 4:47 GMT+08:00 Jay Pipes :
> >>>
> >>>
> >>>  > 3. This would make the instances and shadow_instances tables
> >>>  have
> >>>  > different schemas, i.e. instances.uuid would be
> >>>  nullable=False in
> >>>  > instances but nullable=True in shadow_instances.  Maybe this
> >>>  doesn't matter.
> >>>
> >>>
> >>>  No, I don't think this matters much, to be honest. I'm not
> >>>  entirely sure
> >>>  what the long-term purpose of the shadow tables are in Nova --
> >>>  perhaps
> >>>  someone could clue me in to whether the plan is to keep them
> >>>  around?
> >>>
> >>>
> >>> As I know the tables shadow_*  are used  by command ' nova-manage db
> >>> archive_deleted_rows' , which moves  records with "deleted=True" to
> >>> table shadow_* . That means these tables are used by other  process,
> >>> So, I think we need other tables to store the old records in your
> >>> migration.
> >>
> >>
> >> Yeah, that's what I understood the shadow tables were used for, I just
> >> didn't know what the long-term future of these tables was... curious if
> >> there's been any discussion about that.
> >>
> >> Best,
> >> -jay
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > I think Joe Gordon was working on something in the hopes of eventually
> > killing the shadow tables but I can't remember exactly what that was now.
>
> I haven't been working on this but I do have a plan.
>
> Originally we couldn't hard delete anything in the nova DB, because people
> wanted to keep the records around for, well, record keeping. The long-term
> solution is to make nova support (although not default to) hard
> delete. This means we need another place to store these records
> (ceilometer). Until then, we have shadow tables as a short-term
> solution if you want to keep records around but don't want them in
> your main nova DB.
>
> On a related note, nothing in nova should actually be using soft
> deleted data or shadow tables; any cases that do should be treated as bugs.
>
>
>
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][db] Thoughts on making instances.uuid non-nullable?

2014-03-10 Thread Joe Gordon
On Mon, Mar 10, 2014 at 7:11 AM, Matt Riedemann
 wrote:
>
>
> On 3/9/2014 9:18 PM, Jay Pipes wrote:
>>
>> On Mon, 2014-03-10 at 10:05 +0800, ChangBo Guo wrote:
>>>
>>>
>>>
>>>
>>> 2014-03-10 4:47 GMT+08:00 Jay Pipes :
>>>
>>>
>>>  > 3. This would make the instances and shadow_instances tables
>>>  have
>>>  > different schemas, i.e. instances.uuid would be
>>>  nullable=False in
>>>  > instances but nullable=True in shadow_instances.  Maybe this
>>>  doesn't matter.
>>>
>>>
>>>  No, I don't think this matters much, to be honest. I'm not
>>>  entirely sure
>>>  what the long-term purpose of the shadow tables are in Nova --
>>>  perhaps
>>>  someone could clue me in to whether the plan is to keep them
>>>  around?
>>>
>>>
>>> As I know the tables shadow_*  are used  by command ' nova-manage db
>>> archive_deleted_rows' , which moves  records with "deleted=True" to
>>> table shadow_* . That means these tables are used by other  process,
>>> So, I think we need other tables to store the old records in your
>>> migration.
>>
>>
>> Yeah, that's what I understood the shadow tables were used for, I just
>> didn't know what the long-term future of these tables was... curious if
>> there's been any discussion about that.
>>
>> Best,
>> -jay
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> I think Joe Gordon was working on something in the hopes of eventually
> killing the shadow tables but I can't remember exactly what that was now.

I haven't been working on this but I do have a plan.

Originally we couldn't hard delete anything in the nova DB, because people
wanted to keep the records around for, well, record keeping. The long-term
solution is to make nova support (although not default to) hard
delete. This means we need another place to store these records
(ceilometer). Until then, we have shadow tables as a short-term
solution if you want to keep records around but don't want them in
your main nova DB.

On a related note, nothing in nova should actually be using soft
deleted data or shadow tables; any cases that do should be treated as bugs.



>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [bug?] live migration fails with boot-from-volume

2014-03-10 Thread Chris Friesen

On 03/08/2014 02:23 AM, ChangBo Guo wrote:

Are you using the libvirt driver?
As I remember, the way to check whether compute nodes share storage
is to create a temporary file from the source node, then check for the file
from the dest node, by accessing the file system at the operating system level.
And boot-from-volume is just a way to boot an instance; it does not imply
shared storage.
For non-shared storage, have you tried block migration with the
--block-migration option?


Using block migration does seem to work.  However, it passes 
VIR_MIGRATE_NON_SHARED_INC to libvirt in the migration flags, which 
doesn't seem ideal for boot-from-volume.   I assume it starts to do an 
incremental copy but then decides that both are identical?


This raises an interesting question.  Why do we even need the user to 
explicitly specify --block-migration?  It seems like we could just test 
whether the instance storage is shared between the two compute nodes and 
set the appropriate flags automatically.
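
Roughly the probe ChangBo describes above, the way I picture it (a sketch
only; the path and helper names here are made up, not the actual driver
code):

  import os
  import uuid

  INSTANCES_PATH = '/var/lib/nova/instances'  # assumed instances directory

  def create_shared_storage_probe():
      """Run on the source compute node: drop a uniquely named file."""
      probe = os.path.join(INSTANCES_PATH, 'probe-%s' % uuid.uuid4().hex)
      open(probe, 'w').close()
      return os.path.basename(probe)

  def shared_storage_detected(probe_name):
      """Run on the destination node: shared iff the probe is visible."""
      return os.path.exists(os.path.join(INSTANCES_PATH, probe_name))

If the destination sees the file, skip the block-migration flags; if not,
set them.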


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-10 Thread Randall Burt

On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov wrote:

> Hi,
> 
> Thomas and Zane initiated a good discussion about Murano DSL and TOSCA 
> initiatives in Heat. I think it will be beneficial for both teams to contribute 
> to TOSCA.

Wasn't TOSCA developing a "simplified" version in order to converge with HOT?

> While Mirantis is working on the organizational part for OASIS, I would like to 
> understand the current view on the relationship between TOSCA and HOT. 
> It looks like TOSCA can cover both the declarative components of HOT 
> templates and the imperative workflows which can be covered by Murano. What do 
> you think about that?

Aren't workflows covered by Mistral? How would this be different than including 
mistral support in Heat?

> I think the TOSCA format can be used as a description of Applications and 
> heat-translator can actually convert TOSCA descriptions to both HOT and 
> Murano files which can then be used for actual Application deployment. Both 
> Heat and Murano workflows can coexist in the Orchestration program and cover both 
> declarative template and imperative workflow use cases.
> 
> -- 
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-10 Thread Joe Gordon
There is another thread on the ML discussing this:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/029278.html

On Mon, Mar 10, 2014 at 1:06 AM, Yuzhou (C)  wrote:
> Hi Joe,
>
>
>
> If the VM is hacked or compromised,  the software solution inside of the VM
> maybe fail.
>
>
>
>  In fact, one of main use cases of non-persist disk is nonpersistent
> VDI.  There are three advantages:
>
> 1.  Image manageability, Since nonpersistent desktops are built from a
> master image,
>
> it's easier for administrators to patch and update the image,
>
> back it up quickly and deploy company-wide applications to all end users.
>
> 2.  Greater security, Users can't alter desktop settings or install
> their own applications,
>
> making the image more secure.
>
> 3. Less storage.
>
>
>
> The following two articles Maybe help you understand the usage of
> non-persisent disk:
>
>
>
> http://cormachogan.com/2013/04/16/what-are-dependent-independent-disks-persistent-and-non-persisent-modes/
>
>
>
> http://searchvirtualdesktop.techtarget.com/feature/Understanding-nonpersistent-vs-persistent-VDI
>
>
>
>
>
>
>
>
>
>
>
> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> Sent: Saturday, March 08, 2014 4:40 AM
>
>
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> stopping VM, data will be rollback automatically), do you think we shoud
> introduce this feature?
>
>
>
>
>
>
>
> On Fri, Mar 7, 2014 at 1:26 AM, Qin Zhao  wrote:
>
> Hi Joe,
>
> Maybe my example is very rare. However, I think a new type of 'in-place'
> snapshot will have other advantages. For instance, the hypervisor can
> support to save memory content in snapshot file, so that user can revert his
> VM to running state. In this way, the user do not need to start each
> application again. Every thing is there. User can continue his work very
> easily. If the user spawn and boot a new VM, he will need to take a lot of
> time to resume his work. Does that make sense?
>
>
>
> I am not sure I follow. I think the use case you have brought up can be
> solved inside of the VM with something like http://unionfs.filesystems.org/
> or a filesystem that supports snapshotting.
>
>
>
>
>
>
>
> On Fri, Mar 7, 2014 at 2:20 PM, Joe Gordon  wrote:
>
> On Wed, Mar 5, 2014 at 11:45 AM, Qin Zhao  wrote:
>> Hi Joe,
>
>> For example, I used to use a private cloud system, which will calculate
>> charge bi-weekly. and it charging formula looks like "Total_charge =
>> Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
>> Volume_number*C4".  Those Instance/Image/Volume number are the number of
>> those objects that user created within these two weeks. And it also has
>> quota to limit total image size and total volume size. That formula is not
>
>> very exact, but you can see that it regards each of my 'create' operation
>> ass
>
>> a 'ticket', and will charge all those tickets, plus the instance duration
>
> Charging for creating a VM creation is not very cloud like.  Cloud
> instances should be treated as ephemeral and something that you can
> throw away and recreate at any time.  Additionally cloud should charge
> on resources used (instance CPU hour, network load etc), and not API
> calls (at least in any meaningful amount).
>
>
>> fee. In order to reduce the expense of my department, I am asked not to
>> create instance very frequently, and not to create too many images and
>> volume. The image quota is not very big. And I would never be permitted to
>> exceed the quota, since it request additional dollars.
>>
>>
>> On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon  wrote:
>>>
>>> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
>>> > Hi Joe,
>>> > If we assume the user is willing to create a new instance, the workflow
>>> > you
>>> > are saying is exactly correct. However, what I am assuming is that the
>>> > user
>>> > is NOT willing to create a new instance. If Nova can revert the
>>> > existing
>>> > instance, instead of creating a new one, it will become the alternative
>>> > way
>>> > utilized by those users who are not allowed to create a new instance.
>>> > Both paths lead to the target. I think we can not assume all the people
>>> > should walk through path one and should not walk through path two.
>>> > Maybe
>>> > creating new instance or adjusting the quota is very easy in your point
>>> > of
>>> > view. However, the real use case is often limited by business process.
>>> > So I
>>> > think we may need to consider that some users can not or are not
>>> > allowed
>>> > to
>>> > creating the new instance under specific circumstances.
>>> >
>>>
>>> What sort of circumstances would prevent someone from deleting and
>>> recreating an instance?
>>>
>>> >
>>> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon 
>>> > wrote:
>>> >>
>>> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
>>> >> > Hi Joe, my meaning is that cloud users may not hope to create n

Re: [openstack-dev] [Nova client] gate-python-novaclient-pypy

2014-03-10 Thread Jeremy Stanley
On 2014-03-10 02:37:02 -0700 (-0700), Gary Kotton wrote:
> The client gate seems to be broken with the following error:
[...]
> 2014-03-10 08:19:39.568 | error: option 
> --single-version-externally-managed not recognized
> 
> Any ideas?

I originally saw this setuptools bootstrapping issue when I first
tried to move PyPy-based unit tests from our older static workers to
single-use systems managed by nodepool. It was happening under
setuptools 2.1 but then I was unable to reproduce it once setuptools
2.2 came out so went ahead with moving the jobs. This issue seems to
be recurring and started (from what I can see in test logs)
coincident with the release of setuptools 3.1 on Saturday. Monty
originally proposed an improvement to the way we bootstrap
setuptools on our job workers which ought to help with this, so I'll
see whether it's in shape and still resolves the issue. If so, I'll
make sure we prioritize merging it ASAP to get client gating
unwedged and follow up to this thread with relevant updates.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-10 Thread Eugene Nikanorov
Hi Edgar,

I'm neutral on the suggestion of a mini summit at this point.
Why do you think it will exclude developers?
If we keep it 1-3 days prior to the OS Summit in Atlanta (i.e. in the same
city), that would allow anyone who joins the OS Summit to save on extra
travelling.
The OS Summit itself is too distracting for really productive discussions,
unless you skip the sessions and spend the time discussing instead.
For instance, design sessions are basically only good for declarations of
intent, not for real discussion of a complex topic at a meaningful
level of detail.

What would be your suggestions to make this more inclusive?
I think the time and place are the key here, hence Atlanta and a few days
prior to the OS summit.

Thanks,
Eugene.



On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana  wrote:

> Team,
>
> I found that having a mini-summit on very short notice means excluding
> a lot of developers from such an interesting topic for Neutron.
> The OpenStack summit is the opportunity for all developers to come
> together and discuss the next steps, there are many developers that CAN
> NOT afford another trip for a "special" summit. I am personally against
> that and I do support Mark's proposal of having all the conversation over
> IRC and mailing list.
>
> Please, do not start excluding people that won't be able to attend another
> face-to-face meeting besides the summit. I believe that these are the
> little things that make an open source community weak if we do not control
> it.
>
> Thanks,
>
> Edgar
>
>
> On 3/6/14 9:51 PM, "Mark McClain"  wrote:
>
> >
> >On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
> >
> >> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
> >>> +1
> >>>
> >>> I think if we can have it before the Juno summit, we can take
> >>> concrete, well thought-out proposals to the community at the summit.
> >>
> >> Unless something has changed starting at the Hong Kong design summit
> >> (which unfortunately I was not able to attend), the design summits have
> >> always been a place to gather to *discuss* and *debate* proposed
> >> blueprints and design specs. It has never been about a gathering to
> >> rubber-stamp proposals that have already been hashed out in private
> >> somewhere else.
> >
> >You are correct that is the goal of the design summit.  While I do think
> >it is wise to discuss the next steps with LBaaS at this point in time, I
> >am not a proponent of in person mini-design summits.  Many contributors
>to LBaaS are distributed all over the globe, and scheduling a mini
>summit with short notice will exclude valuable contributors to the team.
>I'd prefer to see an open process with discussions on the mailing list
> >and specially scheduled IRC meetings to discuss the ideas.
> >
> >mark
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Nova] Status of the nova "ironic" driver and CI

2014-03-10 Thread Devananda van der Veen
On Mar 10, 2014 4:57 AM, "John Garbutt"  wrote:
>
> On 9 March 2014 16:04, Devananda van der Veen 
wrote:
> > With the feature freeze in effect and our driver blocked from the Nova
tree
> > for this release cycle, last week we moved our driver into the Ironic
tree
> > in this patch set:
> >
> > https://review.openstack.org/#/c/78002/
> >
> > This allows Nova to load the "ironic" virt driver by importing the
> > "ironic.nova.virt.ironic.IronicDriver" class. We plan to resubmit this
> > driver to Nova when Juno opens, and remove it from the Ironic code base
once
> > it is accepted.
> >
> > Why did we do this? Most importantly, it allows us to continue working
on CI
> > testing during feature freeze and without any cross-project
dependencies. No
> > Nova changes are required, and -- for the moment -- we trust ourselves
> > enough to land code in this driver without integration tests.
> >
> > We also made a lot of progress on the devstack patch:
> >
> > https://review.openstack.org/#/c/70348/
> >
> > This patch configures Nova to use the "ironic" virt driver, sets up the
> > prerequisite environment, and performs integration tests (eg, with nova,
> > glance, keystone, and neutron) and functional tests of the PXE deploy
driver
> > in a mocked bare metal environment. This is now the only change
required to
> > perform these tests.
> >
> > If the infra team is amenable to adding an experimental check test to
> > Ironic's pipeline during feature-freeze, I will propose one, and we can
> > start identifying the set of tempest tests which make sense in a mocked
bare
> > metal environment. If not, we can wait until Juno opens to propose this.
> >
> > Thanks to agordeev, adam_g, and shrews for their hard work on adding
> > devstack support for Ironic! We're almost there :)
> >
>
> This all sounds good.
>
> Based on these requirements:
> https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
>
> Does it seem feasible to get a CI running on all nova patches, using
> the above ironic virt driver, before we look at merging this into the
> Nova tree?
>
> John

That's the plan.

AIUI, it will need to be another non voting check in Nova and a voting
check/gate in Ironic, like our tempest API testing is today, up until
ironic graduates (iow, probably for all of Juno).

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testr help

2014-03-10 Thread Zane Bitter
Thanks Clark for this great write-up. However, I think the solution to 
the problem in question is richer commands and better output formatting, 
not discarding information.


On 07/03/14 16:30, Clark Boylan wrote:

But running tests in parallel introduces some fun problems. Like where
do you send logging and stdout output. If you send it to the console
it will be interleaved and essentially useless. The solution to this
problem (for which I am probably to blame) is to have each test
collect the logging, stdout, and stderr associated to that test and
attach it to that tests subunit reporting. This way you get all of the
output associated with a single test attached to that test and don't
have crazy interleaving that needs to be demuxed. The capturing of


This is not really a problem unique to parallel test runners. Printing 
to the console is just not a great way to handle stdout/stderr in 
general because it messes up the output of the test runner, and nose 
does exactly the same thing as testr in collecting them - except that 
nose combines messages from the 3 sources and prints the output for 
human consumption, rather than in separate groups surrounded by lots of 
{{{random braces}}}.



this data is toggleable in the test suite using environment variables
and is off by default so that when you are not using testr you don't
get this behavior [0]. However we seem to have neglected log capture
toggles.


Oh wow, there is actually a way to get the stdout and stderr? Fantastic! 
Why on earth are these disabled?


Please, please, please don't turn off the logging too. That's the only 
tool left for debugging now that stdout goes into a black hole.



Now onto indirectly answering some of the questions you have. If a
single unittest is producing thousands of lines of log output that is
probably a bug. The test scope is too large or the logging is too
verbose or both. To work around this we probably need to make the
fakeLogger fixture toggleable with configurable log level. Then you
could do something like `OS_LOG_LEVEL=error tox` and avoid getting all
of the debug level logs.


Fewer logs is hardly ever what you want when debugging a unit test.

I think what John is looking for is a report at the end of each test run 
that just lists the tests that failed instead of all the details (like 
`testr failing --list`), or perhaps the complete list of tests with the 
coloured pass/fail like IIRC nose does. Since testr's killer feature is 
to helpfully store all of the results for later, maybe this output is 
all you need in the first instance (along with a message telling the 
user what command to run to see the full output, of course), or at least 
that could be an option.



For examining test results you can `testr load $SUBUNIT_LOG_FILE` then
run commands like `testr last`, `testr failing`, `testr slowest`, and
`testr stats` against that run (if you want details on the last run
you don't need an explicit load). There are also a bunch of tools that
come with python-subunit like subunit-filter, subunit2pyunit,
subunit-stats, and so on that you can use to do additional processing.


It sounds like what John wants to do is pass a filter to something like 
`testr failing` or `testr last` to only report a subset of the results, 
in much the same way as it's possible to pass a filter to `testr` to 
only run a subset of the tests.


BTW, since I'm on the subject, testr would be a lot more 
confidence-inspiring if running `testr failing` immediately after 
running `testr` reported the same number of failures, or indeed if 
running `testr` twice in a row was guaranteed to report the same number 
of failures (assuming that the tests themselves are deterministic). I 
can't imagine why one of the subunit processes reporting a test failure 
is considered a failure in itself (that's what it's supposed to do!), 
particularly when `testr failing` manages to filter them out OK.


(I understand that some of the things mentioned above may already have 
been improved since the latest 0.0.18 release. I can't actually find the 
repo at the moment to check, because it's not linked from PyPI or 
Launchpad and 'testrepository' turns out to be an extremely common name 
for things on the Internet.)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-10 Thread Georgy Okrokvertskhov
Hi,

Thomas and Zane initiated a good discussion about Murano DSL and TOSCA
initiatives in Heat. I think it will be beneficial for both teams to
contribute to TOSCA.

While Mirantis is working on the organizational part for OASIS, I would like to
understand the current view on the relationship between TOSCA and HOT.
It looks like TOSCA can cover both the declarative components of HOT
templates and the imperative workflows which can be covered by Murano. What do
you think about that?

I think the TOSCA format can be used as a description of Applications and
heat-translator can actually convert TOSCA descriptions to both HOT and
Murano files which can then be used for actual Application deployment. Both
Heat and Murano workflows can coexist in the Orchestration program and cover
both declarative template and imperative workflow use cases.

-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Roman Podoliaka
Hi Chris,

AFAIK, most OpenStack projects enforce tables to be created with the
encoding set to UTF-8 because MySQL has horrible defaults and would
use latin1 otherwise. PostgreSQL presumably defaults to the locale of the
system on which it's running, and, I think, most systems default to
UTF-8 nowadays.

Actually, I can't think of a reason why you would want to use
anything other than UTF-8 for storing and exchanging textual data.
I'd recommend reconsidering your encoding settings for PostgreSQL.
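
For illustration, this is roughly what that enforcement looks like with
SQLAlchemy when MySQL is the backend (a sketch with made-up table and column
names, not copied from any project's migrations):

  from sqlalchemy import Column, Integer, MetaData, String, Table

  metadata = MetaData()

  example = Table(
      'example', metadata,
      Column('id', Integer, primary_key=True),
      Column('description', String(255)),
      mysql_engine='InnoDB',
      mysql_charset='utf8',  # otherwise MySQL falls back to its latin1 default
  )

  # The client side matters too; with MySQL the connection URL usually
  # carries an explicit charset as well, e.g.
  # mysql://user:password@host/heat?charset=utf8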

Thanks,
Roman

On Mon, Mar 10, 2014 at 10:24 AM, Chris Friesen
 wrote:
>
> Hi,
>
> I'm using havana and recent we ran into an issue with heat related to
> character sets.
>
> In heat/db/sqlalchemy/api.py in user_creds_get() we call
> _decrypt() on an encrypted password stored in the database and then try to
> convert the result to unicode.  Today we hit a case where this errored out
> with the following message:
>
> UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
> invalid continuation byte
>
> We're using postgres and currently all the databases are using SQL_ASCII as
> the charset.
>
> I see that in icehouse heat will complain if you're using mysql and not
> using UTF-8.  There doesn't seem to be any checks for other databases
> though.
>
> It looks like devstack creates most databases as UTF-8 but uses latin1 for
> nova/nova_bm/nova_cell.  I assume this is because nova expects to migrate
> the db to UTF-8 later.  Given that those migrations specify a character set
> only for mysql, when using postgres should we explicitly default to UTF-8
> for everything?
>
> Thanks,
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-10 Thread Edgar Magana
Team,

I found that having a mini-summit on very short notice means excluding
a lot of developers from such an interesting topic for Neutron.
The OpenStack summit is the opportunity for all developers to come
together and discuss the next steps, there are many developers that CAN
NOT afford another trip for a "special" summit. I am personally against
that and I do support Mark's proposal of having all the conversation over
IRC and mailing list.

Please, do not start excluding people that won't be able to attend another
face-to-face meeting besides the summit. I believe that these are the
little things that make an open source community weak if we do not control
it.

Thanks,

Edgar


On 3/6/14 9:51 PM, "Mark McClain"  wrote:

>
>On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
>
>> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>>> +1
>>> 
>>> I think if we can have it before the Juno summit, we can take
>>> concrete, well thought-out proposals to the community at the summit.
>> 
>> Unless something has changed starting at the Hong Kong design summit
>> (which unfortunately I was not able to attend), the design summits have
>> always been a place to gather to *discuss* and *debate* proposed
>> blueprints and design specs. It has never been about a gathering to
>> rubber-stamp proposals that have already been hashed out in private
>> somewhere else.
>
>You are correct that is the goal of the design summit.  While I do think
>it is wise to discuss the next steps with LBaaS at this point in time, I
>am not a proponent of in person mini-design summits.  Many contributors
>to LBaaS are distributed all over the globe, and scheduling a mini
>summit with short notice will exclude valuable contributors to the team.
>I'd prefer to see an open process with discussions on the mailing list
>and specially scheduled IRC meetings to discuss the ideas.
>
>mark
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [solum] Second Solum Summit in Raleigh, NC (March 25th - 26th)

2014-03-10 Thread Daniel McPherson
Hi Everyone,

  The second solum summit has been scheduled:

https://wiki.openstack.org/wiki/Solum/Summit

  You can register here:

https://www.eventbrite.com/e/solum-workshop-tickets-10623104993

  And there is a starter list of topics here:

https://etherpad.openstack.org/p/SolumRaleighCommunityWorkshop

  Please add any topics you would like to have discussed and we will curate the 
list as a group.

  Feel free to contact me with any questions you have about the event.

Hope to see everyone there!

-Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Keystone] Where are the strings in the keystone API's defined?

2014-03-10 Thread Clint Byrum
While looking into this bug:

https://bugs.launchpad.net/heat/+bug/1290274

I was trying to find out why the original developers felt that tenant_ids
should be a 'varchar(256)'. In addition to moving from a regular varchar
into a text field (varchar(256) in MySQL will become a tinytext, which
causes all sorts of performance issues), this just seems a _lot_ bigger
than anything I've ever seen shown as a tenant/project id. I'd expect at
worst varchar(64). Also it is probably safe to store as varbinary since
we don't ever sort on it or store case-insensitive equivalents to it.
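
Just to illustrate what I'd expect instead (a sketch, not a proposed patch,
and the column list is simplified):

  from sqlalchemy import Column, Integer, MetaData, String, Table

  metadata = MetaData()

  user_creds = Table(
      'user_creds', metadata,
      Column('id', Integer, primary_key=True),
      # 64 characters comfortably holds a 32- or 36-char uuid-style id and
      # keeps the column a plain varchar rather than a text type on MySQL.
      Column('tenant_id', String(64)),
  )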

So I was wondering where users can go to find out what to expect in
that field. I dug through the API documentation for Keystone and I see
nothing that really would constitute a format or even length. But maybe
I'm just not looking in the right place. Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Meeting Tuesday March 11th at 19:00 UTC

2014-03-10 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday March 11th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup "start: job failed to start"

2014-03-10 Thread Sukhdev Kapur
Hey Rafael,

If it helps any, I have noticed that this issue generally occurs after
several runs of tempest tests. I run all smoke tests. I have always been
suspicious that some new test got added which is not cleaning up
afterwards.  Can you point me to which tempest test corresponds to
test_boot_pattern? I can do some investigation as well.

Thanks
-Sukhdev



On Mon, Mar 10, 2014 at 10:10 AM, Rafael Folco wrote:

> This looks very familiar
> https://bugzilla.redhat.com/show_bug.cgi?id=848942
> I'm not sure this is exactly the same behavior; try turning debug messages
> on by adding -d to the tgtd start.
> I'm investigating a similar issue with tgt on test_boot_pattern test. It
> turns out that the file written to the volume in the first instance is not
> found by the second instance. But that's a separate issue.
>
> -rfolco
>
>
> On Mon, Mar 10, 2014 at 12:54 PM, Sukhdev Kapur wrote:
>
>> I see the same issue. This issue has crept in during the latest flurry of
>> check-ins. I started noticing this issue a day or two before the Icehouse
>> Feature Freeze deadline.
>>
>> I tried restarting tgt as well, but, it does not help.
>>
>> However, rebooting the VM helps clear it up.
>>
>> Has anybody else seen it as well? Does anybody have a solution for it?
>>
>> Thanks
>> -Sukhdev
>>
>>
>>
>>
>>
>> On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) <
>> lebla...@cisco.com> wrote:
>>
>>> I don't know if anyone can give me some troubleshooting advice with this
>>> issue.
>>>
>>> I'm seeing an occasional problem whereby after several DevStack
>>> unstack.sh/stack.sh cycles, the tgt daemon (tgtd) fails to start during
>>> Cinder startup.  Here's a snippet from the stack.sh log:
>>>
>>> 2014-03-10 07:09:45.214 | Starting Cinder
>>> 2014-03-10 07:09:45.215 | + return 0
>>> 2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
>>> 2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
>>> 2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
>>> 2014-03-10 07:09:45.219 | + is_ubuntu
>>> 2014-03-10 07:09:45.220 | + [[ -z deb ]]
>>> 2014-03-10 07:09:45.221 | + '[' deb = deb ']'
>>> 2014-03-10 07:09:45.222 | + sudo service tgt restart
>>> 2014-03-10 07:09:45.223 | stop: Unknown instance:
>>> 2014-03-10 07:09:45.619 | start: Job failed to start
>>> jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | +
>>> exit_trap
>>> 2014-03-10 07:09:45.622 | + local r=1
>>> 2014-03-10 07:09:45.623 | ++ jobs -p
>>> 2014-03-10 07:09:45.624 | + jobs=
>>> 2014-03-10 07:09:45.625 | + [[ -n '' ]]
>>> 2014-03-10 07:09:45.626 | + exit 1
>>>
>>> If I try to restart tgt manually without success:
>>>
>>> jenkins@neutronpluginsci:~$ sudo service tgt restart
>>> stop: Unknown instance:
>>> start: Job failed to start
>>> jenkins@neutronpluginsci:~$ sudo tgtd
>>> librdmacm: couldn't read ABI version.
>>> librdmacm: assuming: 4
>>> CMA: unable to get RDMA device list
>>> (null): iser_ib_init(3263) Failed to initialize RDMA; load kernel
>>> modules?
>>> (null): fcoe_init(214) (null)
>>> (null): fcoe_create_interface(171) no interface specified.
>>> jenkins@neutronpluginsci:~$
>>>
>>> The config in /etc/tgt is:
>>>
>>> jenkins@neutronpluginsci:/etc/tgt$ ls -l
>>> total 8
>>> drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
>>> lrwxrwxrwx 1 root root   30 Mar 10 06:50 stack.d ->
>>> /opt/stack/data/cinder/volumes
>>> -rw-r--r-- 1 root root   58 Mar 10 07:07 targets.conf
>>> jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
>>> include /etc/tgt/conf.d/*.conf
>>> include /etc/tgt/stack.d/*
>>> jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
>>> jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
>>> jenkins@neutronpluginsci:/etc/tgt$
>>>
>>> I don't know if there's any missing Cinder config in my DevStack localrc
>>> files. Here's one that I'm using:
>>>
>>> MYSQL_PASSWORD=nova
>>> RABBIT_PASSWORD=nova
>>> SERVICE_TOKEN=nova
>>> SERVICE_PASSWORD=nova
>>> ADMIN_PASSWORD=nova
>>>
>>> ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
>>> enable_service mysql
>>> disable_service n-net
>>> enable_service q-svc
>>> enable_service q-agt
>>> enable_service q-l3
>>> enable_service q-dhcp
>>> enable_service q-meta
>>> enable_service q-lbaas
>>> enable_service neutron
>>> enable_service tempest
>>> VOLUME_BACKING_FILE_SIZE=2052M
>>> Q_PLUGIN=cisco
>>> declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
>>> declare -A
>>> Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
>>> NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
>>> PHYSICAL_NETWORK=physnet1
>>> OVS_PHYSICAL_BRIDGE=br-eth1
>>> TENANT_VLAN_RANGE=810:819
>>> ENABLE_TENANT_VLANS=True
>>> API_RATE_LIMIT=False
>>> VERBOSE=True
>>> DEBUG=True
>>> LOGFILE=/opt/stack/logs/stack.sh.log
>>> USE_SCREEN=True
>>> SCREEN_LOGDIR=/opt/stack/logs
>>>
>>> Here are links to a log showing another localrc file that I use, and the
>>> corresponding

Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-10 Thread Robert Li (baoli)
Hi Akihiro,

See inline for a question ...

Thanks,
Robert

On 3/7/14 2:02 PM, "Akihiro Motoki"  wrote:

>Hi Robert,
>
>Thanks for the clarification. I understand the motivation.
>
>I think the problem can be split into two categories:
>(a) user configurable rules vs infra enforced rule, and
>(b) DHCP/RA service exists inside or outside of Neutron
>
>Regarding (a), I believe DHCP or RA related rules are better handled
>by the infra side because they are required to ensure DHCP/RA works well.
>I don't think it is a good idea to delegate to users the configuration of
>rules to allow them.
>It works as long as the DHCP/RA service runs inside OpenStack.
>This is the main motivation of my previous question.
>
>On the other hand, there is no way to cooperate with DHCP/RA
>services outside of OpenStack at the moment. This blocks the use case you
>have in mind.
>It is true that the current Neutron cannot work with a DHCP server
>outside of neutron.

I'd appreciate it if you could explain the above in more detail. I'd like to
understand what has caused the limitation.
Thanks.

>
>I agree that adding a security group rule to allow RA is reasonable as
>a workaround.
>However, for a long-term solution, it is better to explore a way to
>configure
>infra-required rules.
>
>Thanks,
>Akihiro
>
>
>On Sat, Mar 8, 2014 at 12:50 AM, Robert Li (baoli) 
>wrote:
>> Hi Akihiro,
>>
>> In the case of IPv6 RA, its source IP is a Link Local Address from the
>> router's RA advertising interface. This LLA address is automatically
>> generated and not saved in the neutron port DB. We are exploring the idea
>> of retrieving this LLA if a native openstack RA service is running on the
>> subnet.
>>
>> Would SG be needed with a provider net in which the RA service is running
>> external to openstack?
>>
>> In the case of IPv4 DHCP, the dhcp port is created by the dhcp service,
>> and the dhcp server ip address is retrieved from this dhcp port. If the
>> dhcp server is running outside of openstack, and if we'd only allow dhcp
>> packets from this server, how is it done now?
>>
>> thanks,
>> Robert
>>
>> On 3/7/14 12:00 AM, "Akihiro Motoki"  wrote:
>>
>>>I wonder why RA needs to be exposed by security group API.
>>>Does a user need to configure a security group to allow IPv6 RA? Or
>>>should it be allowed on the infra side?
>>>
>>>In the current implementation DHCP packets are allowed by provider
>>>rule (which is hardcoded in neutron code now).
>>>I think the role of IPv6 RA is similar to DHCP in IPv4. If so, we
>>>don't need to expose RA in security group API.
>>>Am I missing something?
>>>
>>>Thanks,
>>>Akihiro
>>>
>>>On Mon, Mar 3, 2014 at 10:39 PM, Xuhan Peng  wrote:
 I created a new blueprint [1] which is triggered by the requirement to allow
 an IPv6 Router Advertisement security group rule on the compute node in my
 on-going code review [2].

 Currently, only security group rule direction, protocol, ethertype and port
 range are supported by the neutron security group rule data structure. To
 allow Router Advertisement coming from the network node or a provider
 network to a VM on a compute node, we need to specify the ICMP type to only
 allow RA from known hosts (the network node dnsmasq bound IP or a known
 provider gateway).

 To implement this and make the implementation extensible, maybe we can add
 an additional table named "SecurityGroupRuleData" with Key, Value and ID in
 it. For the ICMP type RA filter, we can add key="icmp-type", value="134",
 and the security group rule to the table. When other ICMP type filters are
 needed, similar records can be stored. This table can also be used for
 other firewall rule key values.
 An API change is also needed.

 Please let me know your comments about this blueprint.

 [1]
 https://blueprints.launchpad.net/neutron/+spec/security-group-icmp-type-filter
 [2] https://review.openstack.org/#/c/72252/

 Thank you!
 Xuhan Peng

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>___
>>>OpenStack-dev mailing list
>>>OpenStack-dev@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
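
For illustration only, the "SecurityGroupRuleData" key/value table sketched in
the quoted proposal above might look roughly like the following SQLAlchemy
model. This is a sketch of the idea, not code from the blueprint or from
Neutron; the table, column, and foreign-key names are assumptions.

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class SecurityGroupRuleData(Base):
    """Extra key/value attributes attached to a security group rule."""

    __tablename__ = 'securitygroupruledata'

    id = sa.Column(sa.String(36), primary_key=True)
    # Assumed foreign key back to the existing security group rule table.
    security_group_rule_id = sa.Column(sa.String(36),
                                       sa.ForeignKey('securitygrouprules.id'),
                                       nullable=False)
    key = sa.Column(sa.String(255), nullable=False)    # e.g. "icmp-type"
    value = sa.Column(sa.String(255), nullable=False)  # e.g. "134" for IPv6 RA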


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Chris Friesen


Hi,

I'm using havana and recently we ran into an issue with heat related to 
character sets.


In heat/db/sqlalchemy/api.py in user_creds_get() we call
_decrypt() on an encrypted password stored in the database and then try 
to convert the result to unicode.  Today we hit a case where this 
errored out with the following message:


UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0: 
invalid continuation byte


We're using postgres and currently all the databases are using SQL_ASCII 
as the charset.
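
To make the failure mode concrete, here is a minimal, purely illustrative
Python 2 snippet (Havana-era Python; not taken from Heat itself) showing why
bytes stored under SQL_ASCII or latin1 can fail a UTF-8 decode:

raw = '\xf2abc'        # a byte string as it might come back from the database
raw.decode('latin-1')  # succeeds, since latin-1 maps every single byte
raw.decode('utf-8')    # raises UnicodeDecodeError: 'utf8' codec can't decode
                       # byte 0xf2 in position 0: invalid continuation byte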


I see that in icehouse heat will complain if you're using mysql and not 
using UTF-8.  There doesn't seem to be any checks for other databases 
though.


It looks like devstack creates most databases as UTF-8 but uses latin1 
for nova/nova_bm/nova_cell.  I assume this is because nova expects to 
migrate the db to UTF-8 later.  Given that those migrations specify a 
character set only for mysql, when using postgres should we explicitly 
default to UTF-8 for everything?


Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][novaclient] How to get user's credentials for using novaclient API?

2014-03-10 Thread Nader Lahouti
Hi All,


I have a question regarding using novaclient API.


I need to use it for getting a list of instances for a user/project.

In order to do that I tried to use:


from novaclient.v1_1 import client

nc = client.Client(username, token_id, project_id,
                   auth_url, insecure, cacert)

nc.servers.list()


However, the comment in the code/documentation says something different,
which didn't work as far as I tried:

>>> client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)


So it seems token_id has to be provided.

I can get the token_id using the keystone REST API
(http://localhost:5000/v2.0/tokens ... -d 'the credentials ... username and
password').

And my question is: how do I get credentials for a user in the code when
using keystone's REST API? Is there any API to get such info?


Appreciate your comments.


Regards,

Nader.
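
For what it's worth, a minimal sketch of one way to do this with
python-keystoneclient (instead of calling the REST API by hand) and the
novaclient v1_1 interface is below. It is only an illustration; USERNAME,
PASSWORD, PROJECT_NAME and AUTH_URL are placeholders you would supply yourself.

from keystoneclient.v2_0 import client as keystone_client
from novaclient.v1_1 import client as nova_client

# USERNAME, PASSWORD, PROJECT_NAME and AUTH_URL are placeholders.
# Authenticate once against Keystone; the client object then carries the token.
keystone = keystone_client.Client(username=USERNAME,
                                  password=PASSWORD,
                                  tenant_name=PROJECT_NAME,
                                  auth_url=AUTH_URL)
token_id = keystone.auth_token  # same value the /v2.0/tokens REST call returns

# The documented novaclient pattern takes the password directly and performs
# the token exchange internally, so no separate REST call is needed.
nova = nova_client.Client(USERNAME, PASSWORD, PROJECT_NAME, AUTH_URL)
for server in nova.servers.list():
    print("%s %s" % (server.id, server.name))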
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] how to enable logging for unit tests

2014-03-10 Thread Matt Riedemann



On 2/28/2014 4:04 PM, Clark Boylan wrote:

On Fri, Feb 28, 2014 at 1:30 PM, John Dennis  wrote:

I'd like to enable debug logging while running some specific unit tests
and I've not been able to find the right combination of levers to pull
to get logging output on the console.

In keystone/etc/keystone.conf.sample (which is the config file loaded for
the unit tests) I've set debug to True, and I've verified CONF.debug is true
when the test executes. I've also tried setting log_file and log_dir to
see if I could get logging written to a log file instead, but no luck.

I have noticed that when a test fails I'll see all the debug logging emitted
in between

{{{
}}}

which I think is something testtools is doing.

This leads me to the theory that testtools is somehow consuming the logging
output. Is that correct?

How do I get the debug logging to show up on the console during a test run?

--
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I was going to respond to this and say it is easy: you set
OS_LOG_CAPTURE=False in your test env and rerun the tests. But it
doesn't look like keystone has made log capturing configurable [0]. I
thought we had set this variable properly in places but I have
apparently misremembered. You could add an OS_LOG_CAPTURE flag and set
it in .testr.conf and see if it helps. The other thing you can do is
refer to the subunit log file in .testrepository/$TEST_ID after tests
have run.

[0] 
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/core.py#n338

Clark
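
For reference, the kind of configurable log-capture hook described above looks
roughly like the following in a test case. This is a hedged sketch of the idea,
not keystone's actual test code, and ExampleTestCase is a made-up name.

import logging
import os

import fixtures
import testtools


class ExampleTestCase(testtools.TestCase):
    def setUp(self):
        super(ExampleTestCase, self).setUp()
        # Capture log output only when OS_LOG_CAPTURE is truthy; running the
        # tests with OS_LOG_CAPTURE=False then leaves logging on the console.
        if os.environ.get('OS_LOG_CAPTURE', 'True').lower() in ('true', '1', 'yes'):
            self.useFixture(fixtures.FakeLogger(level=logging.DEBUG))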

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Maybe this can help:

https://review.openstack.org/#/c/71652/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes/log - 03/10/2014

2014-03-10 Thread Renat Akhmerov
Hi,

Thanks to everyone who joined us today at #openstack-meeting. For those who
couldn't attend the meeting, here are the links to the minutes and log:

http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-10-16.00.html
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-03-10-16.00.log.html

The next meeting will be on March 17 at the same time (16.00 UTC).

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review & approval

2014-03-10 Thread Russell Bryant
On 03/10/2014 11:41 AM, Russell Bryant wrote:
> On 03/10/2014 08:20 AM, John Garbutt wrote:
>> We probably need a mass un-approve of all the blueprints in Nova, so
>> all new blueprints in Juno go through the new process. I can take
>> charge of that part, and helping with joining some of the dots and
>> testing this out.
> 
> Sounds great.  I do think we should move forward here.  +1 to forcing
> all Juno to go through this.  Thanks for going through all the launchpad
> blueprints.  Next up will be to get the repo created.  I'll look into
> that part.
> 

openstack-infra/config patch for creating the repo:

https://review.openstack.org/#/c/79363/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-10 Thread Lyle, David
The results are unanimous.  Congratulations Radomir and welcome to the Horizon 
Core team.

Thanks for all of your efforts.

David

> -Original Message-
> From: Lyle, David
> Sent: Wednesday, March 05, 2014 3:36 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to
> Horizon Core
> 
> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews
> very insightful and more importantly have come to rely on their quality. He
> has contributed to several areas in Horizon and he understands the code
> base well.  Radomir is also very active in tuskar-ui both contributing and
> reviewing.
> 
> David
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-10 Thread Dmitry Borodaenko
On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:
> On 03/07/2014 11:16 AM, Russell Bryant wrote:
>> On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
>>> On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
 I'd Like to request A FFE for the remaining patches in the Ephemeral
 RBD image support chain

 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/

 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.

 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.

 This is needed as it implements essential functionality that has
 been missing in the RBD driver, and this will be the second release
 in which merging it has been attempted.
>>>
>>> Add me as a sponsor.
>>
>> OK, great.  That's two.
>>
>> We have a hard deadline of Tuesday to get these FFEs merged (regardless
>> of gate status).
>>
>
> As alt release manager, FFE approved based on Russell's approval.
>
> The merge deadline for Tuesday is the release meeting, not end of day.
> If it's not merged by the release meeting, it's dead, no exceptions.

Both commits were merged; thanks a lot to everyone who helped land
this in Icehouse! Especially to Russell and Sean for approving the FFE,
and to Daniel, Michael, and Vish for reviewing the patches!

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-10 Thread Shawn Hartsock
We have very strong interest in pursuing this feature in the VMware
driver as well. I would like to see at least the instance revert feature
implemented.

When I used to work in multi-discipline roles involving operations, it
was common for us to snapshot a VM, run through an upgrade
process, then revert if something did not upgrade smoothly. This
ability alone can be exceedingly valuable for long-lived virtual
machines.
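
For context, the disk-only part of that workflow is already expressible with
python-novaclient today; a rough, purely illustrative sketch (memory state
excluded; names and credentials are placeholders) might look like this:

from novaclient.v1_1 import client

# USERNAME, PASSWORD, PROJECT_ID and AUTH_URL are placeholders.
nova = client.Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
server = nova.servers.find(name='app-server')

# Take a checkpoint image before the upgrade...
image_id = nova.servers.create_image(server, 'pre-upgrade-checkpoint')

# ...and, if the upgrade goes badly, rebuild the instance from that image.
# Only disk state comes back; restoring memory state is what the proposed
# live-snapshot work would add.
nova.servers.rebuild(server, image_id)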

I also have some comments from parties interested in refactoring how
the VMware drivers handle snapshots but I'm not certain how much that
plays into this "live snapshot" discussion.

On Mon, Mar 10, 2014 at 12:04 AM, Bohai (ricky)  wrote:
>> -Original Message-
>> From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
>> Sent: Sunday, March 09, 2014 10:04 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] a question about instance snapshot
>>
>> Hi, Jeremy, the discussion at here
>> http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
>>
>
> I have a great interest in the topic too.
> I read the link you provided, and there is a little confusion for me.
> I agree with the security consideration in the discussion, and a memory snapshot
> can't easily be used for cloning an instance.
>
> But I think it is safe to use for instance revert.
> And reverting the instance to a checkpoint is valuable for the user.
> Why don't we use it for instance revert as a first step?
>
> Best regards to you.
> Ricky
>
>> Thanks
>> Alex
>> On 2014-03-07 10:29, Liuji (Jeremy) wrote:
>> > Hi, all
>> >
>> > Current openstack does not seem to support snapshotting an instance with memory and
>> device state.
>> > I searched the blueprints and found two related blueprints, listed below.
>> > But these blueprints failed to get into the branch.
>> >
>> > [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
>> > [2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
>> >
>> > In the blueprint[1], there is a comment,"
>> > We discussed this pretty extensively on the mailing list and in a design
>> summit session.
>> > The consensus is that this is not a feature we would like to have in nova.
>> --russellb "
>> > But I can't find the discussion mail about it. I would like to know why we
>> > think so.
>> > Without a memory snapshot, we can't provide a feature for users to revert
>> an instance to a checkpoint.
>> >
>> > Can anyone who knows the history help me, or give me a hint on how to find
>> the discussion mail?
>> >
>> > I am a newbie for openstack and I apologize if I am missing something very
>> obvious.
>> >
>> >
>> > Thanks,
>> > Jeremy Liu
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] #openstack-oslo on freenode

2014-03-10 Thread Doug Hellmann
A short while ago the Oslo team moved most of our discussions out of
#openstack-dev and into #openstack-oslo. I realized today that we never
announced the new room.

As far as I can tell all of us are in both channels, but if you have an
oslo-specific question the #openstack-oslo channel may be a little quieter
for working out answers.

Room logs are available via
http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [3rd party testing] Q&A meeting today at 14:00 EST / 18:00 UTC

2014-03-10 Thread Jay Pipes
Hi Stackers,

We'll be having our second weekly Q&A/workshop session around third
party testing today at 14:00 EST / 18:00 UTC in #openstack-meeting on
Freenode IRC. See you all there.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup "start: job failed to start"

2014-03-10 Thread Sukhdev Kapur
I see the same issue. This issue has crept in during the latest flurry of
check-ins. I started noticing this issue a day or two before the Icehouse
Feature Freeze deadline.

I tried restarting tgt as well, but it does not help.

However, rebooting the VM helps clear it up.

Has anybody else seen it as well? Does anybody have a solution for it?

Thanks
-Sukhdev





On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd)  wrote:

> I don't know if anyone can give me some troubleshooting advice with this
> issue.
>
> I'm seeing an occasional problem whereby after several DevStack
> unstack.sh/stack.sh cycles, the tgt daemon (tgtd) fails to start during
> Cinder startup.  Here's a snippet from the stack.sh log:
>
> 2014-03-10 07:09:45.214 | Starting Cinder
> 2014-03-10 07:09:45.215 | + return 0
> 2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
> 2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
> 2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
> 2014-03-10 07:09:45.219 | + is_ubuntu
> 2014-03-10 07:09:45.220 | + [[ -z deb ]]
> 2014-03-10 07:09:45.221 | + '[' deb = deb ']'
> 2014-03-10 07:09:45.222 | + sudo service tgt restart
> 2014-03-10 07:09:45.223 | stop: Unknown instance:
> 2014-03-10 07:09:45.619 | start: Job failed to start
> jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | + exit_trap
> 2014-03-10 07:09:45.622 | + local r=1
> 2014-03-10 07:09:45.623 | ++ jobs -p
> 2014-03-10 07:09:45.624 | + jobs=
> 2014-03-10 07:09:45.625 | + [[ -n '' ]]
> 2014-03-10 07:09:45.626 | + exit 1
>
> If I try to restart tgt manually without success:
>
> jenkins@neutronpluginsci:~$ sudo service tgt restart
> stop: Unknown instance:
> start: Job failed to start
> jenkins@neutronpluginsci:~$ sudo tgtd
> librdmacm: couldn't read ABI version.
> librdmacm: assuming: 4
> CMA: unable to get RDMA device list
> (null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
> (null): fcoe_init(214) (null)
> (null): fcoe_create_interface(171) no interface specified.
> jenkins@neutronpluginsci:~$
>
> The config in /etc/tgt is:
>
> jenkins@neutronpluginsci:/etc/tgt$ ls -l
> total 8
> drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
> lrwxrwxrwx 1 root root   30 Mar 10 06:50 stack.d ->
> /opt/stack/data/cinder/volumes
> -rw-r--r-- 1 root root   58 Mar 10 07:07 targets.conf
> jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
> include /etc/tgt/conf.d/*.conf
> include /etc/tgt/stack.d/*
> jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
> jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
> jenkins@neutronpluginsci:/etc/tgt$
>
> I don't know if there's any missing Cinder config in my DevStack localrc
> files. Here's one that I'm using:
>
> MYSQL_PASSWORD=nova
> RABBIT_PASSWORD=nova
> SERVICE_TOKEN=nova
> SERVICE_PASSWORD=nova
> ADMIN_PASSWORD=nova
>
> ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
> enable_service mysql
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-l3
> enable_service q-dhcp
> enable_service q-meta
> enable_service q-lbaas
> enable_service neutron
> enable_service tempest
> VOLUME_BACKING_FILE_SIZE=2052M
> Q_PLUGIN=cisco
> declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
> declare -A
> Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
> NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
> PHYSICAL_NETWORK=physnet1
> OVS_PHYSICAL_BRIDGE=br-eth1
> TENANT_VLAN_RANGE=810:819
> ENABLE_TENANT_VLANS=True
> API_RATE_LIMIT=False
> VERBOSE=True
> DEBUG=True
> LOGFILE=/opt/stack/logs/stack.sh.log
> USE_SCREEN=True
> SCREEN_LOGDIR=/opt/stack/logs
>
> Here are links to a log showing another localrc file that I use, and the
> corresponding stack.sh log:
>
> http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_console_log.txt
>
> http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_stack_sh_log.txt
>
> Does anyone have any advice on how to debug this, or recover from this
> (beyond rebooting the node)? Or am I missing any Cinder config?
>
> Thanks in advance for any help on this!!!
> Dane
>
>
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

