Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-11 Thread Chris Friesen

On 03/11/2014 05:50 PM, Clint Byrum wrote:


But MySQL can't possibly know what you _meant_ when you were inserting
data. So, if you _assumed_ that the database was UTF-8, and inserted
UTF-8 with all of those things accidentally set for latin1, then you
will have UTF-8 in your db, but MySQL will think it is latin1. So if you
now try to alter the table to UTF-8, all of your high-byte strings will
be double-encoded.
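
For the record, the double encoding Clint describes is easy to reproduce outside MySQL. A minimal Python sketch (purely illustrative, not OpenStack code):

```python
# UTF-8 bytes that MySQL believes are latin1 get re-encoded byte-by-byte
# when the column is converted to utf8 -- classic double encoding.
original = "naïve"                       # contains a high-byte character
stored = original.encode("utf-8")        # what the client actually inserted

# ALTER TABLE ... CONVERT TO utf8 treats the stored bytes as latin1:
double_encoded = stored.decode("latin-1").encode("utf-8")
print(double_encoded)                    # b'na\xc3\x83\xc2\xafve' (mojibake)

# The damage is reversible if you know exactly what happened:
recovered = double_encoded.decode("utf-8").encode("latin-1").decode("utf-8")
print(recovered)                         # naïve
```

That reversibility is why the analysis step matters: you have to know whether the bytes in the column are already UTF-8 before deciding how (or whether) to convert.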

It unfortunately takes analysis to determine what the course of action
is. That is why we added the check to Heat, so that it would complain
very early if your tables and/or server configuration were going to
disagree with the assumptions.


I find it interesting that the db migrations only specify character 
encodings for mysql, not any other database.  At the same time, devstack 
seems to create the nova* databases as latin1 for historical reasons.


postgres is supported under devstack, so I think this will end up 
causing a devstack/postgres setup to use utf-8 for most things but 
latin1 for the nova* databases, which seems odd.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Radomir Dopieralski
On 11/03/14 16:57, Abishek Subramanian (absubram) wrote:

> Although - how up to date is this code?

This should be easy to check with the "git blame" command:

$ git blame
openstack_dashboard/dashboards/project/networks/subnets/workflows.py

[...]
31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900  56) class
CreateSubnet(network_workflows.CreateNetwork):
[...]
31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900  82) class
UpdateSubnetInfoAction(CreateSubnetInfoAction):
[...]
31d55e50 (Akihiro MOTOKI  2013-01-04 18:33:03 +0900 101)
#widget=forms.Select(
[...]

As you can see, it's all in the same patch, so it's on purpose.

It seems to me that in the update dialog you are not supposed to change
the IP Version field. Akihiro Motoki tried to disable it
first, but then he hit the problem with the browser not submitting
the field's value and the form displaying the wrong option there,
so he decided to hide it instead. But we won't know until the author
speaks for himself.

Personally, I would also add a check in the clean() method that the
IP Version field value indeed didn't change -- to make sure nobody
edited the form's HTML to get rid of the disabled or readonly attribute.
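
A rough sketch of such a check, with Django stripped out and all names illustrative (the real Horizon action subclasses a workflow Action and would raise forms.ValidationError rather than ValueError):

```python
class UpdateSubnetInfoAction:
    """Illustrative stand-in for the Horizon update-subnet workflow action."""

    def __init__(self, initial_ip_version):
        # The IP version the subnet already has, as rendered into the form.
        self.initial = {"ip_version": initial_ip_version}

    def clean(self, cleaned_data):
        # Reject a submission whose ip_version differs from the initial
        # value -- e.g. someone removed the disabled/readonly attribute
        # from the HTML and posted a different option.
        if cleaned_data.get("ip_version") != self.initial["ip_version"]:
            raise ValueError("ip_version cannot be changed on update")
        return cleaned_data
```

So `UpdateSubnetInfoAction(4).clean({"ip_version": 4})` passes, while a tampered `{"ip_version": 6}` is rejected.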

-- 
Radomir Dopieralski



Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-11 Thread urgensherpa
Hello there, I set it up using devstack. Below is my docker version output:
--
redhat@test:~/devstack$ docker version
Client version: 0.7.6
Go version (client): go1.2
Git commit (client): bc3b2ec
Server version: 0.7.6
Git commit (server): bc3b2ec
Go version (server): go1.2
Last stable version: 0.9.0, please update docker
---
I followed a guide from
http://damithakumarage.wordpress.com/2014/01/31/how-to-setup-openstack-havana-with-docker-driver/

---
I tagged an image using 

$ docker tag urgensherpa/lamp6 192.168.140.193:5042/lamp6
Below is my 'docker push' command output.

redhat@test:~/devstack$ docker push 192.168.140.193:5042/lamp6

The push refers to a repository [192.168.140.193:5042/lamp6] (len: 1)
Sending image list
Pushing repository 192.168.140.193:5042/lamp6 (1 tags)
2014/03/11 13:22:03 HTTP code 500 while uploading metadata: invalid
character '<' looking for beginning of value
--
Please let me know what I need to do. Thanks.
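
For what it's worth, "invalid character '<' looking for beginning of value" usually means the client expected a JSON response but got an HTML error page back, and '<' is just the first byte of that HTML. A hedged illustration of the same parse failure in Python:

```python
import json

# The docker client parses the registry's response body as JSON. An HTML
# error page starts with '<', which is not a valid start of a JSON value,
# so the parse fails immediately at the first character.
body = "<html><body><h1>500 Internal Server Error</h1></body></html>"
try:
    json.loads(body)
except ValueError as exc:   # json.JSONDecodeError subclasses ValueError
    print("parse failed:", exc)
```

So the thing to chase is likely why the registry at 192.168.140.193:5042 is returning a 500 HTML page instead of JSON (the registry's own logs should say).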





Re: [openstack-dev] Neutron error using devstack

2014-03-11 Thread abhishek jain
Thanks for the help.
I'm now able to proceed further with your suggestions.
I'm now enabling live migration in /etc/nova/nova.conf by adding the below
line

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE


However, now I need to restart the nova-compute service, but whenever I try
to do so I'm getting the below logs:

sudo systemctl restart nova-compute.service

Failed to issue method call: Unit nova-compute.service failed to load: No
such file or directory. See system logs and 'systemctl status
nova-compute.service' for details.

Please help regarding this.


On Sun, Mar 2, 2014 at 12:42 AM, Elena Ezhova  wrote:

> If you want to use Neutron with devstack you have to add the related
> settings to localrc.
>
> Please see https://wiki.openstack.org/wiki/NeutronDevstack for detailed
> instructions.
> On 01 March 2014 at 22:11, "abhishek jain" <
> ashujain9...@gmail.com> wrote:
>
>> Hi all
>>
>> I have installed devstack successfully from the following link...
>>
>>
>> http://www.linux.com/learn/tutorials/721712-intro-to-openstack-part-two-how-to-install-and-configure-openstack-on-a-server
>>
>> However I'm not able to run the neutron services. The error which
>> generally comes after running a neutron command is as follows.
>>
>> neutron subnet-create vxlan-net 10.100.1.0/24 --name vxlan-net
>>
>> You must provide a username via either --os-username or env[OS_USERNAME]
>>
>> ashu@ashu $ ps -ef | grep neutron
>> ashu31039 30660  0 18:09 pts/25   00:00:00 grep --color=auto neutron
>>
>> Please help regarding this
>>


[openstack-dev] making changes in openstack database

2014-03-11 Thread rash g
Hi,
I am a student working on a project in OpenStack. In my project
I am making changes to an instance in OpenStack from the KVM virtual
machine manager, say, changing the memory of the instance. I want to
notify OpenStack of this change, and the change needs to be reflected in
OpenStack. This is our project requirement.
   How do I achieve this? Should I make changes in any of the
OpenStack databases? Can jclouds be used? Or is there any other way...
   I would be glad if anyone can help me.

Thanks,
Rashmi



Re: [openstack-dev] [Mistral] Error on running tox

2014-03-11 Thread Renat Akhmerov
Ok. It might be related to the oslo.messaging change that we merged in yesterday,
but I don't see at this point how exactly.

Renat Akhmerov
@ Mirantis Inc.



On 12 Mar 2014, at 12:38, Manas Kelshikar  wrote:

> Yes it is 100% reproducible.
> 
> Was hoping it was environmental i.e. missing some dependency etc. but since 
> that does not seem to be the case I shall debug locally and report back.
> 
> Thanks!
> 
> 
> On Tue, Mar 11, 2014 at 9:54 PM, Renat Akhmerov  
> wrote:
> Hm.. Interesting. CI wasn’t able to reveal this for some reason.
> 
> My first guess is that there’s a race condition somewhere. Did you try to 
> debug it? And is this error 100% repeatable?
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
> On 12 Mar 2014, at 11:18, Manas Kelshikar  wrote:
> 
>> I see this error when I run tox. I pulled down a latest copy of master and 
>> tried to setup the environment. Any ideas?
>> 
>> See http://paste.openstack.org/show/73213/ for details. Any help is 
>> appreciated.
>> 
>> 
>> 
>> Thanks,
>> 
>> Manas
>> 


Re: [openstack-dev] [Mistral] Error on running tox

2014-03-11 Thread Manas Kelshikar
Yes it is 100% reproducible.

I was hoping it was environmental, i.e. missing some dependency, but since
that does not seem to be the case I shall debug locally and report back.

Thanks!


On Tue, Mar 11, 2014 at 9:54 PM, Renat Akhmerov wrote:

> Hm.. Interesting. CI wasn't able to reveal this for some reason.
>
> My first guess is that there's a race condition somewhere. Did you try to
> debug it? And is this error 100% repeatable?
>
> Renat Akhmerov
> @ Mirantis Inc.
>
>
>
> On 12 Mar 2014, at 11:18, Manas Kelshikar  wrote:
>
> I see this error when I run tox. I pulled down a latest copy of master and
> tried to setup the environment. Any ideas?
>
> See http://paste.openstack.org/show/73213/ for details. Any help is
> appreciated.
>
>
> Thanks,
>
> Manas


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread Renat Akhmerov

On 12 Mar 2014, at 06:37, W Chan  wrote:

> Here're the proposed changes.
> 1) Rewrite the launch script to be more generic, with an option to 
> launch all components (i.e. API, engine, executor) in the same process 
> over separate threads, or to launch each one individually.

You mentioned test_executor.py, so I think it would make sense first to refactor 
the code in there related to acquiring the transport and launching the executor. 
My suggestions are:

- In the test base class (mistral.tests.base.BaseTest), create a new method 
start_local_executor() that would deal with getting a fake driver and all that 
stuff. This would be enough for tests where we need to run the engine and check 
something; start_local_executor() can just be part of setUp() for such tests.

As for the launch script, I have the following thoughts:

- Long-term, the launch scripts should be different for API, engine and 
executor. Right now API and engine start within the same process, but that's 
just a temporary solution.
- The launch script for the engine (which is the same as the API's for now) 
should have an option --use-local-executor to be able to run an executor along 
with the engine itself within the same process.

> 2) Move the transport to a global variable, similar to the global _engine, 
> shared by the different components.

Not sure why we need this. Can you please explain in more detail? The better 
way would be to initialize the engine and executor with the transport when we 
create them. If our current structure doesn't allow this easily, we should 
discuss it and change it.

In mistral.engine.engine.py we now have:

 def load_engine():
     global _engine
     module_name = cfg.CONF.engine.engine
     module = importutils.import_module(module_name)
     _engine = module.get_engine()

As an option, we could have the code that loads the engine live in the engine 
launch script (once we decouple it from the API process), so that when we call 
get_engine() we can pass in all needed configuration parameters, like the 
transport.
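
A toy sketch of that option (names and signature are illustrative, not actual Mistral code; the real thing would pull configuration from cfg.CONF and oslo.messaging):

```python
import importlib

_engine = None


def load_engine(module_name, transport):
    """Load the engine module and hand it its configuration explicitly,
    instead of having the engine reach out to globals on its own."""
    global _engine
    module = importlib.import_module(module_name)
    # get_engine() receives the transport at creation time, so neither
    # the engine nor the executor needs a shared global transport.
    _engine = module.get_engine(transport=transport)
    return _engine
```

The point is only that dependencies flow in through the constructor/factory call, which is also what makes the DI-container remark below attractive.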

> 3) Modify the engine and the executor to use a factory method to get the 
> global transport.

If we make a decision on #2, we won't need this.


A side note: when we discuss things like that I really miss DI container :)

Renat Akhmerov
@ Mirantis Inc.




Re: [openstack-dev] [Mistral] Error on running tox

2014-03-11 Thread Renat Akhmerov
Hm.. Interesting. CI wasn’t able to reveal this for some reason.

My first guess is that there’s a race condition somewhere. Did you try to debug 
it? And is this error 100% repeatable?

Renat Akhmerov
@ Mirantis Inc.



On 12 Mar 2014, at 11:18, Manas Kelshikar  wrote:

> I see this error when I run tox. I pulled down a latest copy of master and 
> tried to setup the environment. Any ideas?
> 
> See http://paste.openstack.org/show/73213/ for details. Any help is 
> appreciated.
> 
> 
> 
> Thanks,
> 
> Manas
> 


[openstack-dev] [Mistral] Error on running tox

2014-03-11 Thread Manas Kelshikar
I see this error when I run tox. I pulled down the latest copy of master and
tried to set up the environment. Any ideas?

See http://paste.openstack.org/show/73213/ for details. Any help is
appreciated.


Thanks,

Manas


Re: [openstack-dev] [OpenStack-Dev] Refresher on OSLO-Incubator

2014-03-11 Thread John Griffith
On Tue, Mar 11, 2014 at 9:07 PM, John Griffith
wrote:

> Hey Everyone,
>
> I wanted to send an email out to point out something that we ran across in
> Cinder yesterday.  First I want to review my understanding of how
> OSLO-Incubator is intended to work:
>
> The idea behind having the OSLO repository is to consolidate the various
> modules and such that all of the OpenStack projects use.  Not only is this
> great to reduce code duplication (at least reinventing the wheel), it also
> provides consistency and what should in the end be more reliable modules
> for all of those methods and functionality that all of the OpenStack
> projects share.
>
> Typically in Cinder if a patch comes along that attempts to modify
> anything in cinder/openstack/common directly it's rejected, the reason is
> that the idea of OSLO is that it is to be the master/upstream repository
> for the shared code.  If a change is needed or a bug needs fixing it needs
> to be fixed there first, and then synced back to the other projects.
>
> In my personal opinion the whole concept of OSLO-Incubator falls apart and
> doesn't work if this process isn't followed.  If the OSLO code needs a
> special customization for a single project then we need to look at the
> module and see if it can be modified to suit everyones needs, or said
> project just shouldn't import that module and should use their own (I know
> some won't like that but hey, it's reality).
>
> Anyway, the reason I'm sending this email out is that recently we had a
> problem showing up in CI with Cinder-API logging a ton of tracebacks.  It
> wasn't overly visible at first because the tests were actually passing, but
> it was a problem in logging and the logging messages.  After some digging
> it turned out that the problem was actually a bug in the
> openstack/common/log.py module which we just recently synched from OSLO,
> bug here [1].
>
> When I first started looking at this I discounted the synch with log.py
> because I noticed that other projects (based on git history) had performed
> the same sync recently and had the same version.  After some digging and
> some work by Luis and others however we noticed that those projects had
> patched the log.py file directly in the project (Nova and Glance
> in particular).
>
> So the problem now is that even though we have what we call "common" it
> seems there's a good chance that a number of projects have their own custom
> version of the code that's there.  That defeats the purpose in my opinion.
>  I don't want to argue the concept or policy of OSLO-Incubator code, but my
> point is that we do have a policy and we agreed on it so we should be
> careful to make sure we follow it.  It's easy for things like this to slip
> by so I'm by no means criticizing (especially since I'm sure there's
> similar things in Cinder), I just mentioned it in the project meeting today
> and folks thought it might be good to get it out on the ML to remind all of
> us about the process here.
>
> Thanks,
> John
>
> sorry... it's not nice to reference a link and not include it
[1]: https://bugs.launchpad.net/cinder/+bug/1290503


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Zhi Yan Liu
Jay, thanks for your correct analysis and quick fix.

zhiyan

On Wed, Mar 12, 2014 at 4:11 AM, Jay Pipes  wrote:
> On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:
>>
>> On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:
>> > On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:
>> >> On 03/07/2014 11:16 AM, Russell Bryant wrote:
>> >>> On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
>>  On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
>> > I'd Like to request A FFE for the remaining patches in the Ephemeral
>> > RBD image support chain
>> >
>> > https://review.openstack.org/#/c/59148/
>> > https://review.openstack.org/#/c/59149/
>> >
>> > are still open after their dependency
>> > https://review.openstack.org/#/c/33409/ was merged.
>> >
>> > These should be low risk as:
>> > 1. We have been testing with this code in place.
>> > 2. It's nearly all contained within the RBD driver.
>> >
>> > This is needed as it implements an essential functionality that has
>> > been missing in the RBD driver and this will become the second release
>> > it's been attempted to be merged into.
>> 
>>  Add me as a sponsor.
>> >>>
>> >>> OK, great.  That's two.
>> >>>
>> >>> We have a hard deadline of Tuesday to get these FFEs merged (regardless
>> >>> of gate status).
>> >>>
>> >>
>> >> As alt release manager, FFE approved based on Russell's approval.
>> >>
>> >> The merge deadline for Tuesday is the release meeting, not end of day.
>> >> If it's not merged by the release meeting, it's dead, no exceptions.
>> >
>> > Both commits were merged, thanks a lot to everyone who helped land
>> > this in Icehouse! Especially to Russel and Sean for approving the FFE,
>> > and to Daniel, Michael, and Vish for reviewing the patches!
>> >
>>
>> There was a bug reported today [1] that looks like a regression in this
>> new code, so we need people involved in this looking at it as soon as
>> possible because we have a proposed revert in case we need to yank it
>> out [2].
>>
>> [1] https://bugs.launchpad.net/nova/+bug/1291014
>> [2]
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z
>
> Note that I have identified the source of the problem and am pushing a
> patch shortly with unit tests.
>
> Best,
> -jay
>
>


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Sheng Bo Hou
Hi everyone,

I was excited to hear that live snapshot has been brought into 
discussion in our community. Recently my clients in China came up with 
this live snapshot requirement as well, because they already have 
their legacy environment and expect the original functions to keep 
working when they move to OpenStack. In my opinion, we need to think a 
little bit about these clients' needs, because it is also a potential 
market for OpenStack.

I registered a new blueprint for Nova: 
https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot. It 
is named driver-specific for now, but can be changed later.

The Nova API could be implemented via the extension, the following API may 
be added:
• CreateSnapshot: create a snapshot from the VM. The snapshot can be live 
snapshot or other hypervisor native way to create a snapshot.
• RestoreFromSnapshot: restore/revert the VM from a snapshot.
• DeleteSnapshot: delete a snapshot.
• ListSnapshot: list all the snapshots, or only the snapshots of a given 
VM if a VM id is given.
• SpawnFromSnapshot: spawn a new VM from an existing snapshot, which is 
the live snapshot or the snapshot of other snapshot created in a 
hypervisor native way.
The features in this blueprint can be optional for any drivers. If a 
driver does not have a "native way" to do live snapshot or other kind of 
snapshots, it is fine to leave the API not implemented; if a driver can 
provide the "native feature" to do snapshot, it is an opportunity to 
reinforce Nova with this snapshot support. 
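
Sketched as a Python interface, the optional driver contract described above might look like this (method names follow the list; everything else is an assumption on my part, not code from the blueprint):

```python
class SnapshotDriverMixin:
    """Optional, driver-specific snapshot operations. A driver with no
    native snapshot support simply leaves these unimplemented, matching
    the 'fine to leave the API not implemented' rule above."""

    def create_snapshot(self, vm_id):
        raise NotImplementedError("no native snapshot support")

    def restore_from_snapshot(self, vm_id, snapshot_id):
        raise NotImplementedError("no native snapshot support")

    def delete_snapshot(self, snapshot_id):
        raise NotImplementedError("no native snapshot support")

    def list_snapshots(self, vm_id=None):
        # List all snapshots, or only those belonging to the given VM.
        raise NotImplementedError("no native snapshot support")

    def spawn_from_snapshot(self, snapshot_id):
        raise NotImplementedError("no native snapshot support")
```

A driver that does have a native mechanism overrides whichever methods it can honestly support, and the API extension surfaces only those.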

I sincerely need your comments and hope we can figure it out in a most 
favorable way. 
Thank you so much.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN  E-mail: sb...@cn.ibm.com
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193



Jay Pipes wrote on 2014/03/12 03:15
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] a question about instance snapshot

On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Tuesday, March 11, 2014 3:20 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova] a question about instance snapshot
> >
> > On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> > > We have very strong interest in pursuing this feature in the VMware
> > > driver as well. I would like to see the revert instance feature
> > > implemented at least.
> > >
> > > When I used to work in multi-discipline roles involving operations it
> > > would be common for us to snapshot a vm, run through an upgrade
> > > process, then revert if something did not upgrade smoothly. This
> > > ability alone can be exceedingly valuable in long-lived virtual
> > > machines.
> > >
> > > I also have some comments from parties interested in refactoring how
> > > the VMware drivers handle snapshots but I'm not certain how much that
> > > plays into this "live snapshot" discussion.
> >
> > I think the reason that there isn't much interest in doing this kind of
> > thing is because the worldview that VMs are pets is antithetical to the
> > worldview that VMs are cattle, and Nova tends to favor the latter (where
> > DRS/DPM on vSphere tends to favor the former).
> >
> > There's nothing about your scenario above of being able to "revert" an
> > instance to a particular state that isn't possible with today's Nova.
> > Snapshotting an instance, doing an upgrade of software on the instance,
> > and then restoring from the snapshot if something went wrong (reverting)
> > is already fully possible to do with the regular Nova snapshot and
> > restore operations. The only difference is that the "live-snapshot"
> > stuff would include saving the memory view of a VM in addition to its
> > disk state. And that, at least in my opinion, is only needed when you
> > are treating VMs like pets and not cattle.
> >
> 
> Hi Jay,
> 
> I read every word in your reply and respect what you said.
> 
> But I can't agree with you that memory snapshot is a feature for pets and
> not for cattle. I think it's a feature whatever you look at the instance
> as.
> 
> The world doesn't care what we look at the instance as; in fact,
> currently almost all the mainstream hypervisors have supported memory
> snapshot. If it were just a dispensable feature that no users need, I
> can't understand why the hypervisors provide it without exception.
> 
> In the document "OPENSTACK OPERATIONS GUIDE", section "Live snapshots"
> has the below words:
> "To ensure that important services have written their content

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Jay Pipes
On Wed, 2014-03-12 at 01:47 +, Joshua Harlow wrote:
> The question that I don't understand is why does this process have to be
> involve the database to begin with?
> 
> If you want to archive images per se, on deletion just export it to a
> 'backup tape' (for example) and store enough of the metadata on that
> 'tape' to re-insert it if this is really desired and then delete it from
> the database (or do the export... asynchronously). The same could be said
> with VMs, although likely not all resources, aka networks/.../ make sense
> to do this.
> 
> So instead of deleted = 1, wait for cleaner, just save the resource (if
> possible) + enough metadata on some other system ('backup tape', alternate
> storage location, hdfs, ceph...) and leave it there unless it's really
> needed. Making the database more complex (and all associated code) to
> achieve this same goal seems like a hack that just needs to be addressed
> with a better way to do archiving.
> 
> In a cloudy world of course people would be able to recreate everything
> they need on-demand so who needs undelete anyway ;-)
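
The export-then-hard-delete flow Joshua sketches reduces to something like this (dict-backed stand-ins for the database and the archive store; purely illustrative, not code from any project):

```python
def delete_resource(db, archive, resource_id):
    """Hard-delete the row, but first export the record (plus whatever
    metadata re-insertion would need) to an external archive store."""
    record = db.pop(resource_id)       # no deleted=1 flag, no cleaner job
    archive[resource_id] = record      # 'backup tape', swift, hdfs, ...


def undelete_resource(db, archive, resource_id):
    """Re-insert an archived record, if undelete is really desired."""
    db[resource_id] = archive.pop(resource_id)
```

The database schema and queries stay simple because the row is really gone; all the archival complexity lives in the external store.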

Good points.

Another way to ask the question: does Amazon provide an undelete
functionality?

Best,
-jay




[openstack-dev] [OpenStack-Dev] Refresher on OSLO-Incubator

2014-03-11 Thread John Griffith
Hey Everyone,

I wanted to send an email out to point out something that we ran across in
Cinder yesterday.  First I want to review my understanding of how
OSLO-Incubator is intended to work:

The idea behind having the OSLO repository is to consolidate the various
modules and such that all of the OpenStack projects use.  Not only is this
great to reduce code duplication (at least reinventing the wheel), it also
provides consistency and what should in the end be more reliable modules
for all of those methods and functionality that all of the OpenStack
projects share.

Typically in Cinder if a patch comes along that attempts to modify anything
in cinder/openstack/common directly it's rejected, the reason is that the
idea of OSLO is that it is to be the master/upstream repository for the
shared code.  If a change is needed or a bug needs fixing it needs to be
fixed there first, and then synced back to the other projects.

In my personal opinion the whole concept of OSLO-Incubator falls apart and
doesn't work if this process isn't followed.  If the OSLO code needs a
special customization for a single project then we need to look at the
module and see if it can be modified to suit everyones needs, or said
project just shouldn't import that module and should use their own (I know
some won't like that but hey, it's reality).

Anyway, the reason I'm sending this email out is that recently we had a
problem showing up in CI with Cinder-API logging a ton of tracebacks.  It
wasn't overly visible at first because the tests were actually passing, but
it was a problem in logging and the logging messages.  After some digging
it turned out that the problem was actually a bug in the
openstack/common/log.py module which we just recently synched from OSLO,
bug here [1].

When I first started looking at this I discounted the synch with log.py
because I noticed that other projects (based on git history) had performed
the same sync recently and had the same version.  After some digging and
some work by Luis and others however we noticed that those projects had
patched the log.py file directly in the project (Nova and Glance
in particular).

So the problem now is that even though we have what we call "common" it
seems there's a good chance that a number of projects have their own custom
version of the code that's there.  That defeats the purpose in my opinion.
 I don't want to argue the concept or policy of OSLO-Incubator code, but my
point is that we do have a policy and we agreed on it so we should be
careful to make sure we follow it.  It's easy for things like this to slip
by so I'm by no means criticizing (especially since I'm sure there's
similar things in Cinder), I just mentioned it in the project meeting today
and folks thought it might be good to get it out on the ML to remind all of
us about the process here.

Thanks,
John


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Qin Zhao
Live snapshot is definitely generic. Mainstream hypervisors (vmware, kvm,
hyperv) have supported it for a long time. In
https://review.openstack.org/#/c/34036/, I see that the last comment made by
Russell Bryant is that only the libvirt backend requested to implement it. I am
very curious about that. To my understanding, at least vmware, kvm and hyperv
should propose to implement this function. Why did we abandon that last
year? Can we consider continuing this work in Juno?


On Wed, Mar 12, 2014 at 3:15 AM, Jay Pipes  wrote:

> On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
> > > -Original Message-
> > > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > > Sent: Tuesday, March 11, 2014 3:20 AM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [nova] a question about instance snapshot
> > >
> > > On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> > > > We have very strong interest in pursing this feature in the VMware
> > > > driver as well. I would like to see the revert instance feature
> > > > implemented at least.
> > > >
> > > > When I used to work in multi-discipline roles involving operations it
> > > > would be common for us to snapshot a vm, run through an upgrade
> > > > process, then revert if something did not upgrade smoothly. This
> > > > ability alone can be exceedingly valuable in long-lived virtual
> > > > machines.
> > > >
> > > > I also have some comments from parties interested in refactoring how
> > > > the VMware drivers handle snapshots but I'm not certain how much that
> > > > plays into this "live snapshot" discussion.
> > >
> > > I think the reason that there isn't much interest in doing this kind
> of thing is
> > > because the worldview that VMs are pets is antithetical to the
> worldview that
> > > VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
> > > vSphere tends to favor the former).
> > >
> > > There's nothing about your scenario above of being able to "revert" an
> instance
> > > to a particular state that isn't possible with today's Nova.
> > > Snapshotting an instance, doing an upgrade of software on the
> instance, and
> > > then restoring from the snapshot if something went wrong (reverting) is
> > > already fully possible to do with the regular Nova snapshot and restore
> > > operations. The only difference is that the "live-snapshot"
> > > stuff would include saving the memory view of a VM in addition to its
> disk state.
> > > And that, at least in my opinion, is only needed when you are treating
> VMs like
> > > pets and not cattle.
> > >
> >
> > Hi Jay,
> >
> > I read every word in your reply and respect what you said.
> >
> > But I can't agree with you that memory snapshot is a feature for pets and
> > not for cattle.
> > I think it's a useful feature regardless of how you view the instance.
> >
> > The world doesn't care how we view the instance; in fact, almost all
> > mainstream hypervisors currently support memory snapshots.
> > If it were just a dispensable feature that no users need, I can't
> > understand why the hypervisors all provide it without exception.
> >
> > The "OpenStack Operations Guide", in the section "Live snapshots", has
> > the words below:
> > " To ensure that important services have written their contents to disk
> (such as, databases),
> > we recommend you read the documentation for those applications to
> determine what commands
> > to issue to have them sync their contents to disk. If you are unsure how
> to do this,
> >  the safest approach is to simply stop these running services normally.
> > "
> > This just pushes all the responsibility for guaranteeing the consistency
> > of the instance onto the end user.
> > That's not convenient, and I doubt whether it's appropriate.
>
> Hi Ricky,
>
> I guess we will just have to disagree about the relative usefulness of
> this kind of thing for users of the cloud (and not users of traditional
> managed hosting) :) Like I said, if it does not affect the performance
> of other tenants' instances, I'm fine with adding the functionality in a
> way that is generic (not hypervisor-specific).
>
> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Qin Zhao


Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-11 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-03-10 13:02:47 -0700:
> On 2014-03-10 12:24, Chris Friesen wrote:
> > Hi,
> > 
> > I'm using havana and recently we ran into an issue with heat related to
> > character sets.
> > 
> > In heat/db/sqlalchemy/api.py in user_creds_get() we call
> > _decrypt() on an encrypted password stored in the database and then
> > try to convert the result to unicode.  Today we hit a case where this
> > errored out with the following message:
> > 
> > UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
> > invalid continuation byte
> > 
> > We're using postgres and currently all the databases are using
> > SQL_ASCII as the charset.
> > 
> > I see that in icehouse heat will complain if you're using mysql and
> > not using UTF-8.  There doesn't seem to be any checks for other
> > databases though.
> > 
> > It looks like devstack creates most databases as UTF-8 but uses latin1
> > for nova/nova_bm/nova_cell.  I assume this is because nova expects to
> > migrate the db to UTF-8 later.  Given that those migrations specify a
> > character set only for mysql, when using postgres should we explicitly
> > default to UTF-8 for everything?
> > 
> > Thanks,
> > Chris
> 
> We just had a discussion about this in #openstack-oslo too.  See the 
> discussion starting at 2014-03-10T16:32:26 
> http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log
> 
> While it seems Heat does require utf8 (or at least matching character 
> sets) across all tables, I'm not sure the current solution is good.  It 
> seems like we may want a migration to help with this for anyone who 
> might already have mismatched tables.  There's a lot of overlap between 
> that discussion and how to handle Postgres with this, I think.
> 
> I don't have a definite answer for any of this yet but I think it is 
> something we need to figure out, so hopefully we can get some input from 
> people who know more about the encoding requirements of the Heat and 
> other projects' databases.

Doing a migration for this is haphazard. MySQL has _four_ places which
govern the character set of any operation:

server charset
client charset
db charset
table charset

There are also per-column charsets but those basically trump all the
others.

But MySQL can't possibly know what you _meant_ when you were inserting
data. So, if you _assumed_ that the database was UTF-8, and inserted
UTF-8 with all of those things accidentally set for latin1, then you
will have UTF-8 in your db, but MySQL will think it is latin1. So if you
now try to alter the table to UTF-8, all of your high-byte strings will
be double-encoded.
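The double-encoding Clint describes can be reproduced in a few lines of standalone Python (an illustration only, not OpenStack code):

```python
# UTF-8 data inserted into a column MySQL believes is latin1:
original = "é"
stored_bytes = original.encode("utf-8")          # b'\xc3\xa9' on disk

# ALTER TABLE ... CONVERT TO CHARACTER SET utf8 re-encodes what MySQL
# *thinks* are latin1 characters, producing double-encoded mojibake:
converted = stored_bytes.decode("latin-1").encode("utf-8")
print(converted)              # b'\xc3\x83\xc2\xa9'
print(converted.decode("utf-8"))  # 'Ã©' instead of 'é'
```

The bytes are still "valid UTF-8" after the conversion, which is exactly why the damage is hard to detect after the fact.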

It unfortunately takes analysis to determine what the right course of
action is. That is why we added the check to Heat, so that it would complain
very early if your tables and/or server configuration were going to
disagree with the assumptions.

It would likely be best for there to be a more generally available
solution for stopping and complaining loudly when a badly configured
database is encountered.
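Such a check could be as simple as comparing the charsets the server reports against what the application assumes, and refusing to start on any mismatch. A minimal sketch (the function name and input shape are assumptions; a real check would populate the mapping from information_schema.TABLES on MySQL or pg_database on Postgres):

```python
def find_charset_mismatches(table_charsets, expected="utf8"):
    """Return {table: charset} for every table whose charset disagrees
    with `expected`; a service would refuse to start if this is non-empty."""
    return {table: charset
            for table, charset in table_charsets.items()
            if charset.lower() != expected.lower()}

# Example: one mis-created table should be reported.
mismatches = find_charset_mismatches(
    {"stack": "utf8", "user_creds": "latin1"})
```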



[openstack-dev] [oslo.messaging] mongodb notification driver

2014-03-11 Thread Hiroyuki Eguchi
I'm envisioning a mongodb notification driver.

Currently, for troubleshooting, I'm using the log notification driver,
sending the notification logs to an rsyslog server, and storing them in a
database using the rsyslog-mysql package.

I would like to make it simpler, so I came up with this feature.

Ceilometer can manage notifications using mongodb, but Ceilometer should have 
the role of Metering, not Troubleshooting.

If you have any comments or suggestion, please let me know.
And please let me know if there's any discussion about this.

Thanks.
--hiroyuki
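For what it's worth, a driver along these lines could be quite small. The sketch below assumes oslo.messaging's notification driver interface (a `notify(ctxt, message, priority)` method) and takes the storage collection as a constructor argument so a pymongo collection could be plugged in; none of these names come from an actual implementation:

```python
class MongoDBNotificationDriver(object):
    """Store each notification as one document in a (capped) collection."""

    def __init__(self, collection):
        # `collection` needs only an insert() method: a pymongo
        # Collection in production, any stub object in tests.
        self._collection = collection

    def notify(self, ctxt, message, priority):
        doc = dict(message)        # event_type, payload, timestamp, ...
        doc["priority"] = priority
        self._collection.insert(doc)
```

A capped collection would also give the bounded-growth behavior that the rsyslog-mysql setup lacks.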



Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-11 Thread Isaku Yamahata
Hi. Sorry about that.

Tuesdays at 23:00 UTC is correct.
I mixed up the time due to daylight saving time.
Next week (March 18), the meeting will be held at the correct time.

Again, sorry about that.
thanks,

On Tue, Mar 11, 2014 at 04:15:27PM -0700,
Stephen Wong  wrote:

> Hi Isaku,
> 
> Seems like you had the meeting at 22:00 UTC instead of 23:00 UTC?
> 
> [15:01]  hello? is anybody there for servicevm meeting?
> [15:02]  #startmeeting neutron/servicevm
> [15:02]  Meeting started Tue Mar 11 22:02:14 2014 UTC and is due
> to finish in 60 minutes.  The chair is yamahata. Information about MeetBot
> at http://wiki.debian.org/MeetBot.
> [snip]
> [15:24]  #endmeeting
> [15:24] *** openstack sets the channel topic to " (Meeting topic: project)".
> [15:24]  Meeting ended Tue Mar 11 22:24:08 2014 UTC.
>  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
> 
> To clarify, are you looking at Tuesdays at 22:00 UTC or 23:00 UTC?
> 
> Thanks,
> - Stephen
> 
> 
> 
> On Wed, Mar 5, 2014 at 9:57 AM, Isaku Yamahata 
> wrote:
> 
> > Since I received some mails privately, I'd like to start weekly IRC
> > meeting.
> > The first meeting will be
> >
> >   Tuesdays 23:00UTC from March 11, 2014
> >   #openstack-meeting
> >   https://wiki.openstack.org/wiki/Meetings/ServiceVM
> >   If you have topics to discuss, please add to the page.
> >
> > Sorry if the time is inconvenient for you. The schedule will also be
> > discussed, and the meeting time would be changed from the 2nd one.
> >
> > Thanks,
> >
> > On Mon, Feb 10, 2014 at 03:11:43PM +0900,
> > Isaku Yamahata  wrote:
> >
> > > As the first patch for service vm framework is ready for review[1][2],
> > > it would be a good idea to have IRC meeting.
> > > Anyone interested in it? How about schedule?
> > >
> > > Schedule candidate
> > > Monday  22:00UTC-, 23:00UTC-
> > > Tuesday 22:00UTC-, 23:00UTC-
> > > (Although the slot of the advanced services meeting[3] could be reused,
> > >  it doesn't work for me because my timezone is UTC+9.)
> > >
> > > topics for
> > > - discussion/review on the patch
> > > - next steps
> > > - other open issues?
> > >
> > > [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> > > [2] https://review.openstack.org/#/c/56892/
> > > [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> > > --
> > > Isaku Yamahata 
> >
> > --
> > Isaku Yamahata 
> >
> >



-- 
Isaku Yamahata 



Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joshua Harlow
The question that I don't understand is why this process has to
involve the database to begin with?

If you want to archive images, say, then on deletion just export the image
to a 'backup tape' (for example) and store enough of the metadata on that
'tape' to re-insert it if this is really desired, and then delete it from
the database (or do the export... asynchronously). The same could be done
with VMs, although this likely doesn't make sense for all resources
(networks, for example).

So instead of deleted = 1, wait for cleaner, just save the resource (if
possible) + enough metadata on some other system ('backup tape', alternate
storage location, hdfs, ceph...) and leave it there unless it's really
needed. Making the database more complex (and all associated code) to
achieve this same goal seems like a hack that just needs to be addressed
with a better way to do archiving.
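The "export, then hard-delete" flow described above is small in code terms. A toy illustration (the names and dict-shaped rows are hypothetical; the archive store could be Swift, HDFS, a tape library, etc.):

```python
def archive_then_delete(row, archive_store, delete_row):
    """Write the row plus enough metadata to re-insert it later,
    then remove it from the database for good."""
    archive_store.append({
        "table": row["table"],      # enough metadata to re-insert
        "data": row["data"],
    })
    delete_row(row["data"]["id"])   # hard delete: no deleted=1 flag
```

Undelete then becomes "re-insert from the archive" rather than flipping a soft-delete column back.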

In a cloudy world of course people would be able to recreate everything
they need on-demand so who needs undelete anyway ;-)

My 0.02 cents.

-Original Message-
From: Tim Bell 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Tuesday, March 11, 2014 at 11:43 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of
soft deletion (step by step)

>
>Typical cases are user error where someone accidentally deletes an item
>from a tenant. The image guys have a good structure where images become
>unavailable and are recoverable for a certain period of time. A regular
>periodic task cleans up deleted items after a configurable number of
>seconds to avoid constant database growth.
>
>My preference would be to follow this model universally (an archive table
>is a nice way to do it without disturbing production).
>
>Tim
>
>
>> On Tue, Mar 11, 2014, Mike Wilson  wrote:
>> > Undeleting things is an important use case in my opinion. We do this
>> > in our environment on a regular basis. In that light I'm not sure that
>> > it would be appropriate just to log the deletion and git rid of the
>> > row. I would like to see it go to an archival table where it is
>>easily restored.
>> 
>> I'm curious, what are you undeleting and why?
>> 
>> JE
>> 
>> 
>




Re: [openstack-dev] No route matched for POST

2014-03-11 Thread Vijay B
Hi Aaron!

I was able to get over the route issue - to begin with, it turns out there
was a nasty single-space rogue indent in the file (a peril of not using a
good IDE). Apart from that, stepping through the api/extensions.py code
showed that I shouldn't be overriding the get_plugin_interface() method in
tag.py - because I don't have a plugin associated with this, and would use
the current plugin. Also, there was an attributes variable I was using
wrongly - I had imported it as attr and had to use that.

So the next step for me would be to go ahead with implementing the logic to
check/write to the db. Hopefully I'll get that to work quicker.

Thanks a lot again!

Regards,
Vijay


On Tue, Mar 11, 2014 at 9:42 AM, Vijay B  wrote:

> Hi Aaron!
>
> Yes, attaching the code diffs of the client and server. The diff
> 0001-Frist-commit-to-add-tag-create-CLI.patch needs to be applied on
> python-neutronclient's master branch, and the diff
> 0001-Adding-a-tag-extension.patch needs to be applied on neutron's
> stable/havana branch. After restarting q-svc, please run the CLI `neutron
> tag-create --name tag1 --key key1 --value val1` to test it out.  Thanks for
> offering to take a look at this!
>
> Regards,
> Vijay
>
>
> On Mon, Mar 10, 2014 at 10:10 PM, Aaron Rosen wrote:
>
>> Hi Vijay,
>>
>> I think you'd have to post you're code for anyone to really help you.
>> Otherwise we'll just be taking shots in the dark.
>>
>> Best,
>>
>> Aaron
>>
>>
>> On Mon, Mar 10, 2014 at 7:22 PM, Vijay B  wrote:
>>
>>> Hi,
>>>
>>> I'm trying to implement a new extension API in neutron, but am running
>>> into a "No route matched for POST" on the neutron service.
>>>
>>> I have followed the instructions in the link
>>> https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
>>> trying to implement this extension.
>>>
>>> The extension doesn't depend on any plug in per se, akin to security
>>> groups.
>>>
>>> I have defined a new file in neutron/extensions/, called Tag.py, with a
>>> class Tag extending class extensions.ExtensionDescriptor, like the
>>> documentation requires. Much like many of the other extensions already
>>> implemented, I define my new extension as a dictionary, with fields like
>>> allow_post/allow_put etc, and then pass this to the controller. I still
>>> however run into a no route matched for POST error when I attempt to fire
>>> my CLI to create a tag. I also edited the ml2 plugin file
>>> neutron/plugins/ml2/plugin.py to add "tags" to
>>> _supported_extension_aliases, but that didn't resolve the issue.
>>>
>>> It looks like I'm missing something quite minor, causing the new
>>> extension to not get registered, but I'm not sure what.
>>>
>>> I can provide more info/patches if anyone would like to take a look, and
>>> it would be very much appreciated if someone could help me out with this.
>>>
>>> Thanks!
>>> Regards,
>>> Vijay
>>>
>>>
>>>
>>
>>
>>
>


Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-11 Thread Stephen Wong
Hi Isaku,

Seems like you had the meeting at 22:00 UTC instead of 23:00 UTC?

[15:01]  hello? is anybody there for servicevm meeting?
[15:02]  #startmeeting neutron/servicevm
[15:02]  Meeting started Tue Mar 11 22:02:14 2014 UTC and is due
to finish in 60 minutes.  The chair is yamahata. Information about MeetBot
at http://wiki.debian.org/MeetBot.
[snip]
[15:24]  #endmeeting
[15:24] *** openstack sets the channel topic to " (Meeting topic: project)".
[15:24]  Meeting ended Tue Mar 11 22:24:08 2014 UTC.
 Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)

To clarify, are you looking at Tuesdays at 22:00 UTC or 23:00 UTC?

Thanks,
- Stephen



On Wed, Mar 5, 2014 at 9:57 AM, Isaku Yamahata wrote:

> Since I received some mails privately, I'd like to start weekly IRC
> meeting.
> The first meeting will be
>
>   Tuesdays 23:00UTC from March 11, 2014
>   #openstack-meeting
>   https://wiki.openstack.org/wiki/Meetings/ServiceVM
>   If you have topics to discuss, please add to the page.
>
> Sorry if the time is inconvenient for you. The schedule will also be
> discussed, and the meeting time would be changed from the 2nd one.
>
> Thanks,
>
> On Mon, Feb 10, 2014 at 03:11:43PM +0900,
> Isaku Yamahata  wrote:
>
> > As the first patch for service vm framework is ready for review[1][2],
> > it would be a good idea to have IRC meeting.
> > Anyone interested in it? How about schedule?
> >
> > Schedule candidate
> > Monday  22:00UTC-, 23:00UTC-
> > Tuesday 22:00UTC-, 23:00UTC-
> > (Although the slot of the advanced services meeting[3] could be reused,
> >  it doesn't work for me because my timezone is UTC+9.)
> >
> > topics for
> > - discussion/review on the patch
> > - next steps
> > - other open issues?
> >
> > [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> > [2] https://review.openstack.org/#/c/56892/
> > [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> > --
> > Isaku Yamahata 
>
> --
> Isaku Yamahata 
>
>


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/11/2014 5:11 PM, Dmitry Borodaenko wrote:

On Tue, Mar 11, 2014 at 1:31 PM, Matt Riedemann
 wrote:

There was a bug reported today [1] that looks like a regression in this
new code, so we need people involved in this looking at it as soon as
possible because we have a proposed revert in case we need to yank it
out [2].

[1] https://bugs.launchpad.net/nova/+bug/1291014
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.


My concern is how much other code assumes nova is working with the glance v2
API, because there was a nova blueprint [1] to make nova work with the glance
v2 API but that never landed in Icehouse, so I'm worried about whack-a-mole
type problems here, especially since there is no tempest coverage for
testing multiple image location support via nova.

[1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api


As I mentioned in the bug comments, the code that made the assumption
about glance v2 API actually landed in September 2012:
https://review.openstack.org/13017

The multiple image location patch simply made use of a method that was
already there for more than a year.

-DmitryB




Yeah, I pointed that out today in IRC also.

So kudos to Jay for getting a patch up quickly, and a really nice one at 
that with extensive test coverage.


What I'd like to see in Juno is a tempest test that covers the multiple 
image locations code since it seems we obviously don't have that today. 
 How hard is something like that with an API test?


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-11 Thread W Chan
I want to propose the following changes to implement the local executor and
the removal of the local engine.  As mentioned before, oslo.messaging
includes a "fake" driver that uses a simple queue.  An example of the use of
this fake driver is demonstrated in test_executor.  The use of the fake
driver requires that both the consumer and the publisher of the queue are
running in the same process so the queue is in scope.  Currently, the
api/engine and the executor are launched in separate processes.

Here're the proposed changes.
1) Rewrite the launch script to be more generic, with options to launch all
components (i.e. API, engine, executor) in the same process over separate
threads, or to launch each individually.
2) Move the transport to a global variable, similar to the global _engine,
shared by the different components.
3) Modify the engine and the executor to use a factory method to get the
global transport.

This doesn't change how the workflows are processed.  It just changes
how the services are launched.

Thoughts?
Winson
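A sketch of items 2 and 3 above - a process-global transport behind a factory (all names here are illustrative; in Mistral the creation function would presumably be something like `lambda: messaging.get_transport(cfg.CONF, "fake://")`, so that API, engine, and executor threads share one in-process queue):

```python
_TRANSPORT = None

def get_transport(create_transport):
    """Create the shared transport on first use, then reuse it.

    When all components run as threads of one process with the fake
    driver, they all see the same in-memory queue through this object.
    """
    global _TRANSPORT
    if _TRANSPORT is None:
        _TRANSPORT = create_transport()
    return _TRANSPORT
```

Because the fake driver's queue lives in process memory, sharing one transport object is what makes the single-process launch mode work.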


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Sukhdev Kapur
Hey Monty,

The issue is: when we are using stack.sh, how do we use a cache dir as
opposed to going to github?
Is there any option that can be set to utilize this feature?

-Sukhdev
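One approach (the variable names follow devstack's `*_REPO`/`*_BRANCH` convention - check your devstack version's lib/nova for the exact names) is to pre-seed a local mirror and point devstack at it in localrc:

```shell
# Pre-seed a local mirror once (re-run to refresh it):
git clone --mirror https://github.com/kanaka/noVNC.git /opt/git-cache/noVNC.git

# In localrc: make stack.sh clone from the mirror instead of github
NOVNC_REPO=file:///opt/git-cache/noVNC.git
NOVNC_BRANCH=master
```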



On Tue, Mar 11, 2014 at 4:42 PM, Monty Taylor  wrote:

> Honestly not being snarky here ... The reason is that github is quite
> flaky. We try very hard to never touch it in infra. And by try, I mean we
> NEVER clone from it live, and if we absolutely can't avoid it for some
> reason, we clone into a cache dir.
>
> On Mar 11, 2014 4:28 PM, "Dane Leblanc (leblancd)" 
> wrote:
> >
> > Apologies if this is the wrong audience for this question…
> >
> >
> >
> > I’m seeing intermittent failures running stack.sh whereby ‘git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning
> various errors.  Below are 2 examples.
> >
> >
> >
> > Is this a known issue? Are there any localrc settings which might help
> here?
> >
> >
> >
> > Example 1:
> >
> >
> >
> > 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc
> >
> > 2014-03-11 15:00:33.780 | + return 0
> >
> > 2014-03-11 15:00:33.781 | ++ trueorfalse False
> >
> > 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False
> >
> > 2014-03-11 15:00:33.783 | + '[' False = True ']'
> >
> > 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC
> >
> > 2014-03-11 15:00:33.785 | + git_clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC master
> >
> > 2014-03-11 15:00:33.786 | + GIT_REMOTE=
> https://github.com/kanaka/noVNC.git
> >
> > 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC
> >
> > 2014-03-11 15:00:33.789 | + GIT_REF=master
> >
> > 2014-03-11 15:00:33.790 | ++ trueorfalse False False
> >
> > 2014-03-11 15:00:33.791 | + RECLONE=False
> >
> > 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]]
> >
> > 2014-03-11 15:00:33.793 | + echo master
> >
> > 2014-03-11 15:00:33.794 | + egrep -q '^refs'
> >
> > 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]]
> >
> > 2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]]
> >
> > 2014-03-11 15:00:33.797 | + git_timed clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
> >
> > 2014-03-11 15:00:33.798 | + local count=0
> >
> > 2014-03-11 15:00:33.799 | + local timeout=0
> >
> > 2014-03-11 15:00:33.801 | + [[ -n 0 ]]
> >
> > 2014-03-11 15:00:33.802 | + timeout=0
> >
> > 2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
> >
> > 2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'...
> >
> > 2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200
> >
> > 2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly
> >
> > 2014-03-11 15:03:13.697 | fatal: early EOF
> >
> > 2014-03-11 15:03:13.698 | fatal: index-pack failed
> >
> > 2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]]
> >
> > 2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone'
> https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'
> >
> > 2014-03-11 15:03:13.701 | + local exitcode=0
> >
> > 2014-03-11 15:03:13.702 | [Call Trace]
> >
> > 2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova
> >
> > 2014-03-11 15:03:13.705 |
> /var/lib/jenkins/devstack/lib/nova:618:git_clone
> >
> > 2014-03-11 15:03:13.706 |
> /var/lib/jenkins/devstack/functions-common:543:git_timed
> >
> > 2014-03-11 15:03:13.707 |
> /var/lib/jenkins/devstack/functions-common:596:die
> >
> > 2014-03-11 15:03:13.708 | [ERROR]
> /var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC]
> >
> >
> >
> >
> >
> > Example 2:
> >
> >
> >
> > 2014-03-11 14:12:58.472 | + is_service_enabled n-novnc
> >
> > 2014-03-11 14:12:58.473 | + return 0
> >
> > 2014-03-11 14:12:58.474 | ++ trueorfalse False
> >
> > 2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False
> >
> > 2014-03-11 14:12:58.476 | + '[' False = True ']'
> >
> > 2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC
> >
> > 2014-03-11 14:12:58.478 | + git_clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC master
> >
> > 2014-03-11 14:12:58.479 | + GIT_REMOTE=
> https://github.com/kanaka/noVNC.git
> >
> > 2014-03-11 14:12:58.480 | + GIT_DEST=/opt/stack/noVNC
> >
> > 2014-03-11 14:12:58.481 | + GIT_REF=master
> >
> > 2014-03-11 14:12:58.482 | ++ trueorfalse False False
> >
> > 2014-03-11 14:12:58.483 | + RECLONE=False
> >
> > 2014-03-11 14:12:58.484 | + [[ False = \T\r\u\e ]]
> >
> > 2014-03-11 14:12:58.485 | + echo master
> >
> > 2014-03-11 14:12:58.486 | + egrep -q '^refs'
> >
> > 2014-03-11 14:12:58.487 | + [[ ! -d /opt/stack/noVNC ]]
> >
> > 2014-03-11 14:12:58.488 | + [[ False = \T\r\u\e ]]
> >
> > 2014-03-11 14:12:58.489 | + git_timed clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
> >
> > 2014-03-11 14:12:58.490 | + local count=0
> >
> > 2014-03-11 14:12:58.491 | + local timeout=0
> >
> > 2014-03-11 14:12:58.492 | + [[ -n 0 ]]
> >
> > 2014-03-11 14:12:58.493 | + timeout=0
> >
> > 2014-03-11 14:12:58.494 | + timeout -s SIGINT 0 g

Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Sukhdev Kapur
[adding openstack-dev list as well ]

I have noticed that this has started hitting my builds within the last few
hours. I have noticed the exact same failures on almost 10 builds.
Looks like something has happened within the last few hours - perhaps the load?

-Sukhdev



On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd)  wrote:

>  Apologies if this is the wrong audience for this question…
>
>
>
> I’m seeing intermittent failures running stack.sh whereby ‘git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning
> various errors.  Below are 2 examples.
>
>
>
> Is this a known issue? Are there any localrc settings which might help
> here?
>
>
>
> Example 1:
>
>
>
> 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc
>
> 2014-03-11 15:00:33.780 | + return 0
>
> 2014-03-11 15:00:33.781 | ++ trueorfalse False
>
> 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False
>
> 2014-03-11 15:00:33.783 | + '[' False = True ']'
>
> 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC
>
> 2014-03-11 15:00:33.785 | + git_clone 
> https://github.com/kanaka/noVNC.git/opt/stack/noVNC master
>
> 2014-03-11 15:00:33.786 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git
>
> 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC
>
> 2014-03-11 15:00:33.789 | + GIT_REF=master
>
> 2014-03-11 15:00:33.790 | ++ trueorfalse False False
>
> 2014-03-11 15:00:33.791 | + RECLONE=False
>
> 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]]
>
> 2014-03-11 15:00:33.793 | + echo master
>
> 2014-03-11 15:00:33.794 | + egrep -q '^refs'
>
> 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]]
>
> 2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]]
>
> 2014-03-11 15:00:33.797 | + git_timed clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>
> 2014-03-11 15:00:33.798 | + local count=0
>
> 2014-03-11 15:00:33.799 | + local timeout=0
>
> 2014-03-11 15:00:33.801 | + [[ -n 0 ]]
>
> 2014-03-11 15:00:33.802 | + timeout=0
>
> 2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>
> 2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'...
>
> 2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200
>
> 2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly
>
> 2014-03-11 15:03:13.697 | fatal: early EOF
>
> 2014-03-11 15:03:13.698 | fatal: index-pack failed
>
> 2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]]
>
> 2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone'
> https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'
>
> 2014-03-11 15:03:13.701 | + local exitcode=0
>
> 2014-03-11 15:03:13.702 | [Call Trace]
>
> 2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova
>
> 2014-03-11 15:03:13.705 | /var/lib/jenkins/devstack/lib/nova:618:git_clone
>
> 2014-03-11 15:03:13.706 |
> /var/lib/jenkins/devstack/functions-common:543:git_timed
>
> 2014-03-11 15:03:13.707 |
> /var/lib/jenkins/devstack/functions-common:596:die
>
> 2014-03-11 15:03:13.708 | [ERROR]
> /var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC]
>
>
>
>
>
> Example 2:
>
>
>
> 2014-03-11 14:12:58.472 | + is_service_enabled n-novnc
>
> 2014-03-11 14:12:58.473 | + return 0
>
> 2014-03-11 14:12:58.474 | ++ trueorfalse False
>
> 2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False
>
> 2014-03-11 14:12:58.476 | + '[' False = True ']'
>
> 2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC
>
> 2014-03-11 14:12:58.478 | + git_clone https://github.com/kanaka/noVNC.git 
> /opt/stack/noVNC master
>
> 2014-03-11 14:12:58.479 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git
>
> 2014-03-11 14:12:58.480 | + GIT_DEST=/opt/stack/noVNC
>
> 2014-03-11 14:12:58.481 | + GIT_REF=master
>
> 2014-03-11 14:12:58.482 | ++ trueorfalse False False
>
> 2014-03-11 14:12:58.483 | + RECLONE=False
>
> 2014-03-11 14:12:58.484 | + [[ False = \T\r\u\e ]]
>
> 2014-03-11 14:12:58.485 | + echo master
>
> 2014-03-11 14:12:58.486 | + egrep -q '^refs'
>
> 2014-03-11 14:12:58.487 | + [[ ! -d /opt/stack/noVNC ]]
>
> 2014-03-11 14:12:58.488 | + [[ False = \T\r\u\e ]]
>
> 2014-03-11 14:12:58.489 | + git_timed clone 
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>
> 2014-03-11 14:12:58.490 | + local count=0
>
> 2014-03-11 14:12:58.491 | + local timeout=0
>
> 2014-03-11 14:12:58.492 | + [[ -n 0 ]]
>
> 2014-03-11 14:12:58.493 | + timeout=0
>
> 2014-03-11 14:12:58.494 | + timeout -s SIGINT 0 git clone 
> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>
> 2014-03-11 14:12:58.495 | Cloning into '/opt/stack/noVNC'...
>
> 2014-03-11 14:14:02.315 | error: The requested URL returned error: 403 while 
> accessing https://github.com/kanaka/noVNC.git/info/refs
>
> 2014-03-11 14:14:02.316 | fatal: HTTP request failed
>
> 2014-03-11 14:14:02.317 | + [[ 128 -ne 124 ]]
>
> 2014-03-11 14:14:02.318 | + die 596 'git call failed: [git clone' 
> https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'
>
> 2014-03-11 14:14:02

Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Sukhdev Kapur
I have noticed that even a clone of devstack has failed a few times within
the last couple of hours - it was running fairly smoothly before.

-Sukhdev



On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur wrote:

> [adding openstack-dev list as well ]
>
> I have noticed that this has stated hitting my builds within last few
> hours. I have noticed exact same failures on almost 10 builds.
> Looks like something has happened within last few hours - perhaps the
> load?
>
> -Sukhdev
>
>
>
> On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) <
> lebla...@cisco.com> wrote:
>
>>  Apologies if this is the wrong audience for this question…
>>
>>
>>
>> I’m seeing intermittent failures running stack.sh whereby ‘git clone
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning
>> various errors.  Below are 2 examples.
>>
>>
>>
>> Is this a known issue? Are there any localrc settings which might help
>> here?
>>
>>
>>
>> Example 1:
>>
>>
>>
>> 2014-03-11 15:00:33.779 | + is_service_enabled n-novnc
>>
>> 2014-03-11 15:00:33.780 | + return 0
>>
>> 2014-03-11 15:00:33.781 | ++ trueorfalse False
>>
>> 2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False
>>
>> 2014-03-11 15:00:33.783 | + '[' False = True ']'
>>
>> 2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC
>>
>> 2014-03-11 15:00:33.785 | + git_clone 
>> https://github.com/kanaka/noVNC.git/opt/stack/noVNC master
>>
>> 2014-03-11 15:00:33.786 | + GIT_REMOTE=
>> https://github.com/kanaka/noVNC.git
>>
>> 2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC
>>
>> 2014-03-11 15:00:33.789 | + GIT_REF=master
>>
>> 2014-03-11 15:00:33.790 | ++ trueorfalse False False
>>
>> 2014-03-11 15:00:33.791 | + RECLONE=False
>>
>> 2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]]
>>
>> 2014-03-11 15:00:33.793 | + echo master
>>
>> 2014-03-11 15:00:33.794 | + egrep -q '^refs'
>>
>> 2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]]
>>
>> 2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]]
>>
>> 2014-03-11 15:00:33.797 | + git_timed clone
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>>
>> 2014-03-11 15:00:33.798 | + local count=0
>>
>> 2014-03-11 15:00:33.799 | + local timeout=0
>>
>> 2014-03-11 15:00:33.801 | + [[ -n 0 ]]
>>
>> 2014-03-11 15:00:33.802 | + timeout=0
>>
>> 2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>>
>> 2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'...
>>
>> 2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200
>>
>> 2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly
>>
>> 2014-03-11 15:03:13.697 | fatal: early EOF
>>
>> 2014-03-11 15:03:13.698 | fatal: index-pack failed
>>
>> 2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]]
>>
>> 2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone'
>> https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'
>>
>> 2014-03-11 15:03:13.701 | + local exitcode=0
>>
>> 2014-03-11 15:03:13.702 | [Call Trace]
>>
>> 2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova
>>
>> 2014-03-11 15:03:13.705 | /var/lib/jenkins/devstack/lib/nova:618:git_clone
>>
>> 2014-03-11 15:03:13.706 |
>> /var/lib/jenkins/devstack/functions-common:543:git_timed
>>
>> 2014-03-11 15:03:13.707 |
>> /var/lib/jenkins/devstack/functions-common:596:die
>>
>> 2014-03-11 15:03:13.708 | [ERROR]
>> /var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC]
>>
>>
>>
>>
>>
>> Example 2:
>>
>>
>>
>> 2014-03-11 14:12:58.472 | + is_service_enabled n-novnc
>>
>> 2014-03-11 14:12:58.473 | + return 0
>>
>> 2014-03-11 14:12:58.474 | ++ trueorfalse False
>>
>> 2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False
>>
>> 2014-03-11 14:12:58.476 | + '[' False = True ']'
>>
>> 2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC
>>
>> 2014-03-11 14:12:58.478 | + git_clone https://github.com/kanaka/noVNC.git 
>> /opt/stack/noVNC master
>>
>> 2014-03-11 14:12:58.479 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git
>>
>> 2014-03-11 14:12:58.480 | + GIT_DEST=/opt/stack/noVNC
>>
>> 2014-03-11 14:12:58.481 | + GIT_REF=master
>>
>> 2014-03-11 14:12:58.482 | ++ trueorfalse False False
>>
>> 2014-03-11 14:12:58.483 | + RECLONE=False
>>
>> 2014-03-11 14:12:58.484 | + [[ False = \T\r\u\e ]]
>>
>> 2014-03-11 14:12:58.485 | + echo master
>>
>> 2014-03-11 14:12:58.486 | + egrep -q '^refs'
>>
>> 2014-03-11 14:12:58.487 | + [[ ! -d /opt/stack/noVNC ]]
>>
>> 2014-03-11 14:12:58.488 | + [[ False = \T\r\u\e ]]
>>
>> 2014-03-11 14:12:58.489 | + git_timed clone 
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>>
>> 2014-03-11 14:12:58.490 | + local count=0
>>
>> 2014-03-11 14:12:58.491 | + local timeout=0
>>
>> 2014-03-11 14:12:58.492 | + [[ -n 0 ]]
>>
>> 2014-03-11 14:12:58.493 | + timeout=0
>>
>> 2014-03-11 14:12:58.494 | + timeout -s SIGINT 0 git clone 
>> https://github.com/kanaka/noVNC.git /opt/stack/noVNC
>>
>> 2014-03-11 14:12:58.495 | Cloning into '/opt/stack/noVN

[openstack-dev] [Glance] Need to revert "Don't enable all stores by default"

2014-03-11 Thread Clint Byrum
Hi. I asked in #openstack-glance a few times today but got no response,
so sorry for the list spam.

https://review.openstack.org/#/c/79710/

This change introduces a backward incompatible change to defaults with
Havana. If a user has chosen to configure swift, but did not add swift
to the known_stores, then when that user upgrades Glance, Glance will
fail to start because their swift configuration will be invalid.

This broke TripleO btw, which tries hard to use default configurations.

Also I am not really sure why this approach was taken. If a user has
explicitly put swift configuration options in their config file, why
not just load swift store? Oslo.config will help here in that you can
just add all of the config options but not actually expect them to be
set. It seems entirely backwards to just fail in this case.
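A minimal sketch of what Clint is suggesting: derive the stores to load from the options the operator actually set, instead of failing when known_stores and the rest of the config disagree. The option names and function below are illustrative placeholders, not Glance's real configuration schema:

```python
# Hypothetical option names -- the real swift store options differ.
SWIFT_OPTS = ("swift_store_auth_address", "swift_store_user", "swift_store_key")


def stores_to_load(conf):
    """Return the store names implied by the options present in conf.

    conf is a plain dict here; in Glance this would come from oslo.config,
    where all store options can be registered without being required.
    """
    stores = ["filesystem"]  # the default store stays enabled
    if any(conf.get(opt) for opt in SWIFT_OPTS):
        stores.append("swift")  # swift config present, so load the store
    return stores
```

With this approach an operator who configured swift in Havana but never touched known_stores would keep working after an upgrade.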

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka

2014-03-11 Thread Joshua Harlow
https://status.github.com/messages

* 'GitHub.com is operating normally, despite an ongoing DDoS attack. The 
mitigations we have in place are proving effective in protecting us and we're 
hopeful that we've got this one resolved.'

If you were cloning from github.com and not http://git.openstack.org then you 
were likely seeing some of the DDoS attack in action.

From: Sukhdev Kapur <sukhdevka...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, March 11, 2014 at 4:08 PM
To: "Dane Leblanc (leblancd)" <lebla...@cisco.com>
Cc: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, "openstack-in...@lists.openstack.org" <openstack-in...@lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack-Infra] Intermittent failures cloning 
noVNC from github.com/kanaka

I have noticed that even a clone of devstack has failed a few times within the 
last couple of hours - it was running fairly smoothly until now.

-Sukhdev



On Tue, Mar 11, 2014 at 5:05 PM, Sukhdev Kapur <sukhdevka...@gmail.com> wrote:
[adding openstack-dev list as well ]

I have noticed that this has started hitting my builds within the last few 
hours. I have noticed the exact same failures on almost 10 builds.
Looks like something has happened within the last few hours - perhaps the load?

-Sukhdev



On Tue, Mar 11, 2014 at 4:28 PM, Dane Leblanc (leblancd) <lebla...@cisco.com> wrote:
Apologies if this is the wrong audience for this question…

I’m seeing intermittent failures running stack.sh whereby ‘git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC’ is returning various 
errors.  Below are 2 examples.

Is this a known issue? Are there any localrc settings which might help here?

Example 1:

2014-03-11 15:00:33.779 | + is_service_enabled n-novnc
2014-03-11 15:00:33.780 | + return 0
2014-03-11 15:00:33.781 | ++ trueorfalse False
2014-03-11 15:00:33.782 | + NOVNC_FROM_PACKAGE=False
2014-03-11 15:00:33.783 | + '[' False = True ']'
2014-03-11 15:00:33.784 | + NOVNC_WEB_DIR=/opt/stack/noVNC
2014-03-11 15:00:33.785 | + git_clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC master
2014-03-11 15:00:33.786 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git
2014-03-11 15:00:33.788 | + GIT_DEST=/opt/stack/noVNC
2014-03-11 15:00:33.789 | + GIT_REF=master
2014-03-11 15:00:33.790 | ++ trueorfalse False False
2014-03-11 15:00:33.791 | + RECLONE=False
2014-03-11 15:00:33.792 | + [[ False = \T\r\u\e ]]
2014-03-11 15:00:33.793 | + echo master
2014-03-11 15:00:33.794 | + egrep -q '^refs'
2014-03-11 15:00:33.795 | + [[ ! -d /opt/stack/noVNC ]]
2014-03-11 15:00:33.796 | + [[ False = \T\r\u\e ]]
2014-03-11 15:00:33.797 | + git_timed clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC
2014-03-11 15:00:33.798 | + local count=0
2014-03-11 15:00:33.799 | + local timeout=0
2014-03-11 15:00:33.801 | + [[ -n 0 ]]
2014-03-11 15:00:33.802 | + timeout=0
2014-03-11 15:00:33.803 | + timeout -s SIGINT 0 git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC
2014-03-11 15:00:33.804 | Cloning into '/opt/stack/noVNC'...
2014-03-11 15:03:13.694 | error: RPC failed; result=56, HTTP code = 200
2014-03-11 15:03:13.695 | fatal: The remote end hung up unexpectedly
2014-03-11 15:03:13.697 | fatal: early EOF
2014-03-11 15:03:13.698 | fatal: index-pack failed
2014-03-11 15:03:13.699 | + [[ 128 -ne 124 ]]
2014-03-11 15:03:13.700 | + die 596 'git call failed: [git clone' 
https://github.com/kanaka/noVNC.git '/opt/stack/noVNC]'
2014-03-11 15:03:13.701 | + local exitcode=0
2014-03-11 15:03:13.702 | [Call Trace]
2014-03-11 15:03:13.703 | ./stack.sh:736:install_nova
2014-03-11 15:03:13.705 | /var/lib/jenkins/devstack/lib/nova:618:git_clone
2014-03-11 15:03:13.706 | 
/var/lib/jenkins/devstack/functions-common:543:git_timed
2014-03-11 15:03:13.707 | /var/lib/jenkins/devstack/functions-common:596:die
2014-03-11 15:03:13.708 | [ERROR] 
/var/lib/jenkins/devstack/functions-common:596 git call failed: [git clone 
https://github.com/kanaka/noVNC.git /opt/stack/noVNC]


Example 2:


2014-03-11 14:12:58.472 | + is_service_enabled n-novnc

2014-03-11 14:12:58.473 | + return 0

2014-03-11 14:12:58.474 | ++ trueorfalse False

2014-03-11 14:12:58.475 | + NOVNC_FROM_PACKAGE=False

2014-03-11 14:12:58.476 | + '[' False = True ']'

2014-03-11 14:12:58.477 | + NOVNC_WEB_DIR=/opt/stack/noVNC

2014-03-11 14:12:58.478 | + git_clone https://github.com/kanaka/noVNC.git 
/opt/stack/noVNC master

2014-03-11 14:12:58.479 | + GIT_REMOTE=https://github.com/kanaka/noVNC.git

2014-03-11 14:12:58.480 | + GIT_DEST=/opt/stack/noVNC

2014-03-11 14:12:58.481 | + GIT_REF=master

2014-03-11 14:12:58.482 | ++ trueorfalse False False

2014-03-11 14:12:58.483 | + RECLONE=False

2014-03-11 14:12:58.484 | + [[ False = \T\r\u\e ]]

2014-03-11 14:12:58.485 | + echo master

2014-03-11 14:12:58.486 | + 
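The failures in these logs are transient: devstack's git_timed wraps the clone in `timeout` but gives up on the first error. A minimal retry-with-backoff sketch for such flaky clones, assuming nothing about devstack's actual implementation (the function name and defaults are illustrative):

```python
import subprocess
import time


def run_with_retries(cmd, attempts=3, timeout=300, backoff=5):
    """Run cmd, retrying on non-zero exit or timeout with linear backoff.

    Illustrative only -- not devstack's git_timed. Returns True on
    success, False once all attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            subprocess.run(cmd, check=True, timeout=timeout)
            return True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            if attempt == attempts:
                return False
            time.sleep(backoff * attempt)  # back off harder each retry


# e.g. run_with_retries(["git", "clone",
#                        "https://git.openstack.org/openstack/nova",
#                        "/opt/stack/nova"])
```

Cloning from git.openstack.org instead of github.com, as suggested elsewhere in this thread, sidesteps the GitHub outage entirely.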

Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread Xurong Yang
Hi Lingxian & marios,
Thanks for the responses. Yes, personally speaking, we should be using UUIDs
instead of names, just as with network_id and port_id: a name (which is not a
key) can't uniquely identify a security group. I don't know what other folks
think, but maybe we need to fix this.

Thanks,
Xurong
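A sketch of the UUID-first lookup being suggested here: treat the requested value as an ID when it matches one, and fall back to name matching (with the ambiguity check) only otherwise. This is illustrative code written against the nova snippet quoted below, not nova's actual implementation:

```python
class NoUniqueMatch(Exception):
    pass


def resolve_security_group(requested, user_security_groups):
    """Resolve a requested security group to its ID, preferring UUIDs.

    user_security_groups is a list of dicts with 'id' and 'name' keys,
    as returned by neutron.list_security_groups().
    """
    if any(sg["id"] == requested for sg in user_security_groups):
        return requested  # IDs are unique, so no ambiguity is possible
    # Fall back to name matching; names may collide, so check uniqueness.
    matches = [sg["id"] for sg in user_security_groups
               if sg["name"] == requested]
    if len(matches) > 1:
        raise NoUniqueMatch("Multiple security groups found matching "
                            "'%s'. Use an ID to be more specific." % requested)
    if not matches:
        raise LookupError("No security group matching '%s'" % requested)
    return matches[0]
```

With this ordering, passing a UUID always works even when duplicate names exist in the tenant.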


2014-03-11 21:33 GMT+08:00 mar...@redhat.com :

> On 11/03/14 10:20, Xurong Yang wrote:
> > It's allowed to create duplicate sg with the same name.
> > so exception happens when creating instance with the duplicate sg name.
>
> Hi Xurong - fyi there is a review open which raises this particular
> point at https://review.openstack.org/#/c/79270/2 (together with
> associated bug).
>
> imo we shouldn't be using 'name' to distinguish security groups - that's
> what the UUID is for,
>
> thanks, marios
>
> > code following:
> > 
> > security_groups = kwargs.get('security_groups', [])
> > security_group_ids = []
> >
> > # TODO(arosen) Should optimize more to do direct query for
> security
> > # group if len(security_groups) == 1
> > if len(security_groups):
> > search_opts = {'tenant_id': instance['project_id']}
> > user_security_groups = neutron.list_security_groups(
> > **search_opts).get('security_groups')
> >
> > for security_group in security_groups:
> > name_match = None
> > uuid_match = None
> > for user_security_group in user_security_groups:
> > if user_security_group['name'] == security_group:
> > if name_match:---exception happened here
> > raise exception.NoUniqueMatch(
> > _("Multiple security groups found matching"
> >   " '%s'. Use an ID to be more specific.") %
> >security_group)
> >
> > name_match = user_security_group['id']
> >   
> >
> > so it's maybe improper to create instance with the sg name parameter.
> > appreciation if any response.
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Gordon Chung
i did notice the collector service was only ever writing one db connection 
at a time. i've opened a bug for that here: 
https://bugs.launchpad.net/ceilometer/+bug/1291054

i am curious as to why postgresql passes but not mysql? is postgres 
actually faster or are its default configurations set up better?

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Dmitry Borodaenko
On Tue, Mar 11, 2014 at 1:31 PM, Matt Riedemann
 wrote:
>>> There was a bug reported today [1] that looks like a regression in this
>>> new code, so we need people involved in this looking at it as soon as
>>> possible because we have a proposed revert in case we need to yank it
>>> out [2].
>>>
>>> [1] https://bugs.launchpad.net/nova/+bug/1291014
>>> [2] 
>>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z
>>
>> Note that I have identified the source of the problem and am pushing a
>> patch shortly with unit tests.
>
> My concern is how much else where assumes nova is working with the glance v2
> API because there was a nova blueprint [1] to make nova work with the glance
V2 API but that never landed in Icehouse, so I'm worried about whack-a-mole
> type problems here, especially since there is no tempest coverage for
> testing multiple image location support via nova.
>
> [1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api

As I mentioned in the bug comments, the code that made the assumption
about glance v2 API actually landed in September 2012:
https://review.openstack.org/13017

The multiple image location patch simply made use of a method that was
already there for more than a year.

-DmitryB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MuranoPL questions?

2014-03-11 Thread Joshua Harlow
I guess I might be a bit biased toward programming, so maybe I'm not the target 
audience.

I'm not exactly against DSLs; I just think that a DSL needs to be really, really 
proven to become useful (in general this applies to any language that 'joe' 
comp-sci student can create). It's not that hard to just make one, but the real 
hard part is making one that people actually like and use and that survives the 
test of time. That's why I think it's just nicer to use languages that have 
stood the test of time already (if we can); creating a new DSL (MuranoPL seems 
to be slightly more than a DSL imho) means creating a new language that has not 
stood the test of time (in terms of lifetime, being battle tested, supported 
over years), so that's just the concern I have.

Of course we have to accept innovation and I hope that the DSL/s makes it 
easier/simpler, I just tend to be a bit more pragmatic maybe in this area.

Here's hoping for the best! :-)

-Josh

From: Renat Akhmerov <rakhme...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, March 10, 2014 at 8:36 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] MuranoPL questions?

Although being a little bit verbose it makes a lot of sense to me.

@Joshua,

Even assuming Python could be sandboxed, and whatever else is needed to be able 
to use it as a DSL (for something like Mistral, Murano or Heat) is done, why do 
you think Python would be a better alternative for people who know neither 
these new DSLs nor Python itself? Especially given the fact that Python has A 
LOT of things that they'd never use. I know many people who have been 
programming in Python for a while and they admit they don't know all the 
nuances of Python and actually use 30-40% of all of its capabilities, even 
outside domain-specific development. So narrowing the feature set that a 
language provides and limiting it to a certain domain vocabulary is what helps 
people solve tasks of that specific domain much more easily and in the most 
expressive, natural way, without having to learn the tons and tons of details 
that a general-purpose language (GPL, hah :) ) provides (btw, the reason to 
write thick books).

I agree with Stan: if you begin to use a technology you'll have to learn 
something anyway, be it the TaskFlow API and principles or a DSL. A 
well-designed DSL just encapsulates the essential principles of the system it 
is used for. By learning the DSL you're learning the system itself, as simple 
as that.

Renat Akhmerov
@ Mirantis Inc.



On 10 Mar 2014, at 05:35, Stan Lagun <sla...@mirantis.com> wrote:

> I'd be very interested in knowing the resource controls u plan to add. 
> Memory, CPU...
We haven't discussed it yet. Any suggestions are welcomed

> I'm still trying to figure out where something like 
> https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.demoApp.DemoInstance/manifest.yaml
>  would be beneficial, why not > just spend effort sand boxing lua, python... 
> Instead of spending effort on creating a new language and then having to 
> sandbox it as well... Especially if u picked languages that are made to be  
> sandboxed from the start (not python)...

1. See my detailed answer in Mistral thread why haven't we used any of those 
languages. There are many reasons besides sandboxing.

2. You don't need to sandbox MuranoPL. Sandboxing is restricting some 
operations. In MuranoPL ALL operations (including operators in expressions, 
functions, methods etc.) are just those that you explicitly provided. So there 
is nothing to restrict. There are no builtins that throw AccessViolationError

3. Most of the value of MuranoPL comes not from the workflow code but from 
class declarations. In all OOP languages classes are just a convenient way to 
organize your code. There are classes that represent real-life objects and 
classes that are nothing more than data structures, DTOs, etc. In Murano, 
classes in MuranoPL are deployable entities like Heat resources: application 
components, services, etc. In the dashboard UI the user works with those 
entities. He (in the UI!) creates instances of those classes, fills in their 
property values, binds objects together (assigns one object to a property of 
another). And this is done without even a single line of MuranoPL being 
executed! That is possible because everything in MuranoPL is subject to 
declaration, and because it is just plain YAML anyone can easily extract those 
declarations from MuranoPL classes.
Now suppose it was Python instead of MuranoPL. Then you would have to parse 
*.py files to get the list of declared classes (without executing anything). 
Suppose that you managed to solve this somehow. Probably you wrote a regexp 
that finds all class declarations in text files. Are you done? No! There are no 
property (attribute) declarations in Python. You cannot infer al

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell
Can we therefore require that removal of the deleted column is not permitted if 
there is no implementation of shadow tables?

Tim

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 11 March 2014 20:57
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft 
deletion (step by step)



On Tue, Mar 11, 2014 at 12:43 PM, Tim Bell <tim.b...@cern.ch> wrote:

Typical cases are user error where someone accidentally deletes an item from a 
tenant. The image guys have a good structure where images become unavailable 
and are recoverable for a certain period of time. A regular periodic task 
cleans up deleted items after a configurable number of seconds to avoid 
constant database growth.

My preference would be to follow this model universally (an archive table is a 
nice way to do it without disturbing production).

That was the goal of the shadow table; if it doesn't support that now then it's 
a bug.


Tim


> On Tue, Mar 11, 2014, Mike Wilson <geekinu...@gmail.com> wrote:
> > Undeleting things is an important use case in my opinion. We do this
> > in our environment on a regular basis. In that light I'm not sure that
> > it would be appropriate just to log the deletion and get rid of the
> > row. I would like to see it go to an archival table where it is easily 
> > restored.
>
> I'm curious, what are you undeleting and why?
>
> JE
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/11/2014 3:11 PM, Jay Pipes wrote:

On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:


On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:

On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:

On 03/07/2014 11:16 AM, Russell Bryant wrote:

On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:

I'd Like to request A FFE for the remaining patches in the Ephemeral
RBD image support chain

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements an essential functionality that has
been missing in the RBD driver and this will become the second release
it's been attempted to be merged into.


Add me as a sponsor.


OK, great.  That's two.

We have a hard deadline of Tuesday to get these FFEs merged (regardless
of gate status).



As alt release manager, FFE approved based on Russell's approval.

The merge deadline for Tuesday is the release meeting, not end of day.
If it's not merged by the release meeting, it's dead, no exceptions.


Both commits were merged, thanks a lot to everyone who helped land
this in Icehouse! Especially to Russell and Sean for approving the FFE,
and to Daniel, Michael, and Vish for reviewing the patches!



There was a bug reported today [1] that looks like a regression in this
new code, so we need people involved in this looking at it as soon as
possible because we have a proposed revert in case we need to yank it
out [2].

[1] https://bugs.launchpad.net/nova/+bug/1291014
[2]
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



My concern is how much else where assumes nova is working with the 
glance v2 API because there was a nova blueprint [1] to make nova work 
with the glance V2 API but that never landed in Icehouse, so I'm worried 
about whack-a-mole type problems here, especially since there is no 
tempest coverage for testing multiple image location support via nova.


[1] https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Jay Pipes
On Tue, 2014-03-11 at 14:18 -0500, Matt Riedemann wrote:
> 
> On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:
> > On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:
> >> On 03/07/2014 11:16 AM, Russell Bryant wrote:
> >>> On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:
>  On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:
> > I'd Like to request A FFE for the remaining patches in the Ephemeral
> > RBD image support chain
> >
> > https://review.openstack.org/#/c/59148/
> > https://review.openstack.org/#/c/59149/
> >
> > are still open after their dependency
> > https://review.openstack.org/#/c/33409/ was merged.
> >
> > These should be low risk as:
> > 1. We have been testing with this code in place.
> > 2. It's nearly all contained within the RBD driver.
> >
> > This is needed as it implements an essential functionality that has
> > been missing in the RBD driver and this will become the second release
> > it's been attempted to be merged into.
> 
>  Add me as a sponsor.
> >>>
> >>> OK, great.  That's two.
> >>>
> >>> We have a hard deadline of Tuesday to get these FFEs merged (regardless
> >>> of gate status).
> >>>
> >>
> >> As alt release manager, FFE approved based on Russell's approval.
> >>
> >> The merge deadline for Tuesday is the release meeting, not end of day.
> >> If it's not merged by the release meeting, it's dead, no exceptions.
> >
> > Both commits were merged, thanks a lot to everyone who helped land
> > this in Icehouse! Especially to Russell and Sean for approving the FFE,
> > and to Daniel, Michael, and Vish for reviewing the patches!
> >
> 
> There was a bug reported today [1] that looks like a regression in this 
> new code, so we need people involved in this looking at it as soon as 
> possible because we have a proposed revert in case we need to yank it 
> out [2].
> 
> [1] https://bugs.launchpad.net/nova/+bug/1291014
> [2] 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z

Note that I have identified the source of the problem and am pushing a
patch shortly with unit tests.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Zane Bitter

On 11/03/14 01:05, Keith Bray wrote:

We do run close to Heat master here at
Rackspace, and we'd be happy to set up a non-voting job to notify when a
review would break Heat on our cloud if that would be beneficial.  Some of
the breaks we have seen have been things that simply weren't caught in
code review (a human intensive effort), were specific to the way we
configure Heat for large-scale cloud use, applicable to the entire Heat
project, and not necessarily service provider specific.


+1, thanks Keith, that sounds like a great idea. It's obviously not 
possible to test every configuration, but testing a "typical large 
operator" configuration would be a big plus for the project.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Minutes from 11 Mar meeting

2014-03-11 Thread Brian Curtin
We just wrapped up our weekly meeting, and the minutes and log are available.

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.txt

Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-11-19.00.log.html

We're starting to get into code, so a lot of the discussion was around
the direction of the current example
(https://review.openstack.org/#/c/79435/) and some of the library
choices. There is some research to be done and more reviews to be had,
but it's off to a good start.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joe Gordon
On Tue, Mar 11, 2014 at 12:43 PM, Tim Bell  wrote:

>
> Typical cases are user error where someone accidentally deletes an item
> from a tenant. The image guys have a good structure where images become
> unavailable and are recoverable for a certain period of time. A regular
> periodic task cleans up deleted items after a configurable number of
> seconds to avoid constant database growth.
>
> My preference would be to follow this model universally (an archive table
> is a nice way to do it without disturbing production).
>

That was the goal of the shadow table; if it doesn't support that now then
it's a bug.


>
> Tim
>
>
> > On Tue, Mar 11, 2014, Mike Wilson  wrote:
> > > Undeleting things is an important use case in my opinion. We do this
> > > in our environment on a regular basis. In that light I'm not sure that
> > it would be appropriate just to log the deletion and get rid of the
> > > row. I would like to see it go to an archival table where it is easily
> restored.
> >
> > I'm curious, what are you undeleting and why?
> >
> > JE
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell

Typical cases are user error where someone accidentally deletes an item from a 
tenant. The image guys have a good structure where images become unavailable 
and are recoverable for a certain period of time. A regular periodic task 
cleans up deleted items after a configurable number of seconds to avoid 
constant database growth.

My preference would be to follow this model universally (an archive table is a 
nice way to do it without disturbing production).

Tim
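The periodic-cleanup-into-archive pattern Tim describes can be sketched with plain SQL; table and column names below are illustrative, not nova's actual schema:

```python
import sqlite3
import time


def purge_deleted(conn, grace_seconds):
    """Move soft-deleted rows older than the grace period into a shadow
    table, then remove them from the production table. Run periodically.
    """
    cutoff = time.time() - grace_seconds
    with conn:  # one transaction: rows either move or stay, never vanish
        conn.execute(
            "INSERT INTO shadow_instances "
            "SELECT * FROM instances WHERE deleted = 1 AND deleted_at < ?",
            (cutoff,))
        conn.execute(
            "DELETE FROM instances WHERE deleted = 1 AND deleted_at < ?",
            (cutoff,))
```

Undelete then becomes an INSERT ... SELECT back out of shadow_instances, and production-table growth stays bounded by the grace period.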


> On Tue, Mar 11, 2014, Mike Wilson  wrote:
> > Undeleting things is an important use case in my opinion. We do this
> > in our environment on a regular basis. In that light I'm not sure that
> > it would be appropriate just to log the deletion and get rid of the
> > row. I would like to see it go to an archival table where it is easily 
> > restored.
> 
> I'm curious, what are you undeleting and why?
> 
> JE
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-11 Thread Matt Riedemann



On 3/10/2014 11:20 AM, Dmitry Borodaenko wrote:

On Fri, Mar 7, 2014 at 8:55 AM, Sean Dague  wrote:

On 03/07/2014 11:16 AM, Russell Bryant wrote:

On 03/07/2014 04:19 AM, Daniel P. Berrange wrote:

On Thu, Mar 06, 2014 at 12:20:21AM -0800, Andrew Woodward wrote:

I'd like to request an FFE for the remaining patches in the ephemeral
RBD image support chain:

https://review.openstack.org/#/c/59148/
https://review.openstack.org/#/c/59149/

are still open after their dependency
https://review.openstack.org/#/c/33409/ was merged.

These should be low risk as:
1. We have been testing with this code in place.
2. It's nearly all contained within the RBD driver.

This is needed as it implements essential functionality that has been
missing from the RBD driver, and this is the second release into which
we have attempted to merge it.


Add me as a sponsor.


OK, great.  That's two.

We have a hard deadline of Tuesday to get these FFEs merged (regardless
of gate status).



As alt release manager, FFE approved based on Russell's approval.

The merge deadline for Tuesday is the release meeting, not end of day.
If it's not merged by the release meeting, it's dead, no exceptions.


Both commits were merged, thanks a lot to everyone who helped land
this in Icehouse! Especially to Russell and Sean for approving the FFE,
and to Daniel, Michael, and Vish for reviewing the patches!



There was a bug reported today [1] that looks like a regression in this 
new code, so we need people involved in this looking at it as soon as 
possible because we have a proposed revert in case we need to yank it 
out [2].


[1] https://bugs.launchpad.net/nova/+bug/1291014
[2] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1291014,n,z


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Jay Pipes
On Tue, 2014-03-11 at 06:35 +, Bohai (ricky) wrote:
> > -Original Message-
> > From: Jay Pipes [mailto:jaypi...@gmail.com]
> > Sent: Tuesday, March 11, 2014 3:20 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova] a question about instance snapshot
> >
> > On Mon, 2014-03-10 at 12:13 -0400, Shawn Hartsock wrote:
> > > We have very strong interest in pursing this feature in the VMware
> > > driver as well. I would like to see the revert instance feature
> > > implemented at least.
> > >
> > > When I used to work in multi-discipline roles involving operations it
> > > would be common for us to snapshot a vm, run through an upgrade
> > > process, then revert if something did not upgrade smoothly. This
> > > ability alone can be exceedingly valuable in long-lived virtual
> > > machines.
> > >
> > > I also have some comments from parties interested in refactoring how
> > > the VMware drivers handle snapshots but I'm not certain how much that
> > > plays into this "live snapshot" discussion.
> >
> > I think the reason that there isn't much interest in doing this kind of 
> > thing is
> > because the worldview that VMs are pets is antithetical to the worldview 
> > that
> > VMs are cattle, and Nova tends to favor the latter (where DRS/DPM on
> > vSphere tends to favor the former).
> >
> > There's nothing about your scenario above of being able to "revert" an 
> > instance
> > to a particular state that isn't possible with today's Nova.
> > Snapshotting an instance, doing an upgrade of software on the instance, and
> > then restoring from the snapshot if something went wrong (reverting) is
> > already fully possible to do with the regular Nova snapshot and restore
> > operations. The only difference is that the "live-snapshot"
> > stuff would include saving the memory view of a VM in addition to its disk 
> > state.
> > And that, at least in my opinion, is only needed when you are treating VMs 
> > like
> > pets and not cattle.
> >
> 
> Hi Jay,
> 
> I read every word in your reply and respect what you said.
> 
> But I can't agree that memory snapshots are a feature for pets and not for
> cattle. I think it's a useful feature regardless of how you view the
> instance.
> 
> The world doesn't care how we view the instance; in fact, almost all of the
> mainstream hypervisors currently support memory snapshots. If it were just
> a dispensable feature that no users needed, I can't see why the hypervisors
> would provide it without exception.
> 
> In the document " OPENSTACK OPERATIONS GUIDE" section " Live snapshots" has 
> the
> below words:
> " To ensure that important services have written their contents to disk (such 
> as, databases),
> we recommend you read the documentation for those applications to determine 
> what commands
> to issue to have them sync their contents to disk. If you are unsure how to 
> do this,
>  the safest approach is to simply stop these running services normally.
> "
> This just pushes all of the responsibility for guaranteeing the consistency
> of the instance onto the end user. That's not at all convenient, and I
> doubt whether it's appropriate.

Hi Ricky,

I guess we will just have to disagree about the relative usefulness of
this kind of thing for users of the cloud (and not users of traditional
managed hosting) :) Like I said, if it does not affect the performance
of other tenants' instances, I'm fine with adding the functionality in a
way that is generic (not hypervisor-specific).

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-11 Thread Brandon Logan
As someone who has just spent the time to learn the Neutron code, this would
have been quite helpful when I started.  I'll add on to this when it is merged 
in.  Awesome job!

Thanks,
Brandon Logan

From: Collins, Sean [sean_colli...@cable.comcast.com]
Sent: Tuesday, March 11, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Developer documentation

I put together another review that starts to document the HTTP API layer
and structure.

https://review.openstack.org/#/c/79675/

I think it's pretty dense - there's a ton of terminology and concepts
about WSGI and python that I sort of skim over - it's probably not
newbie friendly just yet - comments and suggestions welcome - especially
on how to introduce WSGI and everything else without making someone's
head explode.

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-11 Thread Daniel Kuffner
Hi,
what is the error reported by docker?
Can you post the docker registry log?
What version of docker do you use?
I assume you use devstack master branch?

thank you,
Daniel

On Tue, Mar 11, 2014 at 1:19 PM, urgensherpa  wrote:
> Hello!,
>
> I can run Docker containers and push them to docker.io, but I failed to
> push to the local Glance registry and got the same error mentioned here.
> Could you please shed some more light on how you resolved it? I started
> setting up OpenStack and Docker using devstack.
> here is my localrc
> FLOATING_RANGE=192.168.140.0/27
> FIXED_RANGE=10.11.12.0/24
> FIXED_NETWORK_SIZE=256
> FLAT_INTERFACE=eth1
> ADMIN_PASSWORD=g
> MYSQL_PASSWORD=g
> RABBIT_PASSWORD=g
> SERVICE_PASSWORD=g
> SERVICE_TOKEN=g
> SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
> VIRT_DRIVER=docker
> SCREEN_LOGDIR=$DEST/logs/screen
> ---
> The machine I'm testing on is a VMware VM running Ubuntu 13.01 with two
> NICs, assuming eth0 is connected to the internet and eth1 to the local
> network.
> ---
>
>
>
>
>
> --
> View this message in context: 
> http://openstack.10931.n7.nabble.com/Openstack-Nova-Docker-Devstack-with-docker-driver-tp28361p34845.html
> Sent from the Developer mailing list archive at Nabble.com.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Johannes Erdfelt
On Tue, Mar 11, 2014, Mike Wilson  wrote:
> Undeleting things is an important use case in my opinion. We do this in our
> environment on a regular basis. In that light I'm not sure that it would be
> appropriate just to log the deletion and get rid of the row. I would like
> to see it go to an archival table where it is easily restored.

I'm curious, what are you undeleting and why?

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Joe Gordon
On Tue, Mar 11, 2014 at 10:24 AM, Mike Wilson  wrote:

> Undeleting things is an important use case in my opinion. We do this in
> our environment on a regular basis. In that light I'm not sure that it
> would be appropriate just to log the deletion and get rid of the row. I
> would like to see it go to an archival table where it is easily restored.
>
>
Although we want to *support* hard deletion, we still want to support the
current behavior as well (soft deletion, where the operator can prune
deleted rows periodically).


> -Mike
>
>
> On Mon, Mar 10, 2014 at 3:44 PM, Joshua Harlow wrote:
>
>>  Sounds like a good idea to me.
>>
>>  I've never understood why we treat the DB as a LOG (keeping deleted ==
>> 0 records around) when we should just use a LOG (or similar system) to
>> begin with instead.
>>
>>  Does anyone use the feature of switching deleted == 1 back to deleted =
>> 0? Has this worked out for u?
>>
>>  Seems like some of the feedback on
>> https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests
>> that this has been a operational pain-point for folks (Tool to delete
>> things properly suggestions and such...).
>>
>>   From: Boris Pavlovic 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, March 10, 2014 at 1:29 PM
>> To: OpenStack Development Mailing List ,
>> Victor Sergeyev 
>> Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of
>> soft deletion (step by step)
>>
>>   Hi stackers,
>>
>>  (It's proposal for Juno.)
>>
>>  Intro:
>>
>> Soft deletion means that records are not actually deleted from the DB;
>> they are just marked as "deleted". To mark a record as deleted, we store
>> the record's ID value in the table's special "deleted" column.
>>
>>  Issue 1: Indexes & Queries
>> We have to add "AND deleted == 0" to every query to get non-deleted
>> records.
>> This produces a performance issue, because we have to add one "extra"
>> column to every index.
>> It also produces extra complexity in DB migrations and in building
>> queries.
>>
>>  Issue 2: Unique constraints
>> Why do we store the ID in "deleted" rather than True/False?
>>  The reason is that we would like to be able to create real DB unique
>> constraints and avoid race conditions on "insert" operations.
>>
>>  Example: we have a table (id, name, password, deleted) and we would like
>> the column "name" to hold only unique values.
>>
>>  Approach without UC: if count(`select  where name = name`) == 0:
>> insert(...)
>> (this races, because a new record can be inserted between the check and
>> the insert)
>>
>>  Approach with UC: try: insert(...) except Duplicate: ...
>>
>>  So to add a UC we have to define it on (name, deleted), to be able to
>> do insert/delete/insert with the same name.
>>
>>  This also produces performance issues, because we have to use complex
>> unique constraints on 2 or more columns, plus extra code and complexity in
>> DB migrations.
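The insert race and the (name, deleted) unique-constraint fix that Boris describes can be demonstrated with a small SQLite sketch (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "deleted" is 0 for live rows and is set to the row's ID on soft delete,
# so UNIQUE (name, deleted) only constrains live rows.
conn.execute("""CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT,
    deleted INTEGER DEFAULT 0,
    UNIQUE (name, deleted))""")

conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
try:
    # A concurrent duplicate insert is rejected by the DB itself --
    # no racy "SELECT then INSERT" check needed.
    conn.execute("INSERT INTO users (id, name) VALUES (2, 'alice')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# Soft-delete row 1 (deleted := id); the name then becomes reusable.
conn.execute("UPDATE users SET deleted = id WHERE id = 1")
conn.execute("INSERT INTO users (id, name) VALUES (3, 'alice')")
# duplicate_rejected == True; the deleted and the new 'alice' rows coexist
```

Storing the ID (rather than True) in "deleted" is what allows an arbitrary number of soft-deleted rows to share the same name without tripping the constraint.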
>>
>>  Issue 3: Garbage collector
>>
>>  It is really hard to make a garbage collector that has good performance
>> and is generic enough to work in every case for every project.
>> Without a garbage collector, operators have to clean up records by hand
>> (risking breaking something). If they don't clean up the DB, they will hit
>> performance issues very soon.
>>
>>  To put it in a nutshell, the most important issues are:
>> 1) Extra complexity in each select query & an extra column in each index
>> 2) An extra column in each unique constraint (worse performance)
>> 3) Two extra columns in each table: (deleted, deleted_at)
>> 4) A common garbage collector is required
>>
>>
>>  To resolve all these issues we should just remove soft deletion.
>>
>>  One approach that I see is removing the "deleted" column from every
>> table step by step, probably with some code refactoring.  We actually have
>> 3 different cases:
>>
>>  1) We don't use soft deleted records:
>> 1.1) Do .delete() instead of .soft_delete()
>> 1.2) Change query to avoid adding extra "deleted == 0" to each query
>> 1.3) Drop "deleted" and "deleted_at" columns
>>
>>  2) We use soft deleted records for internal stuff "e.g. periodic tasks"
>> 2.1) Refactor code somehow: E.g. store all required data by periodic task
>> in some special table that has: (id, type, json_data) columns
>> 2.2) On delete add record to this table
>> 2.3-5) similar to 1.1, 1.2, 1.3
>>
>>  3) We use soft deleted records in API
>> 3.1) Deprecated API call if it is possible
>> 3.2) Make proxy call to ceilometer from API
>> 3.3) On .delete() store info about records in (ceilometer, or somewhere
>> else)
>> 3.4-6) similar to 1.1, 1.2, 1.3
>>
>> This is not a finished roadmap, just initial thoughts to start a
>> constructive discussion on the mailing list, so %stacker% your opinion is
>> very important!
>>
>>
>>  Best regards,
>> Boris Pavlovic
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>

Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Nadya Privalova
Ildiko,

Thanks for the question, I forgot to write about it. The results are for
MySQL; the link to the logs:
http://logs.openstack.org/36/64136/20/check/check-tempest-dsvm-full/e361520/.
But I guess the postgres results look the same, because it failed during the
last test run (https://review.openstack.org/#/c/64136/). Will check tomorrow
anyway.

Nadya


On Tue, Mar 11, 2014 at 10:01 PM, Ildikó Váncsa
wrote:

>  Hi Nadya,
>
>
>
> You mentioned multiple DB backends in your mail. Which one did you use to
> perform these tests or did you get the same/similar performance results in
> case of both?
>
>
>
> Best Regards,
>
> Ildiko
>
>
>
> *From:* Nadya Privalova [mailto:nprival...@mirantis.com]
> *Sent:* Tuesday, March 11, 2014 6:05 PM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Ceilometer]Collector's performance
>
>
>
> Hi team!
>
> Last week we were working on notification problem in ceilometer during
> tempest tests creation. Tests for notification passed successfully on
> Postgres but failed on MySQL. This made us start investigations and this
> email contains some results.
>
> As it turned out, tempest as it is is something like performance testing
> for Ceilometer. It contains 2057 tests. In almost every test, OpenStack
> resources are created and deleted: images, instances, volumes. E.g.
> during instance creation nova sends 9 notifications. And all the tests are
> running in parallel for about 40 minutes.
>
> From the ceilometer-collector logs we can find a very useful message:
>
> 2014-03-10 09:42:41.356 22845 DEBUG ceilometer.dispatcher.database
> [req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data 
> storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @ 
> 2014-03-10T09:34:31.090107:
>
> So the collector starts process_metering_data in the dispatcher only at
> 9:42, but nova sent the notification at 9:34. To see the whole picture,
> please take a look at picture [1]. It illustrates the time difference
> based on this message in the logs.
>
> Besides, I decided to take a look at the difference between when the
> RPC publisher sends a message and when the collector receives it. To create
> this plot I've parsed lines like the one below from the notifications log:
>
>
>
> 2014-03-10 09:25:49.333 22833 DEBUG ceilometer.openstack.common.rpc.amqp
> [-] UNIQUE_ID is
> 683dd3f130534b9fbb5606aef862b83d.
>
>
>
>
>
>  After that I found the corresponding id in collector log:
>
> 2014-03-10 09:25:49.352 22845 DEBUG ceilometer.openstack.common.rpc.amqp
> [-] received
> {u'_context_domain': None, u'_context_request_id': 
> u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',
>
>
>
> u'args': {u'data': [...,
>  u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385', u'counter_type': 
> u'gauge'}]}, u'_context_read_only': False, u'_unique_id': 
> u'683dd3f130534b9fbb5606aef862b83d',
>
>
>
> u'_context_user_identity': u'- - - - -', u'_context_instance_uuid': None, 
> u'_context_show_deleted': False, u'_context_tenant': None, 
> u'_context_auth_token': '',
>
>
>
> } _safe_log 
> /opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280
>
> So in the example above we see a time difference of only 20 milliseconds.
> But it grows very quickly :( To see this please take a look at picture [2].
>
> To summarize the pictures:
>
> 1. Picture 1: Axis Y: the number of seconds between when nova creates the
> notification and when the collector retrieves the message. Axis X: timestamp
>
> 2. Picture 2: Axis Y: the number of seconds between when the publisher
> publishes the message and when the collector retrieves the message. Axis X:
> timestamp
>
> These pictures are almost the same, and it makes me think that the
> collector cannot keep up with a large number of messages. What do you think
> about it? Do you agree, or do you need more evidence, e.g. the number of
> messages in rabbit or anything else?
>
> Let's discuss this in the [Ceilometer] topic first; I will create a new
> thread about testing strategy in tempest later. Because in these
> circumstances we are forced to drop the notification tests we created, and
> we cannot reduce the polling interval because it would make everything even
> worse.
>
>
>
> [1]: http://postimg.org/image/r4501bdyb/
> [2]: http://postimg.org/image/yy5a1ste1/
>
>
>
> Thanks for your attention,
>
> Nadya
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] [SWIFT] SWIFT object caching (HOT content)

2014-03-11 Thread Clay Gerrard
At the HK summit, the topic of hot content came up and seemed to break
into two parts.

1) developing a "caching" storage tier for hot content that would allow
proxies to more quickly serve small data requests with even higher rates of
concurrent access.
2) developing a mechanism to programmatically/automatically (or even
explicitly) identify "hot" content that should be cached or expired from
the caching storage tier.

Much progress has been made during this development/release cycle on
"storage policies" [1], which would seem to offer a semantic building block
for the caching storage tier - but to my knowledge no one is actively
working on the details of a caching storage policy (besides maybe a
high-replica ring backed with SSDs), or on the second (harder?) part of
identifying which data should be cached or for how long.

I glanced at those blueprints and I'm not sure they line up entirely with
the current thinking on hot content - it would probably be a good idea to
revisit the topic at the upcoming summit in ATL.  I believe proposals are
open. [2]

-Clay

1. https://blueprints.launchpad.net/swift/+spec/storage-policies
2. http://summit.openstack.org/


On Mon, Mar 10, 2014 at 10:09 PM, Anbu  wrote:

> Hi,
> I came across this blueprint
> https://blueprints.launchpad.net/swift/+spec/swift-proxy-caching and a
> related etherpad https://etherpad.openstack.org/p/swift-kt about SWIFT
> object caching.
> I would like to contribute to this and I would also like to know if
> anybody has made any progress in this area.
> If anyone is aware of a discussion that has happened or is happening on
> this, kindly point me to it.
>
> Thank you,
> Babu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Ildikó Váncsa
Hi Nadya,

You mentioned multiple DB backends in your mail. Which one did you use to 
perform these tests or did you get the same/similar performance results in case 
of both?

Best Regards,
Ildiko

From: Nadya Privalova [mailto:nprival...@mirantis.com]
Sent: Tuesday, March 11, 2014 6:05 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ceilometer]Collector's performance

Hi team!
Last week we were working on notification problem in ceilometer during tempest 
tests creation. Tests for notification passed successfully on Postgres but 
failed on MySQL. This made us start investigations and this email contains some 
results.
As it turned out, tempest as it is is something like performance testing for
Ceilometer. It contains 2057 tests. In almost every test, OpenStack resources
are created and deleted: images, instances, volumes. E.g. during instance
creation nova sends 9 notifications. And all the tests are running in parallel
for about 40 minutes.
From the ceilometer-collector logs we can find a very useful message:

2014-03-10 09:42:41.356 22845 DEBUG ceilometer.dispatcher.database
[req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data
storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @
2014-03-10T09:34:31.090107:
So the collector starts process_metering_data in the dispatcher only at 9:42,
but nova sent the notification at 9:34. To see the whole picture, please take
a look at picture [1]. It illustrates the time difference based on this
message in the logs.
Besides, I decided to take a look at the difference between when the RPC
publisher sends a message and when the collector receives it. To create this
plot I've parsed lines like the one below from the notifications log:


2014-03-10 09:25:49.333 22833 DEBUG ceilometer.openstack.common.rpc.amqp [-]
UNIQUE_ID is 683dd3f130534b9fbb5606aef862b83d.





After that I found the corresponding id in collector log:

2014-03-10 09:25:49.352 22845 DEBUG ceilometer.openstack.common.rpc.amqp [-]
received
{u'_context_domain': None, u'_context_request_id': 
u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',




u'args': {u'data': [...,
 u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385', u'counter_type': 
u'gauge'}]}, u'_context_read_only': False, u'_unique_id': 
u'683dd3f130534b9fbb5606aef862b83d',




u'_context_user_identity': u'- - - - -', u'_context_instance_uuid': None, 
u'_context_show_deleted': False, u'_context_tenant': None, 
u'_context_auth_token': '',




} _safe_log 
/opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280
So in the example above we see a time difference of only 20 milliseconds. But
it grows very quickly :( To see this please take a look at picture [2].
To summarize the pictures:
1. Picture 1: Axis Y: the number of seconds between when nova creates the
notification and when the collector retrieves the message. Axis X: timestamp
2. Picture 2: Axis Y: the number of seconds between when the publisher
publishes the message and when the collector retrieves the message. Axis X:
timestamp
These pictures are almost the same, and it makes me think that the collector
cannot keep up with a large number of messages. What do you think about it?
Do you agree, or do you need more evidence, e.g. the number of messages in
rabbit or anything else?
Let's discuss this in the [Ceilometer] topic first; I will create a new
thread about testing strategy in tempest later. Because in these
circumstances we are forced to drop the notification tests we created, and
we cannot reduce the polling interval because it would make everything even
worse.

[1]: http://postimg.org/image/r4501bdyb/
[2]: http://postimg.org/image/yy5a1ste1/

Thanks for your attention,
Nadya
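The ~20 millisecond figure above comes from subtracting the two log timestamps; a small sketch of that computation (the timestamps are the ones from the example log lines in this thread, the rest of each line is elided):

```python
from datetime import datetime

TS_FORMAT = "%Y-%m-%d %H:%M:%S.%f"

def lag_seconds(published_line, received_line):
    # The first 23 characters of each log line are the timestamp,
    # e.g. "2014-03-10 09:25:49.333".
    sent = datetime.strptime(published_line[:23], TS_FORMAT)
    got = datetime.strptime(received_line[:23], TS_FORMAT)
    return (got - sent).total_seconds()

published = "2014-03-10 09:25:49.333 22833 DEBUG ... UNIQUE_ID is ..."
received = "2014-03-10 09:25:49.352 22845 DEBUG ... received ..."
delta = lag_seconds(published, received)
# delta == 0.019, i.e. the ~20 ms difference discussed above
```

Running this over every matched UNIQUE_ID pair and plotting delta against the publish timestamp gives a plot like picture [2].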
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Clint Byrum
Excerpts from Lowery, Mathew's message of 2014-03-11 10:33:12 -0700:
> My colleague, Ranjitha Vemula, just submitted a trove-integration patch
> set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
> faced with this patch set.



> In my understanding, D.I.B. elements should be pretty dumb and the caller
> should worry about composing them so this setup seems like the best
> approach to me but it leaves ubuntu-mysql untouched. A point made by
> hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
> "things common to all MySQL images" but as of right now, it is as it was
> before: a MySQL 5.5 image. So there's that to discuss.

Yes and no. Yes you should allow users to compose their images by listing
elements. However, you can compose your element from other elements as
well automatically by using the element-deps file in the element's root.

I'd suggest copying everything that is common to the two elements
into an ubuntu-mysql-common element, and having both ubuntu-mysql and
ubuntu-mysql-5.6 list ubuntu-mysql-common in element-deps.
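A rough sketch of why this composition works: requested elements are expanded transitively through their element-deps files, so both MySQL elements pull the shared element in automatically. This is a simplified model of the resolution, not diskimage-builder's actual implementation, and the directory names are illustrative:

```python
import os
import tempfile

def resolve_elements(requested, elements_dir):
    """Expand requested elements with their transitive element-deps,
    roughly mimicking how diskimage-builder composes elements."""
    resolved, stack = set(), list(requested)
    while stack:
        element = stack.pop()
        if element in resolved:
            continue
        resolved.add(element)
        deps_file = os.path.join(elements_dir, element, "element-deps")
        if os.path.exists(deps_file):
            with open(deps_file) as f:
                stack.extend(line.strip() for line in f if line.strip())
    return resolved

# Illustrative layout: both MySQL elements depend on a shared common element.
root = tempfile.mkdtemp()
for name, deps in [("ubuntu-mysql", ["ubuntu-mysql-common"]),
                   ("ubuntu-mysql-5.6", ["ubuntu-mysql-common"]),
                   ("ubuntu-mysql-common", [])]:
    os.makedirs(os.path.join(root, name))
    if deps:
        with open(os.path.join(root, name, "element-deps"), "w") as f:
            f.write("\n".join(deps) + "\n")

elements = resolve_elements(["ubuntu-mysql-5.6"], root)
# elements == {"ubuntu-mysql-5.6", "ubuntu-mysql-common"}
```

The caller still only names the version-specific element; the common bits come along for free.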

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation

2014-03-11 Thread Collins, Sean
I put together another review that starts to document the HTTP API layer
and structure.

https://review.openstack.org/#/c/79675/

I think it's pretty dense - there's a ton of terminology and concepts
about WSGI and python that I sort of skim over - it's probably not
newbie friendly just yet - comments and suggestions welcome - especially
on how to introduce WSGI and everything else without making someone's
head explode.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Lowery, Mathew
My colleague, Ranjitha Vemula, just submitted a trove-integration patch
set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
faced with this patch set.

1) The manager

The resulting MySQL 5.6 image can be registered using mysql as the
datastore, mysql as the manager, and
trove.guestagent.datastore.mysql.manager.Manager as the class--in other
words, all the same config as MySQL 5.5 except a different image. To
repeat, no trove changes are required.

Since there is no official Ubuntu package for MySQL 5.6, the official
mysql.com Debian package was used.

Several assumptions made by the MySQL 5.5 manager (specifically paths) had
to be worked around.

The following are hard-coded in the my.cnf template and the default values
from MySQL's Debian package for these paths don't match those in the
manager.
* basedir
* pid-file

The following are referenced using absolute paths (that don't match
mysql.com's Debian package).
* /usr/sbin/mysqld

For all of the above path mismatches, a combination of symlinking and
startup script sed's were used. Regarding use of absolute paths to
binaries, the manager sometimes uses binaries from the PATH and sometimes
uses absolute paths. This should probably be consistent one way or the
other, although using the PATH would add flexibility to the manager.
Regarding my.cnf template, should there be a way (e.g. database) to inject
some fundamental path mapping between the image layout and the manager?
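The symlink-plus-sed workaround described above can be sketched roughly as follows; every path here is an illustrative stand-in under a temp directory, not one of the actual paths from the patch set:

```python
import os
import re
import tempfile

workdir = tempfile.mkdtemp()

# 1) The manager hard-codes a binary path; the mysql.com package installs
#    the binary elsewhere, so symlink the expected path onto the real one.
actual_binary = os.path.join(workdir, "mysqld-from-package")
expected_binary = os.path.join(workdir, "mysqld-expected-by-manager")
open(actual_binary, "w").close()
os.symlink(actual_binary, expected_binary)

# 2) sed-style fix-up of the startup script so the pid-file path matches
#    the one hard-coded in the my.cnf template.
init_script = os.path.join(workdir, "mysql.server")
with open(init_script, "w") as f:
    f.write("PIDFILE=/var/run/mysqld/mysqld-package-default.pid\n")
with open(init_script) as f:
    patched = re.sub(r"mysqld-package-default\.pid", "mysqld.pid", f.read())
with open(init_script, "w") as f:
    f.write(patched)
# expected_binary now resolves to the package's binary, and the init
# script references the pid-file path the manager expects
```

The same effect is usually achieved with `ln -s` and `sed -i` in the element's install.d script; the point is only that the image is bent to the manager's assumptions rather than the other way around.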


2) disk-image-builder elements for multiple versions of a single datastore

The following layout was chosen (after debating whether logic should
instead be added to the existing ubuntu-mysql element):
trove-integration/scripts/files/elements/ubuntu-mysql-5.6/install.d/10-mysq
l

Paired with Viswa Vurtharkar's patch set
(https://review.openstack.org/#/c/72804/), this element can be
kick-started using:
DATASTORE_VERSION="-5.6" PACKAGES=" " ./redstack kick-start mysql

In my understanding, D.I.B. elements should be pretty dumb and the caller
should worry about composing them so this setup seems like the best
approach to me but it leaves ubuntu-mysql untouched. A point made by
hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
"things common to all MySQL images" but as of right now, it is as it was
before: a MySQL 5.5 image. So there's that to discuss.

Feedback is appreciated.
Mat




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Clint Byrum
Excerpts from Jiří Stránský's message of 2014-03-10 06:10:46 -0700:
> On 7.3.2014 14:50, Imre Farkas wrote:
> > On 03/07/2014 10:30 AM, Jiří Stránský wrote:
> >> Hi,
> >>
> >> there's one step in cloud initialization that is performed over SSH --
> >> calling "keystone-manage pki_setup". Here's the relevant code in
> >> keystone-init [1], here's a review for moving the functionality to
> >> os-cloud-config [2].
> >>
> >> The consequence of this is that Tuskar will need passwordless ssh key to
> >> access overcloud controller. I consider this suboptimal for two reasons:
> >>
> >> * It creates another security concern.
> >>
> >> * AFAIK nova is only capable of injecting one public SSH key into
> >> authorized_keys on the deployed machine, which means we can either give
> >> it Tuskar's public key and allow Tuskar to initialize overcloud, or we
> >> can give it admin's custom public key and allow admin to ssh into
> >> overcloud, but not both. (Please correct me if i'm mistaken.) We could
> >> probably work around this issue by having Tuskar do the user key
> >> injection as part of os-cloud-config, but it's a bit clumsy.
> >>
> >>
> >> This goes outside the scope of my current knowledge, i'm hoping someone
> >> knows the answer: Could pki_setup be run by combining powers of Heat and
> >> os-config-refresh? (I presume there's some reason why we're not doing
> >> this already.) I think it would help us a good bit if we could avoid
> >> having to SSH from Tuskar to overcloud.
> >
> > Yeah, it came up a couple times on the list. The current solution is
> > because if you have an HA setup, the nodes can't decide on their own
> > which one should run pki_setup.
> > Robert described this topic and why it needs to be initialized
> > externally during a weekly meeting in last December. Check the topic
> > 'After heat stack-create init operations (lsmola)':
> > http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html
> 
> Thanks for the reply Imre. Yeah i vaguely remember that meeting :)
> 
> I guess to do HA init we'd need to pick one of the controllers and run 
> the init just there (set some parameter that would then be recognized by 
> os-refresh-config). I couldn't find whether Heat can do something like this
> on its own; probably we'd need to deploy one of the controller nodes
> with a different parameter set, which feels a bit weird.
> 
> Hmm so unless someone comes up with something groundbreaking, we'll 
> probably keep doing what we're doing. Having the ability to inject 
> multiple keys to instances [1] would help us get rid of the Tuskar vs. 
> admin key issue i mentioned in the initial e-mail. We might try asking a 
> fellow Nova developer to help us out here.
> 

I think the long term idea is to run a separate CA and use Barbican for
key distribution, as that is precisely what it is designed to do.

For now SSH'ing in one time to bootstrap a cloud seems an acceptable
risk, and the scope of that SSH key can be ratcheted down to just running
pki_setup, which may be a good idea.
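As an aside on ratcheting the key down: OpenSSH can restrict a key to a single forced command via the command= option in authorized_keys. A sketch of what that could look like; the path, options, key material and keystone-manage flags are illustrative, so check them against your release:

```
# /root/.ssh/authorized_keys on the overcloud controller: the bootstrap
# key can only ever run pki_setup, regardless of what the client asks for.
# The key material below is a placeholder.
command="keystone-manage pki_setup --keystone-user keystone --keystone-group keystone",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3... tuskar-bootstrap
```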

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack/GSoC

2014-03-11 Thread Davanum Srinivas
Hi,

Mentors:
* Please click on "My Dashboard" then "Connect with organizations" and
request a connection as a mentor (on the GSoC web site -
http://www.google-melange.com/)

Students:
* Please see the Application template you will need to fill in on the GSoC site.
  http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
* Please click on "My Dashboard" then "Connect with organizations" and
request a connection

Both Mentors and Students:
Let's meet on the #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
UTC for about 30 mins to meet and greet, since the application deadline
is next week. If this time is not convenient, please send me a note
and I'll arrange another time, say on Friday.
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09&p1=43&am=30

We need to get an idea of how many slots we need to apply for, based on
really strong applications with properly fleshed-out project ideas and
mentor support. Hopefully the meeting on IRC will nudge the students and
mentors to work towards that goal.

Thanks,
dims



Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Mike Wilson
Undeleting things is an important use case in my opinion. We do this in our
environment on a regular basis. In that light I'm not sure that it would be
appropriate just to log the deletion and get rid of the row. I would like
to see it go to an archival table where it is easily restored.

-Mike


On Mon, Mar 10, 2014 at 3:44 PM, Joshua Harlow wrote:

>  Sounds like a good idea to me.
>
>  I've never understood why we treat the DB as a LOG (keeping deleted == 0
> records around) when we should just use a LOG (or similar system) to begin
> with instead.
>
>  Does anyone use the feature of switching deleted == 1 back to deleted =
> 0? Has this worked out for u?
>
>  Seems like some of the feedback on
> https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests
> that this has been a operational pain-point for folks (Tool to delete
> things properly suggestions and such...).
>
>   From: Boris Pavlovic 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, March 10, 2014 at 1:29 PM
> To: OpenStack Development Mailing List ,
> Victor Sergeyev 
> Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft
> deletion (step by step)
>
>   Hi stackers,
>
>  (It's proposal for Juno.)
>
>  Intro:
>
>  Soft deletion means that records are not actually deleted from the DB;
> they are just marked as "deleted". To mark a record as "deleted", we put
> the record's ID value into a special "deleted" column.
>
>  Issue 1: Indexes & Queries
> We have to add "AND deleted == 0" to every query to get non-deleted
> records.
> This hurts performance, because we have to add one "extra" column to
> every index.
> It also adds extra complexity to db migrations and query building.
>
>  Issue 2: Unique constraints
> Why do we store the ID in "deleted" and not True/False?
>  The reason is that we would like to be able to create real DB unique
> constraints and avoid race conditions on the "insert" operation.
>
>  Sample: we have a table (id, name, password, deleted) and we would like
> the "name" column to contain only unique values.
>
>  Approach without UC: if count(`select  where name = name`) == 0:
> insert(...)
> (race: another record with the same name can be inserted between the
> check and the insert)
>
>  Approach with UC: try: insert(...) except Duplicate: ...
>
>  So to add a UC we have to put it on (name, deleted), to be able to do
> insert/delete/insert with the same name.
>
>  This also hurts performance, because we have to use complex unique
> constraints on 2 or more columns, plus extra code & complexity in db
> migrations.
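The insert/delete/insert pattern Boris describes can be sketched with the stdlib sqlite3 module. The schema here is made up for illustration; the OpenStack projects do this through SQLAlchemy models:

```python
import sqlite3

# "deleted" holds 0 for live rows and the row's own id once soft-deleted,
# so UNIQUE(name, deleted) rejects a second live row with the same name
# while still allowing insert/delete/insert of that name.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    deleted INTEGER NOT NULL DEFAULT 0,
    UNIQUE (name, deleted))""")

def create_user(name):
    try:
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:   # duplicate detected atomically, no race
        return False

def soft_delete(name):
    conn.execute("UPDATE users SET deleted = id "
                 "WHERE name = ? AND deleted = 0", (name,))

print(create_user("alice"))   # True
print(create_user("alice"))   # False: second live "alice" hits the UC
soft_delete("alice")
print(create_user("alice"))   # True: name reusable after soft delete
```

The try/insert/except shape is the race-free "approach with UC" above: the constraint, not a prior SELECT, decides whether the row is a duplicate.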
>
>  Issue 3: Garbage collector
>
>  It is really hard to make a garbage collector that has good
> performance and is generic enough to work in every case for every project.
> Without a garbage collector, DevOps have to clean up records by hand
> (risking breaking something). If they don't clean up the DB, they will
> very soon hit performance issues.
>
>  To put it in a nutshell, the most important issues are:
> 1) Extra complexity in each select query & an extra column in each index
> 2) An extra column in each unique constraint (worse performance)
> 3) 2 extra columns in each table: (deleted, deleted_at)
> 4) A common garbage collector is required
>
>
>  To resolve all these issues we should just remove soft deletion.
>
>  One approach that I see is to remove the "deleted" column from every
> table step by step, probably with some code refactoring.  Actually we have
> 3 different cases:
>
>  1) We don't use soft deleted records:
> 1.1) Do .delete() instead of .soft_delete()
> 1.2) Change query to avoid adding extra "deleted == 0" to each query
> 1.3) Drop "deleted" and "deleted_at" columns
>
>  2) We use soft deleted records for internal stuff "e.g. periodic tasks"
> 2.1) Refactor code somehow: E.g. store all required data by periodic task
> in some special table that has: (id, type, json_data) columns
> 2.2) On delete add record to this table
> 2.3-5) similar to 1.1, 1.2, 1.3
>
>  3) We use soft deleted records in API
> 3.1) Deprecated API call if it is possible
> 3.2) Make proxy call to ceilometer from API
> 3.3) On .delete() store info about records in (ceilometer, or somewhere
> else)
> 3.4-6) similar to 1.1, 1.2, 1.3
>
> This is not a finished roadmap, just initial thoughts to start a
> constructive discussion on the mailing list, so %stacker%, your opinion
> is very important!
>
>
>  Best regards,
> Boris Pavlovic
>
>
>
>


[openstack-dev] MySQL 5.6 disk-image-builder element

2014-03-11 Thread Lowery, Mathew
My colleague, Ranjitha Vemula, just submitted a trove-integration patch
set to add a MySQL 5.6 disk-image-builder element. Two major hurdles were
faced with this patch set.

1) The manager

The resulting MySQL 5.6 image can be registered using mysql as the
datastore, mysql as the manager, and
trove.guestagent.datastore.mysql.manager.Manager as the class--in other
words, all the same config as MySQL 5.5 except a different image. To
repeat, no trove changes are required.

Since there is no official Ubuntu package for MySQL 5.6, the official
mysql.com Debian package was used.

Several assumptions made by the MySQL 5.5 manager (specifically paths) had
to be worked around.

The following are hard-coded in the my.cnf template and the default values
from MySQL's Debian package for these paths don't match those in the
manager.
* basedir
* pid-file

The following are referenced using absolute paths (that don't match
mysql.com's Debian package).
* /usr/sbin/mysqld

For all of the above path mismatches, a combination of symlinking and
startup-script sed's was used. Regarding the use of absolute paths to
binaries, the manager sometimes uses binaries from the PATH and sometimes
uses absolute paths. This should probably be consistent one way or the
other, although using the PATH would add flexibility to the manager.
Regarding my.cnf template, should there be a way (e.g. database) to inject
some fundamental path mapping between the image layout and the manager?
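A hedged sketch of what such an install.d workaround script might look like. All paths and config values are illustrative, and the script operates on a scratch root so it is safe to execute as-is:

```shell
set -e
ROOT=$(mktemp -d)   # scratch root standing in for the image filesystem
mkdir -p "$ROOT/usr/local/mysql/bin" "$ROOT/usr/sbin" "$ROOT/etc/mysql"
touch "$ROOT/usr/local/mysql/bin/mysqld"

# The 5.5 manager references /usr/sbin/mysqld by absolute path, so
# symlink the mysql.com binary into the expected location.
ln -sf "$ROOT/usr/local/mysql/bin/mysqld" "$ROOT/usr/sbin/mysqld"

# basedir is hard-coded in the my.cnf template; sed it to whatever the
# mysql.com package actually uses (the value here is invented).
cat > "$ROOT/etc/mysql/my.cnf" <<'EOF'
[mysqld]
basedir = /usr
pid-file = /var/run/mysqld/mysqld.pid
EOF
sed -i 's|^basedir.*|basedir = /usr/local/mysql|' "$ROOT/etc/mysql/my.cnf"
grep '^basedir' "$ROOT/etc/mysql/my.cnf"   # basedir = /usr/local/mysql
```

In a real element the symlink and sed would of course target the image's own /usr and /etc, not a scratch directory.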


2) disk-image-builder elements for multiple versions of a single datastore

The following layout was chosen (after debating whether logic should
instead be added to the existing ubuntu-mysql element):
trove-integration/scripts/files/elements/ubuntu-mysql-5.6/install.d/10-mysql

Paired with Viswa Vurtharkar's patch set
(https://review.openstack.org/#/c/72804/), this element can be
kick-started using:
DATASTORE_VERSION="-5.6" PACKAGES=" " ./redstack kick-start mysql

In my understanding, D.I.B. elements should be pretty dumb and the caller
should worry about composing them so this setup seems like the best
approach to me but it leaves ubuntu-mysql untouched. A point made by
hub_cap is that now ubuntu-mysql, similar to ubuntu-guest, would imply
"things common to all MySQL images" but as of right now, it is as it was
before: a MySQL 5.5 image. So there's that to discuss.

Feedback is appreciated.
Mat




Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Clint Byrum
Excerpts from Adam Young's message of 2014-03-11 07:50:58 -0700:
> On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:
> > For what it's worth in Sahara (former Savanna) we inject the second
> > key by userdata. I.e. we add
> > echo "${public_key}" >> ${user_home}/.ssh/authorized_keys
> >
> > to the other stuff we do in userdata.
> >
> > Dmitry
> >
> > 2014-03-10 17:10 GMT+04:00 Jiří Stránský :
> >> On 7.3.2014 14:50, Imre Farkas wrote:
> >>> On 03/07/2014 10:30 AM, Jiří Stránský wrote:
>  Hi,
> 
>  there's one step in cloud initialization that is performed over SSH --
>  calling "keystone-manage pki_setup". Here's the relevant code in
>  keystone-init [1], here's a review for moving the functionality to
>  os-cloud-config [2].
> 
> You really should not be doing this.  I should never have written 
> pki_setup:  it is a developer's tool:  use a real CA and a real certificate.
> 

This alludes to your point, but also says that keystone-manage can be used:

http://docs.openstack.org/developer/keystone/configuration.html#certificates-for-pki

Seems that some time should be spent making this more clear if for some
reason pki_setup is weak for production use cases. My brief analysis
of the code says that the weakness is that the CA should generally be
kept apart from the CSR's so that a compromise of a node does not lead
to an attacker being able to generate their own keystone service. This
seems like a low probability attack vector, as compromise of the keystone
machines also means write access to the token backend, and thus no need
to generate one's own tokens (you can just steal all the existing tokens).
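For reference, the "real CA kept apart from the CSRs" flow can be sketched with plain openssl. Subjects, lifetimes and paths are illustrative, and in practice steps 1 and 3 would run on a separate, better-protected machine:

```shell
set -e
dir=$(mktemp -d)

# 1) CA side: generate the CA key and a self-signed CA certificate.
openssl genrsa -out "$dir/ca.key" 2048 2>/dev/null
openssl req -new -x509 -key "$dir/ca.key" -subj "/CN=Example Signing CA" \
    -days 3650 -out "$dir/ca.crt"

# 2) keystone node: generate a key and a CSR; the CA key never lands here.
openssl genrsa -out "$dir/signing.key" 2048 2>/dev/null
openssl req -new -key "$dir/signing.key" -subj "/CN=keystone-signing" \
    -out "$dir/signing.csr"

# 3) back on the CA side: sign the CSR, return only the certificate.
openssl x509 -req -in "$dir/signing.csr" -CA "$dir/ca.crt" \
    -CAkey "$dir/ca.key" -CAcreateserial -days 365 \
    -out "$dir/signing.crt" 2>/dev/null

openssl verify -CAfile "$dir/ca.crt" "$dir/signing.crt"   # expect ": OK"
```

Compromising the keystone node then yields only signing.key and signing.crt, not the ability to mint a new trusted CA chain.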

I'd like to see it called out in the section above though, so that
users can know what risk they're accepting when they use what looks like a
recommended tool. Another thing would be to log copious warnings when
pki_setup is run that it is not for production usage. That should be
sufficient to scare some diligent deployers into reading the docs closely
and mitigating the risk.

Anyway, shaking fist at users and devs in -dev for using tools in the
documentation probably _isn't_ going to convince anyone to spend more
time setting up PKI tokens.



[openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Nadya Privalova
Hi team!

Last week we were working on notification problem in ceilometer during
tempest tests creation. Tests for notification passed successfully on
Postgres but failed on MySQL. This made us start investigations and this
email contains some results.

As it turned out, tempest as it stands is effectively a performance test
for Ceilometer. It contains 2057 tests. Almost all tests create and delete
OpenStack resources: images, instances, volumes. E.g. during instance
creation nova sends 9 notifications. And all the tests run in parallel
for about 40 minutes.
From the ceilometer-collector logs we can find a very useful message:

2014-03-10 09:42:41.356 22845 DEBUG ceilometer.dispatcher.database
[req-16ea95c5-6454-407a-9c64-94d5ef900c9e - - - - -] metering data
storage.objects.outgoing.bytes for b7a490322e65422cb1129b13b49020e6 @
2014-03-10T09:34:31.090107:

So the collector starts process_metering_data in the dispatcher only at
9:42, but nova sent the sample at 9:34. For the whole picture, please take
a look at picture [1]. It illustrates the time difference based on this
log message.
Besides, I decided to look at the difference between when the RPC publisher
sends a message and when the collector receives it. To create this plot
I've parsed lines like the one below from the anotification log:

2014-03-10 09:25:49.333 22833 DEBUG ceilometer.openstack.common.rpc.amqp
[-] UNIQUE_ID is 683dd3f130534b9fbb5606aef862b83d.


After that I found the corresponding id in the collector log:

2014-03-10 09:25:49.352 22845 DEBUG ceilometer.openstack.common.rpc.amqp [-] received
{u'_context_domain': None, u'_context_request_id':
u'req-0a5fafe6-e097-4f90-a68a-a91da1cff22c',

u'args': {u'data': [...,
 u'message_id': u'f7ad63fc-a835-11e3-8223-bc764e205385',
u'counter_type': u'gauge'}]}, u'_context_read_only': False,
u'_unique_id': u'683dd3f130534b9fbb5606aef862b83d',

u'_context_user_identity': u'- - - - -', u'_context_instance_uuid':
None, u'_context_show_deleted': False, u'_context_tenant': None,
u'_context_auth_token': '',

} _safe_log
/opt/stack/new/ceilometer/ceilometer/openstack/common/rpc/common.py:280

So in the example above the time difference is only about 20 milliseconds,
but it grows very quickly :( To see this, please take a look at picture [2].
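The matching step described above (pairing a publisher's UNIQUE_ID line with the collector's "received" line and subtracting the timestamps) can be sketched in a few lines of stdlib Python. The regexes assume the log formats shown in the excerpts:

```python
import re
from datetime import datetime

TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")
PUB_RE = re.compile(r"UNIQUE_ID is ([0-9a-f]+)")       # publisher side
COL_RE = re.compile(r"_unique_id': u'([0-9a-f]+)'")    # collector side

def index_log(lines, uid_re):
    """Map message unique id -> timestamp for every matching log line."""
    seen = {}
    for line in lines:
        ts = TS_RE.match(line)
        uid = uid_re.search(line)
        if ts and uid:
            seen[uid.group(1)] = datetime.strptime(ts.group(1),
                                                   "%Y-%m-%d %H:%M:%S.%f")
    return seen

def publish_to_receive_deltas(publisher_lines, collector_lines):
    pub = index_log(publisher_lines, PUB_RE)
    col = index_log(collector_lines, COL_RE)
    return {uid: (col[uid] - pub[uid]).total_seconds()
            for uid in pub if uid in col}

pub_log = ["2014-03-10 09:25:49.333 22833 DEBUG ceilometer.openstack.common."
           "rpc.amqp [-] UNIQUE_ID is 683dd3f130534b9fbb5606aef862b83d."]
col_log = ["2014-03-10 09:25:49.352 22845 DEBUG ceilometer.openstack.common."
           "rpc.amqp [-] received {u'_unique_id': "
           "u'683dd3f130534b9fbb5606aef862b83d'}"]
print(publish_to_receive_deltas(pub_log, col_log))
# → {'683dd3f130534b9fbb5606aef862b83d': 0.019}
```

Plotting the resulting deltas against the publish timestamps gives the kind of growth curve shown in picture [2].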

To summarize pictures:
1. Picture 1: Axis Y: amount of seconds between nova creates notification
and the collector retrieves the message. Axis X: timestamp
2. Picture 2: Axis Y: amount of seconds between the publisher publishes the
message and the collector retrieves the message. Axis X: timestamp

These pictures are almost the same, which makes me think that the collector
cannot keep up with a large volume of messages. What do you think? Do you
agree, or do you need more evidence, e.g. the number of messages in rabbit
or anything else?
Let's discuss this under the [Ceilometer] topic first; I will create a new
thread about testing strategy in tempest later, because under these
circumstances we are forced to give up the notification tests we created,
and we cannot reduce the polling interval because that would make
everything even worse.

[1]: http://postimg.org/image/r4501bdyb/
[2]: http://postimg.org/image/yy5a1ste1/

Thanks for your attention,
Nadya


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-11 Thread Mike Wilson
Hangouts worked well at the nova mid-cycle meetup. Just make sure you have
your network situation sorted out beforehand. Bandwidth and firewalls are
what come to mind immediately.

-Mike


On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton
wrote:

> When the Designate team had their mini-summit, they had an open Google
> Hangout for remote participants.  We could even have an open conference
> bridge if you are not partial to video conferencing.  With the issue of
> inclusion solved, let's focus on a date that is good for the team!
>
> Cheers,
>
> Tom Creighton
>
>
> On Mar 10, 2014, at 4:10 PM, Edgar Magana  wrote:
>
> > Eugene,
> >
> > I have a few arguments why I believe this is not 100% inclusive:
> >   * Is the foundation involved in this process? How? What is the
> > budget? Who is responsible from the foundation side?
> >   * If somebody already made travel arrangements, it won't be
> > possible to make changes at no cost.
> >   * Staying extra days in a different city could impact anyone's
> > budget.
> >   * As an OpenStack developer, I want to understand why the summit is
> > not enough for deciding the next steps for each project. If that is the
> > case, I would prefer to make changes to the organization of the summit
> > instead of creating mini-summits all around!
> > I could continue, but I think these are good enough.
> >
> > I could agree with your point about previous summits being distracting
> > for developers; this is why this time the OpenStack foundation is trying
> > very hard to allocate specific days for the conference and specific days
> > for the summit.
> > The point on which I totally agree with you is that we SHOULD NOT have
> > sessions about work that will be done no matter what!  Those are just a
> > waste of good time that could be invested in very interesting discussions
> > about topics that are still not clear.
> > I would recommend that you express this opinion to Mark. He is the right
> guy to decide which sessions will bring interesting discussions and which
> ones will be just a declaration of intents.
> >
> > Thanks,
> >
> > Edgar
> >
> > From: Eugene Nikanorov 
> > Reply-To: OpenStack List 
> > Date: Monday, March 10, 2014 10:32 AM
> > To: OpenStack List 
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
> >
> > Hi Edgar,
> >
> > I'm neutral to the suggestion of mini summit at this point.
> > Why do you think it will exclude developers?
> > If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same
> city) that would allow anyone who joins OS Summit to save on extra
> travelling.
> > The OS Summit itself is too distracting to have really productive
> > discussions, unless you're skipping the sessions and spending the time
> > discussing.
> > For instance, design sessions are basically only good for declarations
> > of intent, not for real discussion of a complex topic at a meaningful
> > level of detail.
> >
> > What would be your suggestions to make this more inclusive?
> > I think the time and place is the key here - hence Atlanta and few days
> prior OS summit.
> >
> > Thanks,
> > Eugene.
> >
> >
> >
> > On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana 
> wrote:
> >> Team,
> >>
> >> I found that having a mini-summit with a very short notice means
> excluding
> >> a lot of developers of such an interesting topic for Neutron.
> >> The OpenStack summit is the opportunity for all developers to come
> >> together and discuss the next steps, there are many developers that CAN
> >> NOT afford another trip for a "special" summit. I am personally against
> >> that and I do support Mark's proposal of having all the conversation
> over
> >> IRC and mailing list.
> >>
> >> Please, do not start excluding people that won't be able to attend
> another
> >> face-to-face meeting besides the summit. I believe that these are the
> >> little things that make an open source community weak if we do not
> control
> >> it.
> >>
> >> Thanks,
> >>
> >> Edgar
> >>
> >>
> >> On 3/6/14 9:51 PM, "Mark McClain"  wrote:
> >>
> >> >
> >> >On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
> >> >
> >> >> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
> >> >>> +1
> >> >>>
> >> >>> I think if we can have it before the Juno summit, we can take
> >> >>> concrete, well thought-out proposals to the community at the summit.
> >> >>
> >> >> Unless something has changed starting at the Hong Kong design summit
> >> >> (which unfortunately I was not able to attend), the design summits
> have
> >> >> always been a place to gather to *discuss* and *debate* proposed
> >> >> blueprints and design specs. It has never been about a gathering to
> >> >> rubber-stamp proposals that have already been hashed out in private
> >> >> somewhere else.
> >> >
> >> >You are correct that is the goal of the design summit.  While I do
> think
> >> >it is wise to discuss the next steps with LBaaS at this point in time,
> I
> >> >am not a proponent of in person mini-design summits.  Many contributors
> >> >to LBaaS are distribute

Re: [openstack-dev] testr help

2014-03-11 Thread Doug Hellmann
On Mon, Mar 10, 2014 at 7:20 PM, Zane Bitter  wrote:

> On 10/03/14 16:04, Clark Boylan wrote:
>
>> On Mon, Mar 10, 2014 at 11:31 AM, Zane Bitter  wrote:
>>
>>> Thanks Clark for this great write-up. However, I think the solution to
>>> the
>>> problem in question is richer commands and better output formatting, not
>>> discarding information.
>>>
>>>
>>> On 07/03/14 16:30, Clark Boylan wrote:
>>>

 But running tests in parallel introduces some fun problems. Like where
 do you send logging and stdout output. If you send it to the console
 it will be interleaved and essentially useless. The solution to this
 problem (for which I am probably to blame) is to have each test
 collect the logging, stdout, and stderr associated to that test and
 attach it to that tests subunit reporting. This way you get all of the
 output associated with a single test attached to that test and don't
 have crazy interleaving that needs to be demuxed. The capturing of

>>>
>>>
>>> This is not really a problem unique to parallel test runners. Printing to
>>> the console is just not a great way to handle stdout/stderr in general
>>> because it messes up the output of the test runner, and nose does exactly
>>> the same thing as testr in collecting them - except that nose combines
>>> messages from the 3 sources and prints the output for human consumption,
>>> rather than in separate groups surrounded by lots of {{{random braces}}}.
>>>
>>>  Except nose can make them all the same file descriptor and let
>> everything multiplex together. Nose isn't demuxing arbitrary numbers
>> of file descriptors from arbitrary numbers of processes.
>>
>
> Can't each subunit process do the same thing?
>
> As a user, here's how I want it to work:
>  - Each subunit process works like nose - multiplexing the various streams
> of output together and associating it with a particular test - except that
> nothing is written to the console but instead returned to testr in subunit
> format.
>  - testr reads the subunit data and saves it to the test repository.
>  - testr prints a report to the console based on the data it just
> received/saved.
>
> How it actually seems to work:
>  - A magic pixie creates a TestCase class with a magic incantation to
> capture your stdout/stderr/logging without breaking other test runners.
>  - Or they don't! You're hosed. The magic incantation is undocumented.
>  - You change all of your TestCases to inherit from the class with the
> magic pixie dust.
>  - Each subunit process associates the various streams of output (if you
> set it up to) with a particular test, but keeps them separate so that if
> you want to figure out the order of events you have to direct them all to
> the same channel - which, in practice, means you can only use logging
> (since some of the events you are interested in probably already exist in
> the code as logs).
>  - when you want to debug a test, you have to do all the tedious logging
> setup if it doesn't already exist in the file. It probably won't, because
> flake8 would have made you delete it unless it's being used already.
>  - testr reads the subunit data and saves it to the test repository.
>  - testr prints a report to the console based on the data it just
> received/saved, though parts of it look like a raw data dump.
>
> While there may be practical reasons why it currently works like the
> latter, I would submit that there is no technical reason it could not work
> like the former. In particular, there is nothing about the concept of
> running the tests in parallel that would prevent it, just as there is
> nothing about what nose does that would prevent two copies of nose from
> running at the same time on different sets of tests.
>
>
>  this data is toggleable in the test suite using environment variables
 and is off by default so that when you are not using testr you don't
 get this behavior [0]. However we seem to have neglected log capture
 toggles.

>>>
>>>
>>> Oh wow, there is actually a way to get the stdout and stderr? Fantastic!
>>> Why
>>> on earth are these disabled?
>>>
>>>  See above, testr has to deal with multiple writers to stdout and
>> stderr, you really don't want them all going to the same place when
>> using testr (which is why stdout and stderr are captured when running
>> testr but not otherwise).
>>
>
> Ah, OK, I think I understand now. testr passes the environment variables
> automatically, so you only have to know the magic incantation at the time
> you're writing the test, not when you're running it.
>
>
>  Please, please, please don't turn off the logging too. That's the only
>>> tool
>>> left for debugging now that stdout goes into a black hole.
>>>
>>>  Logging goes into the same "black hole" today, I am suggesting that we
>> make this toggleable like we have made stdout and stderr capturing
>> toggleable. FWIW this isn't a black hole it is all captured on disk
>> and you can refer back to it at any time (
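The env-var-gated capture "incantation" discussed in this thread can be approximated with the stdlib alone. The real OpenStack base classes use testtools plus the fixtures library; OS_STDOUT_CAPTURE here mirrors the environment-variable toggle testr exports, but everything else is a simplified stand-in:

```python
import io
import os
import sys
import unittest

class BaseTestCase(unittest.TestCase):
    """Stdlib-only sketch of per-test stdout capture, toggled by an
    environment variable so the behaviour only kicks in under testr-like
    runners and plain console runs stay untouched."""

    def setUp(self):
        super().setUp()
        if os.environ.get("OS_STDOUT_CAPTURE") == "1":
            real_stdout = sys.stdout
            self.captured_stdout = io.StringIO()
            sys.stdout = self.captured_stdout        # capture per test...
            self.addCleanup(lambda: setattr(sys, "stdout", real_stdout))

class ExampleTest(BaseTestCase):
    def test_noisy(self):
        print("debug chatter")          # ...so this never hits the console
        self.assertIn("chatter", self.captured_stdout.getvalue())

os.environ["OS_STDOUT_CAPTURE"] = "1"
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())   # → True
```

In the real setup the captured stream is attached to the test's subunit record, which is what allows parallel workers to report without interleaving.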

Re: [openstack-dev] No route matched for POST

2014-03-11 Thread Vijay B
Hi Aaron!

Yes, attaching the code diffs of the client and server. The diff
0001-Frist-commit-to-add-tag-create-CLI.patch needs to be applied on
python-neutronclient's master branch, and the diff
0001-Adding-a-tag-extension.patch needs to be applied on neutron's
stable/havana branch. After restarting q-svc, please run the CLI `neutron
tag-create --name tag1 --key key1 --value val1` to test it out.  Thanks for
offering to take a look at this!

Regards,
Vijay


On Mon, Mar 10, 2014 at 10:10 PM, Aaron Rosen  wrote:

> Hi Vijay,
>
> I think you'd have to post you're code for anyone to really help you.
> Otherwise we'll just be taking shots in the dark.
>
> Best,
>
> Aaron
>
>
> On Mon, Mar 10, 2014 at 7:22 PM, Vijay B  wrote:
>
>> Hi,
>>
>> I'm trying to implement a new extension API in neutron, but am running
>> into a "No route matched for POST" on the neutron service.
>>
>> I have followed the instructions in the link
>> https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions when
>> trying to implement this extension.
>>
>> The extension doesn't depend on any plug in per se, akin to security
>> groups.
>>
>> I have defined a new file in neutron/extensions/, called Tag.py, with a
>> class Tag extending class extensions.ExtensionDescriptor, like the
>> documentation requires. Much like many of the other extensions already
>> implemented, I define my new extension as a dictionary, with fields like
>> allow_post/allow_put etc, and then pass this to the controller. I still
>> however run into a no route matched for POST error when I attempt to fire
>> my CLI to create a tag. I also edited the ml2 plugin file
>> neutron/plugins/ml2/plugin.py to add "tags" to
>> _supported_extension_aliases, but that didn't resolve the issue.
>>
>> It looks like I'm missing something quite minor, causing the new
>> extension to not get registered, but I'm not sure what.
>>
>> I can provide more info/patches if anyone would like to take a look, and
>> it would be very much appreciated if someone could help me out with this.
>>
>> Thanks!
>> Regards,
>> Vijay
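To illustrate why a silently-skipped extension surfaces as "No route matched for POST", here is a deliberately toy model of the wiring: a resource's POST route only exists if the extension's alias appears in the plugin's supported aliases. Every name below is invented for illustration; the real mechanics live in neutron.api.extensions and the Routes-based router:

```python
class ToyRouter:
    """Minimal stand-in for the API router's route table."""
    def __init__(self):
        self.routes = {}

    def add_resource(self, collection):
        # POST /tags -> create_tag, mimicking the collection/member split.
        self.routes[("POST", "/" + collection)] = "create_" + collection[:-1]

    def match(self, method, path):
        return self.routes.get((method, path),
                               "No route matched for %s" % method)

def load_extension(router, alias, collection, supported_aliases):
    if alias not in supported_aliases:
        return False        # extension silently skipped: routes never created
    router.add_resource(collection)
    return True

router = ToyRouter()
load_extension(router, "tags", "tags", supported_aliases={"security-group"})
print(router.match("POST", "/tags"))   # → No route matched for POST

load_extension(router, "tags", "tags",
               supported_aliases={"security-group", "tags"})
print(router.match("POST", "/tags"))   # → create_tag
```

So one thing worth double-checking is that the string returned by the extension's get_alias() exactly matches the entry added to _supported_extension_aliases, since any mismatch reproduces the symptom above.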
>>
>>
>>
>
>
>


0001-Adding-a-tag-extension.patch
Description: Binary data


0001-Frist-commit-to-add-tag-create-CLI.patch
Description: Binary data


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Dake

On 03/11/2014 07:35 AM, Sean Dague wrote:

On 03/11/2014 10:15 AM, Steven Dake wrote:

On 03/11/2014 04:04 AM, Sean Dague wrote:

On 03/04/2014 12:39 PM, Steven Hardy wrote:

Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, add the details to heat.conf, as already been done in devstack[2].

The changes requried for this to work have already landed in devstack, but
it was discussed to day and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure so looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
- Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.

However some have suggested there may be an openstack-wide policy which
requires peoples old config files to continue working indefinitely on
upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade, the language for this
actually got lost in the project requirements discussion in the TC, I'll
bring that back in the post graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only Monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I was
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

Sean,

I agree the Heat community hasn't done a bang-up job of getting
integrated with Tempest.  We only have 50 functional tests implemented.
The community clearly needs to do more and provide better functional
coverage with Heat.

It is inappropriate to say "Only monday did we finally start gating"
because that was a huge move in the right direction.  It took a lot of
effort and should not be so easily dismissed.  Clearly the community,
and especially the core developers, are making an effort.  Keep in mind
we have to balance upstream development work, answering user questions,
staying on top of a 5 page review queue, keeping relationships and track
of the various integrated projects which are consuming Heat as a
building block, plus all of the demands of our day jobs.

I agree it was a huge step in the right direction. It's not clear to me
why expressing that this was very recent was inappropriate.

Recent conversations have made me realize that a lot of the Heat core
team doesn't realize that Heat's participation in upstream gating is
below average, so I decided to be blunt about it. Because it was only
after being blunt about that with the Neutron team in Hong Kong did we
get any real motion on it (Neutron has seen huge gains this cycle).

All the integrated projects have the same challenges.

Upstream QA is really important. It not only protects heat from itself,
it protects it from changes in other projects.


We just don't have enough bandwidth on the core team to tackle writing
all of the tempest test cases ourselves.  We have made an effort to
distribute this work to the overall heat community via wishlist bugs in
Heat which several new folks have picked up.  I hope to see our coverage
improve over time, especially with more advanced scenario tests through
this effort.

Bandwidth is a problem for everyone. It's a matter of priorities. The
fact that realistic upstream gating is considered wishlist priority in
from a Heat perspective is something I find troubling.

Sean,

Unfortunately the root of the problem is there is no way to track in one 
place the suggested test cases for projects.  The Tempest community 
doesn't want test cases in the tempest launchpad tracker. At one point 
we were told to track the work using etherpads, which is absolutely 
ridiculous.


So we must resort to using wishlist priority.  In all cases, a user bug 
that has a negative impact on operation of Heat is higher priority than 
implementing functional testing.  I get that if we had functional 
testing, maybe that bug wouldn't have been filed in the first place.  
However, we are in a situation where we already have the bugs, and they 
already need to be addressed.


If the test cases were stored in tempest launchpad, they could be 
properly prioritized from an "upstream-testing POV".  The purpose of the 
Heat launchpad tracker is to ide

Re: [openstack-dev] [neutron][QOS]How is the BP about ml-qos going?

2014-03-11 Thread Collins, Sean
On Mon, Mar 10, 2014 at 11:13:47PM EDT, Yuzhou (C) wrote:
> Hi stackers,
> 
>   The bp about ml2-qos has been in code review for a long time.
> Why hasn't the qos implementation been merged into neutron master?

For a while, I did not believe that this API extension would ever 
get merged, so I continued to do improvements and bug fixes and push
them to the Comcast GitHub repo for Neutron, to support our deployment,
but I did not update the reviews in Gerrit.

I recently revived the reviews - and have pushed some updates. I hope to
get this merged for the J release, and have scheduled a summit session
for Atlanta to discuss.

> Can anyone who knows the history help me, or give me a hint on how to find 
> the discussion mail?

Search for posts tagged [QoS] - that should get most of them.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Abishek Subramanian (absubram)
Thanks Radomir.

Yes I've changed it to a readonly. But just wanted to double check
I didn't end up breaking something elsewhere :)

Although - how up to date is this code?

These are the actual lines of code -

# NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
# and ValidationError is raised for POST request, the initial value of
# the ip_version ChoiceField is not set in the re-displayed form
# As a result, 'IPv4' is displayed even when IPv6 is used if
# ValidationError is detected. In addition 'required=True' check complains
# when re-POST since the value of the ChoiceField is not set.
# Thus now I use HiddenInput for the ip_version ChoiceField as a work
# around.
ip_version = forms.ChoiceField(choices=[(4, 'IPv4'), (6, 'IPv6')],
                               #widget=forms.Select(
                               #    attrs={'disabled': 'disabled'}),
                               widget=forms.HiddenInput(),
                               label=_("IP Version"))




I don't think ip_version even has an attribute or an option to be set to
'disabled'.
Is this from an old version where the create side got fixed but the update
side was forgotten about?


On 3/11/14 11:30 AM, "Radomir Dopieralski"  wrote:

>On 11/03/14 15:52, Abishek Subramanian (absubram) wrote:
>> Hi,
>> 
>> I had a question regarding the
>> dashboards/project/networks/subnets/workflows.py
>> file and in particular the portion of the ip_version field.
>> 
>> It is marked as a hidden input field for the update subnet class with
>>this
>> note.
>> 
>> # NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
>> # and ValidationError is raised for POST request, the initial value
>>of
>> # the ip_version ChoiceField is not set in the re-displayed form
>> # As a result, 'IPv4' is displayed even when IPv6 is used if
>> # ValidationError is detected. In addition 'required=True' check complains
>> # when re-POST since the value of the ChoiceField is not set.
>> # Thus now I use HiddenInput for the ip_version ChoiceField as a
>>work
>> # around.
>> 
>> 
>> 
>> Can I get a little more context to this please?
>> I'm not sure I understand why it says this field always is displayed as
>> IPv4.
>> Is this still the case? Adding some debug logs I seem to see that the
>> ipversion is correctly being detected as 4 or 6 as the case may be.
>
>Some browsers (Chrome, iirc) will not submit the values from form fields
>that are disabled. That means, that when re-displaying this form
>(after an error in any other field, for example), that field's value
>will be missing, and the browser will happily display the first option,
>which is ipv4.
>
>Another solution could be perhaps using "readonly" instead of "disabled".
>
>-- 
>Radomir Dopieralski
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Private flavors

2014-03-11 Thread Baldassin, Santiago B
Hi everyone,

I'm writing to you because I noticed that horizon throws an error when a 
private flavor is created and the current project is added to the flavor 
access list. The problem is that when a non-public flavor is created, nova 
adds the current project to the flavor access list. So when horizon adds the 
current project, nova throws an exception saying that the project is already 
added to the flavor.

I created the following bug to document the problem: 
https://bugs.launchpad.net/horizon/+bug/1286297

I think that when a private flavor is created, horizon should not try to add 
the current project, since it was already added by nova. Moreover, we should 
include a message explaining that if the flavor is private, the current project 
will be added to the access list.
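
The proposed fix can be sketched in a few lines (illustrative Python, not
actual Horizon code; the function name and inputs are assumptions):

```python
# When a private flavor is created, nova already grants access to the
# creator's project, so horizon should only grant access to the *other*
# selected projects to avoid the "project already added" exception.
def projects_to_grant(selected_projects, current_project):
    return [p for p in selected_projects if p != current_project]
```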

Thoughts?


Santiago B. Baldassin
ASDC Argentina
Software Development Center
Email: santiago.b.baldas...@intel.com
Save a tree. Print only when necessary.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Difficult to understand message when using incorrect role against object in Neutron

2014-03-11 Thread Sudipta Biswas3
Hi all,

I'm hitting a scenario where a user runs an action against an object in 
neutron which they don't have the authority to perform (perhaps their role 
allows reading the object, but not updating it). The following is returned 
to the user when such an action is performed: "The resource could not be 
found".  This can be confusing to users.  For example, a basic user may not 
have the privilege to edit a network, attempts to do so, and ends up getting 
the resource-not-found message even though they have read privileges.

This is a confusing message because the object they just read is now 
reported as not existing. This is not true; the root issue is that they do 
not have authority over it. One can argue that for security reasons we 
should state that the object does not exist. However, it creates an odd 
scenario where certain roles can read an object, but not 
create/update/delete it. 

I have filed a community bug for the same: 
https://bugs.launchpad.net/neutron/+bug/1290895

I'm proposing that we change the message to "The resource could not be 
found or user's role does not have sufficient privileges to run the 
operation."

I'm sending to the mailing list to see if there are any discussion points 
against making this change.
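
To make the trade-off concrete, here is an illustrative sketch (not Neutron
code; function shape and statuses are assumptions) of the two response
strategies being weighed:

```python
# 404 with the bare message hides the object's existence from users with
# no access at all; the proposed wording also covers the read-but-not-update
# case without confirming which of the two situations applies.
NOT_FOUND_MSG = ("The resource could not be found or user's role does not "
                 "have sufficient privileges to run the operation.")

def update_response(can_read, can_update):
    if not can_read:
        return 404, "The resource could not be found"
    if not can_update:
        return 404, NOT_FOUND_MSG  # same status, clearer wording
    return 200, "updated"
```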

Thanks,
Sudipto___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?

2014-03-11 Thread Tom Creighton
When the Designate team had their mini-summit, they had an open Google Hangout 
for remote participants.  We could even have an open conference bridge if you 
are not partial to video conferencing.  With the issue of inclusion solved, 
let’s focus on a date that is good for the team!

Cheers,

Tom Creighton


On Mar 10, 2014, at 4:10 PM, Edgar Magana  wrote:

> Eugene,
> 
> I have a few arguments why I believe this is not 100% inclusive:
>   • Is the foundation involved in this process? How? What is the budget? 
> Who is responsible from the foundation side?
>   • If somebody already made travel arrangements, it won't be possible to 
> make changes at no cost.
>   • Staying extra days in a different city could impact anyone's budget.
>   • As an OpenStack developer, I want to understand why the summit is not 
> enough for deciding the next steps for each project. If that is the case, I 
> would prefer to make changes to the organization of the summit instead of 
> creating mini-summits all around!
> I could continue but I think these are good enough.
> 
> I can agree with your point about previous summits being distracting for 
> developers, which is why this time the OpenStack foundation is trying very 
> hard to allocate specific days for the conference and specific days for the 
> summit.
> The point on which I totally agree with you is that we SHOULD NOT have sessions 
> about work that will be done no matter what!  Those are just a waste of good 
> time that could be invested in very interesting discussions about topics that 
> are still not clear.
> I would recommend that you express this opinion to Mark. He is the right guy 
> to decide which sessions will bring interesting discussions and which ones 
> will be just a declaration of intents.
> 
> Thanks,
> 
> Edgar
> 
> From: Eugene Nikanorov 
> Reply-To: OpenStack List 
> Date: Monday, March 10, 2014 10:32 AM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Mini-summit Interest?
> 
> Hi Edgar,
> 
> I'm neutral to the suggestion of mini summit at this point. 
> Why do you think it will exclude developers? 
> If we keep it 1-3 days prior to OS Summit in Atlanta (e.g. in the same city) 
> that would allow anyone who joins OS Summit to save on extra travelling.
> OS Summit itself is too distracting to allow really productive discussions, 
> unless you skip the sessions and spend the time discussing.
> For instance, design sessions are basically only good for declarations of intent, 
> but not for real discussion of a complex topic at a meaningful level of detail.
> 
> What would be your suggestions to make this more inclusive? 
> I think the time and place is the key here - hence Atlanta and few days prior 
> OS summit.
> 
> Thanks,
> Eugene.
> 
> 
> 
> On Mon, Mar 10, 2014 at 10:59 PM, Edgar Magana  wrote:
>> Team,
>> 
>> I found that having a mini-summit with very short notice means excluding
>> a lot of developers of such an interesting topic for Neutron.
>> The OpenStack summit is the opportunity for all developers to come
>> together and discuss the next steps, there are many developers that CAN
>> NOT afford another trip for a "special" summit. I am personally against
>> that and I do support Mark's proposal of having all the conversation over
>> IRC and mailing list.
>> 
>> Please, do not start excluding people that won't be able to attend another
>> face-to-face meeting besides the summit. I believe that these are the
>> little things that make an open source community weak if we do not control
>> it.
>> 
>> Thanks,
>> 
>> Edgar
>> 
>> 
>> On 3/6/14 9:51 PM, "Mark McClain"  wrote:
>> 
>> >
>> >On Mar 6, 2014, at 4:31 PM, Jay Pipes  wrote:
>> >
>> >> On Thu, 2014-03-06 at 21:14 +, Youcef Laribi wrote:
>> >>> +1
>> >>>
>> >>> I think if we can have it before the Juno summit, we can take
>> >>> concrete, well thought-out proposals to the community at the summit.
>> >>
>> >> Unless something has changed starting at the Hong Kong design summit
>> >> (which unfortunately I was not able to attend), the design summits have
>> >> always been a place to gather to *discuss* and *debate* proposed
>> >> blueprints and design specs. It has never been about a gathering to
>> >> rubber-stamp proposals that have already been hashed out in private
>> >> somewhere else.
>> >
>> >You are correct that is the goal of the design summit.  While I do think
>> >it is wise to discuss the next steps with LBaaS at this point in time, I
>> >am not a proponent of in person mini-design summits.  Many contributors
>> >to LBaaS are distributed all over the globe, and scheduling a mini
>> >summit with short notice will exclude valuable contributors to the team.
>> >I'd prefer to see an open process with discussions on the mailing list
>> >and specially scheduled IRC meetings to discuss the ideas.
>> >
>> >mark
>> >
>> >
>> >___
>> >OpenStack-dev mailing list
>> >OpenStack-dev@lists.openstack.org
>> >h

Re: [openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Radomir Dopieralski
On 11/03/14 15:52, Abishek Subramanian (absubram) wrote:
> Hi,
> 
> I had a question regarding the
> dashboards/project/networks/subnets/workflows.py
> file and in particular the portion of the ip_version field.
> 
> It is marked as a hidden input field for the update subnet class with this
> note.
> 
> # NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
> # and ValidationError is raised for POST request, the initial value of
> # the ip_version ChoiceField is not set in the re-displayed form
> # As a result, 'IPv4' is displayed even when IPv6 is used if
> # ValidationError is detected. In addition 'required=True' check complains
> # when re-POST since the value of the ChoiceField is not set.
> # Thus now I use HiddenInput for the ip_version ChoiceField as a work
> # around.
> 
> 
> 
> Can I get a little more context to this please?
> I'm not sure I understand why it says this field always is displayed as
> IPv4.
> Is this still the case? Adding some debug logs I seem to see that the
> ipversion is correctly being detected as 4 or 6 as the case may be.

Some browsers (Chrome, iirc) will not submit the values from form fields
that are disabled. That means, that when re-displaying this form
(after an error in any other field, for example), that field's value
will be missing, and the browser will happily display the first option,
which is ipv4.

Another solution could be perhaps using "readonly" instead of "disabled".
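
The browser behavior behind this can be simulated in plain Python
(illustrative only; field and choice names mirror the snippet above):

```python
# Disabled fields are not submitted with the form, so on re-display the
# form falls back to the first choice -- 'IPv4' -- regardless of what the
# subnet actually uses. Hidden (or readonly) fields ARE submitted.
CHOICES = [(4, 'IPv4'), (6, 'IPv6')]

def redisplayed_label(post_data):
    value = post_data.get('ip_version')
    for ver, label in CHOICES:
        if value == str(ver):
            return label
    return CHOICES[0][1]  # missing value: the first option is shown

post_with_disabled_field = {}                  # browser omitted the field
post_with_hidden_field = {'ip_version': '6'}   # HiddenInput is submitted
```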

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-11 Thread Tim Bell

If the deleted column is removed, how would the 'undelete' functionality be 
provided? This saves operators when user accidents occur, since restoring the 
whole database to a point in time affects the other tenants also.

Tim

> Hi all,
> 
> >>> I've never understood why we treat the DB as a LOG (keeping deleted == 0 
> >>> records around) when we should just use a LOG (or
> similar system) to begin with instead.
> 
> I can't agree more with you! Storing deleted records in tables is hardly 
> usable, bad for performance (as it makes tables and indexes
> larger) and it probably covers a very limited set of use cases (if
> any) of OpenStack users.
> 

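
For illustration, the soft-delete pattern under discussion -- and why
'undelete' is cheap while the flag exists -- can be sketched with an
in-memory table (not OpenStack code; the schema is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances "
             "(id INTEGER PRIMARY KEY, name TEXT, deleted INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO instances (name) VALUES (?)",
                 [("vm1",), ("vm2",)])

# Soft delete: flip a flag; the row (and its index entries) stay around,
# which is exactly what makes tables and indexes grow over time.
conn.execute("UPDATE instances SET deleted = 1 WHERE name = 'vm1'")
live = [r[0] for r in
        conn.execute("SELECT name FROM instances WHERE deleted = 0")]

# Undelete is a one-row update -- no point-in-time restore that would
# affect other tenants. Drop the column and this has to come from somewhere
# else, e.g. an archive/shadow table or an external log.
conn.execute("UPDATE instances SET deleted = 0 WHERE name = 'vm1'")
restored = sorted(r[0] for r in
                  conn.execute("SELECT name FROM instances WHERE deleted = 0"))
```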

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Edit subnet in workflows - ip_version hidden?

2014-03-11 Thread Abishek Subramanian (absubram)
Hi,

I had a question regarding the
dashboards/project/networks/subnets/workflows.py
file and in particular the portion of the ip_version field.

It is marked as a hidden input field for the update subnet class with this
note.

# NOTE(amotoki): When 'disabled' attribute is set for the ChoiceField
# and ValidationError is raised for POST request, the initial value of
# the ip_version ChoiceField is not set in the re-displayed form
# As a result, 'IPv4' is displayed even when IPv6 is used if
# ValidationError is detected. In addition 'required=True' check
complains
# when re-POST since the value of the ChoiceField is not set.
# Thus now I use HiddenInput for the ip_version ChoiceField as a work
# around.



Can I get a little more context to this please?
I'm not sure I understand why it says this field always is displayed as
IPv4.
Is this still the case? Adding some debug logs I seem to see that the
ipversion is correctly being detected as 4 or 6 as the case may be.

Thanks!



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-11 Thread Adam Young

On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:

For what it's worth in Sahara (former Savanna) we inject the second
key by userdata. I.e. we add
echo "${public_key}" >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.

Dmitry
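
The userdata approach Dmitry describes can be sketched as building the
script that cloud-init runs on first boot (illustrative; the default home
path is a placeholder assumption, not Sahara's actual code):

```python
def build_userdata(public_key, user_home="/home/ec2-user"):
    # cloud-init executes this script on first boot; appending to
    # authorized_keys lets a second party (e.g. the admin) ssh in
    # alongside the single key nova injected.
    return "\n".join([
        "#!/bin/bash",
        'echo "%s" >> %s/.ssh/authorized_keys' % (public_key, user_home),
    ])
```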

2014-03-10 17:10 GMT+04:00 Jiří Stránský :

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling "keystone-manage pki_setup". Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].


You really should not be doing this.  I should never have written 
pki_setup:  it is a developer's tool.  Use a real CA and a real certificate.




The consequence of this is that Tuskar will need a passwordless ssh key to
access overcloud controller. I consider this suboptimal for two reasons:

* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into
authorized_keys on the deployed machine, which means we can either give
it Tuskar's public key and allow Tuskar to initialize overcloud, or we
can give it admin's custom public key and allow admin to ssh into
overcloud, but not both. (Please correct me if I'm mistaken.) We could
probably work around this issue by having Tuskar do the user key
injection as part of os-cloud-config, but it's a bit clumsy.


This goes outside the scope of my current knowledge, i'm hoping someone
knows the answer: Could pki_setup be run by combining powers of Heat and
os-config-refresh? (I presume there's some reason why we're not doing
this already.) I think it would help us a good bit if we could avoid
having to SSH from Tuskar to overcloud.


Yeah, it came up a couple times on the list. The current solution is
because if you have an HA setup, the nodes can't decide on their own
which one should run pki_setup.
Robert described this topic and why it needs to be initialized
externally during a weekly meeting in last December. Check the topic
'After heat stack-create init operations (lsmola)':

http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html


Thanks for the reply Imre. Yeah i vaguely remember that meeting :)

I guess to do HA init we'd need to pick one of the controllers and run the
init just there (set some parameter that would then be recognized by
os-refresh-config). I couldn't find if Heat can do something like this on
its own; probably we'd need to deploy one of the controller nodes with
different parameter set, which feels a bit weird.

Hmm so unless someone comes up with something groundbreaking, we'll probably
keep doing what we're doing. Having the ability to inject multiple keys to
instances [1] would help us get rid of the Tuskar vs. admin key issue i
mentioned in the initial e-mail. We might try asking a fellow Nova developer
to help us out here.


Jirka

[1] https://bugs.launchpad.net/nova/+bug/917850


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] a question about instance snapshot

2014-03-11 Thread Kashyap Chamarthy
On Fri, Mar 07, 2014 at 02:29:04AM +, Liuji (Jeremy) wrote:
> Hi, all
> 
> Current OpenStack does not seem to support snapshotting an instance with
> memory and device state.  I searched and found two related blueprints,
> listed below.  But these blueprints failed to get into the
> branch.
> 
> [1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots [2]:
> https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms
> 
> In the blueprint[1], there is a comment," We discussed this pretty
> extensively on the mailing list and in a design summit session.  The
> consensus is that this is not a feature we would like to have in nova.
> --russellb " But I can't find the discuss mail about it. I hope to
> know why we think so.  Without memory snapshots, we can't provide
> the feature for users to revert an instance to a checkpoint. 

I agree, it's a useful feature.

Speaking from a libvirt/QEMU standpoint, with recent upstream versions,
it's entirely possible to do a live memory and disk snapshot in a single
operation. I think it's a matter of someone adding wiring up the support
in Nova.

In libvirt's parlance, it's called External 'system checkpoint' snapshot
i.e: the guest's disk-state will be saved in one file, its RAM &
device-state will be saved in another new file.

  NOTE: 'system checkpoint' meaning - it captures VM state and disk
  state; VM state meaning - it captures memory and device state (but
  _not_ "disk" state).

 
I just did a quick test with libvirt's virsh:

1. Start the guest:

  $ virsh start ostack-controller
  Domain ostack-controller started

2. List its block device in use:

  $ virsh domblklist ostack-controller
  Target     Source
  ------------------------------------------------
  vda        /var/lib/libvirt/images/ostack-controller.qcow2

3. Take a LIVE external system checkpoint snapshot, specifying both disk
   file _and_ memory file:

  $ virsh snapshot-create-as --domain ostack-controller snap1 \
--diskspec vda,file=/export/vmimages/disk-snap.qcow2,snapshot=external \
--memspec file=/export/vmimages/mem-snap.qcow2,snapshot=external \
--atomic
  Domain snapshot snap1 created

  NOTE: Once the above command is issued, the original disk image of
ostack-controller will become the backing_file & the new overlay
image specified (disk-snap.qcow2) will be used to track the new
changes. Here on, libvirt will use this overlay for further
write operations (while using the original image as a read-only
backing_file).

4. List the snapshot: 

  $ virsh snapshot-list ostack-controller
  Name      Creation Time               State
  ------------------------------------------------
  snap1     2014-03-11 20:01:54 +0530   running

5. Optionally, check if the snapshot file we specified (disk-snap.qcow2)
   is indeed the new overlay


That's the versions I used to test the above:

  $ uname -r; rpm -q qemu-system-x86 libvirt
  3.13.4-200.fc20.x86_64
  qemu-system-x86-1.7.0-5.fc21.x86_64
  libvirt-1.2.3-1.fc20.x86_64


Hope that helps.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Sean Dague
On 03/11/2014 10:15 AM, Steven Dake wrote:
> On 03/11/2014 04:04 AM, Sean Dague wrote:
>> On 03/04/2014 12:39 PM, Steven Hardy wrote:
>>> Hi all,
>>>
>>> As some of you know, I've been working on the instance-users blueprint[1].
>>>
>>> This blueprint implementation requires three new items to be added to the
>>> heat.conf, or some resources (those which create keystone users) will not
>>> work:
>>>
>>> https://review.openstack.org/#/c/73978/
>>> https://review.openstack.org/#/c/76035/
>>>
>>> So on upgrade, the deployer must create a keystone domain and domain-admin
>>> user, add the details to heat.conf, as has already been done in devstack[2].
>>>
>>> The changes required for this to work have already landed in devstack, but
>>> it was discussed today and Clint suggested this may be unacceptable
>>> upgrade behavior - I'm not sure so looking for guidance/comments.
>>>
>>> My plan was/is:
>>> - Make devstack work
>>> - Talk to tripleo folks to assist in any transition (what prompted this
>>>   discussion)
>>> - Document the upgrade requirements in the Icehouse release notes so the
>>>   wider community can upgrade from Havana.
>>> - Try to give a heads-up to those maintaining downstream heat deployment
>>>   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
>>>   Icehouse.
>>>
>>> However some have suggested there may be an openstack-wide policy which
>>> requires peoples old config files to continue working indefinitely on
>>> upgrade between versions - is this right?  If so where is it documented?
>> This is basically enforced in code in grenade, the language for this
>> actually got lost in the project requirements discussion in the TC, I'll
>> bring that back in the post graduation requirements discussion we're
>> having again.
>>
>> The issue is - Heat still doesn't materially participate in grenade.
>> Heat is substantially far behind the other integrated projects in it's
>> integration with the upstream testing. Only monday did we finally start
>> gating on a real unit of work for Heat (the heat-slow jobs). If I was
>> letter grading projects right now on upstream testing I'd give Nova an
>> A, Neutron a C (still no full run, no working grenade), and Heat a D.
> Sean,
> 
> I agree the Heat community hasn't done a bang-up job of getting
> integrated with Tempest.  We only have 50 functional tests implemented. 
> The community clearly needs to do more and provide better functional
> coverage with Heat.
> 
> It is inappropriate to say "Only monday did we finally start gating"
> because that was a huge move in the right direction.  It took a lot of
> effort and should not be so easily dismissed.  Clearly the community,
> and especially the core developers, are making an effort.  Keep in mind
> we have to balance upstream development work, answering user questions,
> staying on top of a 5 page review queue, keeping relationships and track
> of the various integrated projects which are consuming Heat as a
> building block, plus all of the demands of our day jobs.

I agree it was a huge step in the right direction. It's not clear to me
why expressing that this was very recent was inappropriate.

Recent conversations have made me realize that a lot of the Heat core
team doesn't realize that Heat's participation in upstream gating is
below average, so I decided to be blunt about it. It was only
after being blunt about that with the Neutron team in Hong Kong that we
got any real motion on it (Neutron has seen huge gains this cycle).

All the integrated projects have the same challenges.

Upstream QA is really important. It not only protects heat from itself,
it protects it from changes in other projects.

> We just don't have enough bandwidth on the core team to tackle writing
> all of the tempest test cases ourselves.  We have made an effort to
> distribute this work to the overall heat community via wishlist bugs in
> Heat which several new folks have picked up.  I hope to see our coverage
> improve over time, especially with more advanced scenario tests through
> this effort.

Bandwidth is a problem for everyone. It's a matter of priorities. The
fact that realistic upstream gating is considered wishlist priority
from a Heat perspective is something I find troubling.

Putting the investment into realistic scenarios in Tempest / gate is
going to be a huge timesaving for the Heat team. It will ensure Heat is
functioning at every commit (not just releases), it will protect Heat
from chasing breaking issues in Keystone or Nova, and it will mean that
we'll expose more subtle issues that only come with being able to do
data analysis on 10k runs.

I get it's never fun to hear that a project is below average on a metric
that's important to the OpenStack community. But if we aren't honest and
open about these things they never change.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP dig

Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-11 Thread Mehdi Abaakouk
On Mon, Mar 10, 2014 at 05:15:08AM -0400, Eoghan Glynn wrote:
> 
> Folks,
> 
> Time for some new blood on the ceilometer core team.
> 
>  * Ildikó co-authored the complex query API extension with Balazs Gibizer
>and showed a lot of tenacity in pushing this extensive blueprint
>through gerrit over multiple milestones.

+1 

>  * Nadya has shown much needed love to the previously neglected HBase
>driver bringing it much closer to feature parity with the other
>supported DBs, and has also driven the introduction of ceilometer
>coverage in Tempest.

+1

Cheers,

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Kenichi Oomichi

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Tuesday, March 11, 2014 11:02 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [qa] Tempest review and development priorities 
> until release
> 
> On 03/11/2014 09:48 AM, Kenichi Oomichi wrote:
> >
> >> -Original Message-
> >> From: Sean Dague [mailto:s...@dague.net]
> >> Sent: Tuesday, March 11, 2014 10:06 PM
> >> To: OpenStack Development Mailing List
> >> Subject: [openstack-dev] [qa] Tempest review and development priorities 
> >> until release
> >>
> >> Tempest has no feature freeze in the same way as the core projects, in a
> >> lot of ways some of our most useful effort happens right now, as
> >> projects shore up features within the tempest code.
> >>
> >> That being said, the review queue remains reasonably large, so I would
> >> like to focus review attention on items that will make a material impact
> >> on the quality of the Icehouse release.
> >>
> >> That means I'd like to *stop* doing patches and reviews that are
> >> internal refactorings. We can start doing those again in Juno. I know
> >> there were some client refactorings, and hacking cleanups in flight.
> >> Those should wait until Icehouse is released.
> >>
> >> From my perspective the top priorities for things to be reviewed /
> >> developed are:
> >>  * Heat related tests (especially on the heat slow job) as we're now
> >> gating with that, but still only have 1 real test
> >>  * Changes to get us Neutron full support (I actually think the tempest
> >> side is complete, but just in case)
> >>  * Unit tests of Tempest function (so we know that we are doing the
> >> things we think)
> >>  * Bugs in Tempest itself
> >>  * The Keystone multi auth patches (so was can actually test v3)
> >>  * Any additional positive API / scenario tests for *integrated*
> >> projects (incubated projects are currently best effort).
> >
> > I got it, and I'd like to clarify whether one task is acceptable or not.
> >
> > In most test cases, Tempest does not check API response body(API 
> > attributes).
> > Now I am working for improving API attribute test coverage for Nova API[1].
> > I think the task is useful for the backward compatibility and finding some
> > latent bugs (API sample files etc). In addition, this improvement is 
> > necessary
> > to prove the concept of Nova "v2.1" API because the we need to check v2.1 
> > API
> > does not cause backward incompatibility issues.
> >
> > Can we continue this improvement?
> > Of course, I will do review for the above areas(Heat, etc) also.
> 
> Yes, absolutely.
> 
> I would count the API response checks in the Additional posititive API /
> scenario tests for integrated projects. I should have been clear that it
> also means enhancements of those tests that ensures they are properly
> checking things.
> 
> I think these are the kind of changes that help ensure a solid Icehouse
> release.

I have got courage by your words.
Thank you, Sean!


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Steven Dake

On 03/11/2014 04:04 AM, Sean Dague wrote:

On 03/04/2014 12:39 PM, Steven Hardy wrote:

Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, add the details to heat.conf, as already been done in devstack[2].

The changes requried for this to work have already landed in devstack, but
it was discussed to day and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure so looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
   discussion)
- Document the upgrade requirements in the Icehouse release notes so the
   wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
   Icehouse.

However some have suggested there may be an openstack-wide policy which
requires peoples old config files to continue working indefinitely on
upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade, the language for this
actually got lost in the project requirements discussion in the TC, I'll
bring that back in the post graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I was
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

Sean,

I agree the Heat community hasn't done a bang-up job of getting 
integrated with Tempest.  We only have 50 functional tests implemented.  
The community clearly needs to do more and provide better functional 
coverage with Heat.


It is inappropriate to say "Only monday did we finally start gating" 
because that was a huge move in the right direction.  It took a lot of 
effort and should not be so easily dismissed.  Clearly the community, 
and especially the core developers, are making an effort.  Keep in mind 
we have to balance upstream development work, answering user questions, 
staying on top of a 5 page review queue, keeping up relationships with and 
track of the various integrated projects which are consuming Heat as a 
building block, plus all of the demands of our day jobs.


We just don't have enough bandwidth on the core team to tackle writing 
all of the tempest test cases ourselves.  We have made an effort to 
distribute this work to the overall heat community via wishlist bugs in 
Heat which several new folks have picked up.  I hope to see our coverage 
improve over time, especially with more advanced scenario tests through 
this effort.


Regards
-steve


So in short. Heat did the wrong thing. You should be able to use your
configs from the last release. This is what all the mature projects in
OpenStack do. In the event that you *have* to make a change like that it
requires an UpgradeImpact tag in the commit. And those should be limited
really aggressively. This is the whole point of the deprecation cycle.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] savanna/sahara graduation review [savanna]

2014-03-11 Thread Sergey Lukjanov
Hey folks,

please, note that today will be our project graduation review on TC
meeting - https://wiki.openstack.org/wiki/Governance/TechnicalCommittee#Meeting

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Sean Dague
On 03/11/2014 09:48 AM, Kenichi Oomichi wrote:
> 
> Hi Sean,
> 
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: Tuesday, March 11, 2014 10:06 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [qa] Tempest review and development priorities 
>> until release
>>
>> Tempest has no feature freeze in the same way as the core projects, in a
>> lot of ways some of our most useful effort happens right now, as
>> projects shore up features within the tempest code.
>>
>> That being said, the review queue remains reasonably large, so I would
>> like to focus review attention on items that will make a material impact
>> on the quality of the Icehouse release.
>>
>> That means I'd like to *stop* doing patches and reviews that are
>> internal refactorings. We can start doing those again in Juno. I know
>> there were some client refactorings, and hacking cleanups in flight.
>> Those should wait until Icehouse is released.
>>
>> From my perspective the top priorities for things to be reviewed /
>> developed are:
>>  * Heat related tests (especially on the heat slow job) as we're now
>> gating with that, but still only have 1 real test
>>  * Changes to get us Neutron full support (I actually think the tempest
>> side is complete, but just in case)
>>  * Unit tests of Tempest function (so we know that we are doing the
>> things we think)
>>  * Bugs in Tempest itself
>>  * The Keystone multi auth patches (so was can actually test v3)
>>  * Any additional positive API / scenario tests for *integrated*
>> projects (incubated projects are currently best effort).
> 
> I got it, and I'd like to clarify whether one task is acceptable or not.
> 
> In most test cases, Tempest does not check API response body(API attributes).
> Now I am working for improving API attribute test coverage for Nova API[1].
> I think the task is useful for the backward compatibility and finding some
> latent bags(API sample files etc). In addition, this improvement is necessary
> to prove the concept of Nova "v2.1" API because the we need to check v2.1 API
> does not cause backward incompatibility issues.
> 
> Can we continue this improvement?
> Of course, I will do review for the above areas(Heat, etc) also.

Yes, absolutely.

I would count the API response checks under the additional positive API /
scenario tests for integrated projects. I should have been clear that it
also means enhancements of those tests to ensure they are properly
checking things.

I think these are the kind of changes that help ensure a solid Icehouse
release.

Thanks Kenichi!

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] test_launch_instance_post questions

2014-03-11 Thread Abishek Subramanian (absubram)
Hi,

Can I please get some help with this UT?
I am having a little issue with the nics argument -
nics = [{"net-id": netid, "v4-fixed-ip": ""}
        for netid in netids]


I wish to add a second network to this argument, but somehow
the UT only picks up the first network.

Any guidance will be appreciated.
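For what it's worth, the comprehension in the code I quoted below builds one
dict per network id, so if only one nic shows up, the netids list the test
passes in is probably the place to look. A standalone sketch (toy ids, not
the actual Horizon test data):

```python
# Standalone reproduction of the nics-building logic (toy ids, not the
# Horizon test data).
netids = ["net-id-1", "net-id-2"]

nics = [{"net-id": netid, "v4-fixed-ip": ""} for netid in netids]

print(len(nics))  # 2: one dict per network id
```

If the UT still only yields one entry, the second network is most likely
being filtered out before netids is built.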


Thanks!


On 3/6/14 12:06 PM, "Abishek Subramanian (absubram)" 
wrote:

>Hi,
>
>I had a couple of questions regarding this UT and the
>JS template that it ends up using.
>Hopefully someone can point me in the right direction
>and help me understand this a little better.
>
>I see that for this particular UT, we have a total of 3 networks
>in the network_list (the second network is supposed to be disabled
>though).
>For the nic argument needed by the nova/server_create API though we
>only pass the first network's net_id.
>
>I am trying to modify this unit test so as to be able to accept 2
>network_ids 
>instead of just one. This should be possible yes?
>We can have two nics in an instance of just one?
>However, I always see that when the test runs,
>in code it only finds the first network from the list.
>
>This line of code -
>
> if netids:
>nics = [{"net-id": netid, "v4-fixed-ip": ""}
>for netid in netids]
>
>There's always just one net-id in this dictionary even though I've added
>a new network in the neutron test_data. Can someone please help me
>figure out what I might be doing wrong?
>
>How does the JS code in horizon.instances.js file work?
>I assume this is where the network list is obtained from?
>How does this translate in the unit test environment?
>
>
>
>Thanks!
>Abishek
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] IRC meeting today?

2014-03-11 Thread Collins, Sean
It starts at 10AM EST, due to daylight savings. See you in a couple minutes

Sean M. Collins

From: Shixiong Shang [sparkofwisdom.cl...@gmail.com]
Sent: Tuesday, March 11, 2014 9:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][IPv6] IRC meeting today?

Do we have IRC meeting today? Didn’t see anybody in the chat room…..:(

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Kenichi Oomichi

Hi Sean,

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Tuesday, March 11, 2014 10:06 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [qa] Tempest review and development priorities until 
> release
> 
> Tempest has no feature freeze in the same way as the core projects, in a
> lot of ways some of our most useful effort happens right now, as
> projects shore up features within the tempest code.
> 
> That being said, the review queue remains reasonably large, so I would
> like to focus review attention on items that will make a material impact
> on the quality of the Icehouse release.
> 
> That means I'd like to *stop* doing patches and reviews that are
> internal refactorings. We can start doing those again in Juno. I know
> there were some client refactorings, and hacking cleanups in flight.
> Those should wait until Icehouse is released.
> 
> From my perspective the top priorities for things to be reviewed /
> developed are:
>  * Heat related tests (especially on the heat slow job) as we're now
> gating with that, but still only have 1 real test
>  * Changes to get us Neutron full support (I actually think the tempest
> side is complete, but just in case)
>  * Unit tests of Tempest function (so we know that we are doing the
> things we think)
>  * Bugs in Tempest itself
>  * The Keystone multi auth patches (so was can actually test v3)
>  * Any additional positive API / scenario tests for *integrated*
> projects (incubated projects are currently best effort).

I got it, and I'd like to clarify whether one task is acceptable or not.

In most test cases, Tempest does not check the API response body (API attributes).
Now I am working on improving API attribute test coverage for the Nova API[1].
I think the task is useful for backward compatibility and for finding some
latent bugs (API sample files, etc.). In addition, this improvement is necessary
to prove the concept of the Nova "v2.1" API, because we need to check that the
v2.1 API does not cause backward incompatibility issues.

Can we continue this improvement?
Of course, I will also review the above areas (Heat, etc.).
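A minimal sketch of what such an attribute check could look like (hypothetical
response payload and attribute names, not the actual Tempest code):

```python
import unittest


class ServerAttributeTest(unittest.TestCase):
    """Verify an API response body carries the attributes we rely on."""

    def _get_server(self):
        # Stand-in for a real GET /servers/{id} call; hypothetical payload.
        return {"id": "abc", "name": "vm1", "status": "ACTIVE",
                "OS-EXT-STS:task_state": None}

    def test_expected_attributes_present(self):
        body = self._get_server()
        # Backward compatibility: these keys must keep existing across
        # API versions (v2 vs v2.1), whatever their values are.
        for attr in ("id", "name", "status", "OS-EXT-STS:task_state"):
            self.assertIn(attr, body)
```

The point is that the assertions pin the response *shape*, so an accidental
attribute rename in a new API version fails loudly.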


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/tempest/+spec/nova-api-attribute-test

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-11 Thread Sergey Lukjanov
RE blueprints assignments - it looks like all bps have initial assignments.

On the renaming the main service code side Alex I. is contact person,
I'll help him with some setup stuff.

Additionally, you can find a bunch of my patches for external renaming
related changes -
https://review.openstack.org/#/q/status:open+topic:savanna-sahara+-savanna,n,z
and internal changes -
https://review.openstack.org/#/q/status:open+topic:savanna-sahara+savanna,n,z
(only open changes).

Thanks.

On Tue, Mar 11, 2014 at 5:33 PM, Sergey Lukjanov  wrote:
> All launchpad projects has been renamed keeping full path redirects.
> It means that you can still reference to the bugs and blueprints under
> the savanna launchpad project and it'll be redirected to the new
> sahara project.
>
> All savanna repositories will be renamed to sahara ones on Wednesday,
> March 12 between 12:00 to 12:30 UTC [0]
>
>
> [0] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12&am=30
>
> On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov  
> wrote:
>> Matt,
>>
>> thanks for moving etherpad notes to the blueprints. I've added some
>> notes and details to them and add some assignments to the blueprints
>> where we have no choice.
>>
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
>> Sergey Kolekonov
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
>> - Dmitry Mescheryakov
>>
>> Thanks.
>>
>> On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee  wrote:
>>> On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:

 Hey folks,

 we're now starting working on the project renaming. You can find
 details in the etherpad [0]. We'll move all work items to the
 blueprints, one blueprint per sub-project to well track progress and
 work items. The general blueprint is [1], it'll depend on all other
 blueprints and it's currently consists of general renaming tasks.

 Current plan is to assign each subproject blueprint to volunteer.
 Please, contact me and Matthew Farrellee if you'd like to take the
 renaming bp.

 Please, share your ideas/suggestions in ML or etherpad.

 [0] https://etherpad.openstack.org/p/savanna-renaming-process
 [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming

 Thanks.

 P.S. Please, prepend email topics with [sahara] and append [savanna]
 to the end of topic (like in this email) for the transition period.
>>>
>>>
>>> savann^wsahara team,
>>>
>>> i've separated out most of the activities that can happen in parallel,
>>> aligned them on repository boundaries, and filed blueprints for the efforts.
>>> now we need community members to take ownership (be the assignee) of the
>>> blueprints. taking ownership means you'll be responsible for the renaming in
>>> the repository, coordinating with other owners and getting feedback from the
>>> community about important questions (such as compatibility requirements).
>>>
>>> to take ownership, just go to the blueprint and assign it to yourself. if
>>> there is already an assignee, reach out to that person and offer them
>>> assistance.
>>>
>>> blueprints up for grabs -
>>>
>>> what: savanna^wsahara ci
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
>>> comments: this should be taken by someone already familiar with the ci. i'd
>>> nominate skolekonov
>>>
>>> what: saraha puppet modules
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
>>> comments: this should be taken by someone who can validate the changes. i'd
>>> nominate sbadia or dizz
>>>
>>> what: sahara extras
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
>>> comments: this could be taken by anyone
>>>
>>> what: sahara dib image elements
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
>>> comments: this could be taken by anyone
>>>
>>> what: sahara python client
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
>>> comments: this should be done by someone w/ experience in the client. i'd
>>> nominate tmckay
>>>
>>> what: sahara horizon plugin
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
>>> comments: this will require experience and care. i'd nominate croberts
>>>
>>> what: sahara guestagent
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
>>> comments: i'd nominate dmitrymex
>>>
>>> what: sahara section of openstack wiki
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-wiki
>>> comments: this could be taken by anyone
>>>
>>> what: sahara service
>>> blueprint:
>>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-service
>>> comments: this requires experience, care and is a lot of work. i'd nominate
alazarev & aignatov to tag team it

Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplication sg_name

2014-03-11 Thread mar...@redhat.com
On 11/03/14 10:20, Xurong Yang wrote:
> It's allowed to create duplicate sgs with the same name,
> so an exception happens when creating an instance with a duplicate sg name.

Hi Xurong - fyi there is a review open which raises this particular
point at https://review.openstack.org/#/c/79270/2 (together with
associated bug).

imo we shouldn't be using 'name' to distinguish security groups - that's
what the UUID is for.
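A minimal sketch of the difference (toy data, not the actual nova/neutron
code): lookup by name can be ambiguous, lookup by id cannot:

```python
# Toy security-group listing; duplicate names are legal in neutron.
groups = [
    {"id": "uuid-1", "name": "web"},
    {"id": "uuid-2", "name": "web"},
]

def find_by_name(name):
    """Name-based lookup: must fail when the name is not unique."""
    matches = [g for g in groups if g["name"] == name]
    if len(matches) > 1:
        raise ValueError("Multiple security groups found matching %r. "
                         "Use an ID to be more specific." % name)
    return matches[0] if matches else None

def find_by_id(sg_id):
    """Id-based lookup: ids are unique, so the first hit is the only hit."""
    return next((g for g in groups if g["id"] == sg_id), None)
```

With duplicate names in the tenant, `find_by_name("web")` can only raise,
while `find_by_id` always resolves.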

thanks, marios

> code following:
> 
> security_groups = kwargs.get('security_groups', [])
> security_group_ids = []
> 
> # TODO(arosen) Should optimize more to do direct query for security
> # group if len(security_groups) == 1
> if len(security_groups):
> search_opts = {'tenant_id': instance['project_id']}
> user_security_groups = neutron.list_security_groups(
> **search_opts).get('security_groups')
> 
> for security_group in security_groups:
> name_match = None
> uuid_match = None
> for user_security_group in user_security_groups:
> if user_security_group['name'] == security_group:
> if name_match:---exception happened here
> raise exception.NoUniqueMatch(
> _("Multiple security groups found matching"
>   " '%s'. Use an ID to be more specific.") %
>security_group)
> 
> name_match = user_security_group['id']
>   
> 
> so it may be improper to create an instance using the sg name parameter.
> Any response would be appreciated.
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] Q&A meeting today at 14:00 EST / 18:00 UTC

2014-03-11 Thread Jeremy Stanley
On 2014-03-11 04:29:04 + (+), trinath.soman...@freescale.com wrote:
> +1
> 
> Attending

Note that the announcement was for yesterday's meeting. Nobody showed up
with questions, so it ended very early.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]

2014-03-11 Thread Sergey Lukjanov
All launchpad projects have been renamed, keeping full path redirects.
It means that you can still reference to the bugs and blueprints under
the savanna launchpad project and it'll be redirected to the new
sahara project.

All savanna repositories will be renamed to sahara ones on Wednesday,
March 12 between 12:00 to 12:30 UTC [0]


[0] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140312T12&am=30

On Sun, Mar 9, 2014 at 3:08 PM, Sergey Lukjanov  wrote:
> Matt,
>
> thanks for moving etherpad notes to the blueprints. I've added some
> notes and details to them and add some assignments to the blueprints
> where we have no choice.
>
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci -
> Sergey Kolekonov
> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
> - Dmitry Mescheryakov
>
> Thanks.
>
> On Sat, Mar 8, 2014 at 5:08 PM, Matthew Farrellee  wrote:
>> On 03/07/2014 04:50 PM, Sergey Lukjanov wrote:
>>>
>>> Hey folks,
>>>
>>> we're now starting working on the project renaming. You can find
>>> details in the etherpad [0]. We'll move all work items to the
>>> blueprints, one blueprint per sub-project to well track progress and
>>> work items. The general blueprint is [1], it'll depend on all other
>>> blueprints and it currently consists of general renaming tasks.
>>>
>>> Current plan is to assign each subproject blueprint to volunteer.
>>> Please, contact me and Matthew Farrellee if you'd like to take the
>>> renaming bp.
>>>
>>> Please, share your ideas/suggestions in ML or etherpad.
>>>
>>> [0] https://etherpad.openstack.org/p/savanna-renaming-process
>>> [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming
>>>
>>> Thanks.
>>>
>>> P.S. Please, prepend email topics with [sahara] and append [savanna]
>>> to the end of topic (like in this email) for the transition period.
>>
>>
>> savann^wsahara team,
>>
>> i've separated out most of the activities that can happen in parallel,
>> aligned them on repository boundaries, and filed blueprints for the efforts.
>> now we need community members to take ownership (be the assignee) of the
>> blueprints. taking ownership means you'll be responsible for the renaming in
>> the repository, coordinating with other owners and getting feedback from the
>> community about important questions (such as compatibility requirements).
>>
>> to take ownership, just go to the blueprint and assign it to yourself. if
>> there is already an assignee, reach out to that person and offer them
>> assistance.
>>
>> blueprints up for grabs -
>>
>> what: savanna^wsahara ci
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
>> comments: this should be taken by someone already familiar with the ci. i'd
>> nominate skolekonov
>>
>> what: saraha puppet modules
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
>> comments: this should be taken by someone who can validate the changes. i'd
>> nominate sbadia or dizz
>>
>> what: sahara extras
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
>> comments: this could be taken by anyone
>>
>> what: sahara dib image elements
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
>> comments: this could be taken by anyone
>>
>> what: sahara python client
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
>> comments: this should be done by someone w/ experience in the client. i'd
>> nominate tmckay
>>
>> what: sahara horizon plugin
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
>> comments: this will require experience and care. i'd nominate croberts
>>
>> what: sahara guestagent
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
>> comments: i'd nominate dmitrymex
>>
>> what: sahara section of openstack wiki
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-wiki
>> comments: this could be taken by anyone
>>
>> what: sahara service
>> blueprint:
>> https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-service
>> comments: this requires experience, care and is a lot of work. i'd nominate
>> alazarev & aignatov to tag team it
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] IRC meeting today?

2014-03-11 Thread Shixiong Shang
Do we have IRC meeting today? Didn’t see anybody in the chat room…..:(

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Tempest review and development priorities until release

2014-03-11 Thread Sean Dague
Tempest has no feature freeze in the same way as the core projects; in a
lot of ways, some of our most useful effort happens right now, as
projects shore up features within the tempest code.

That being said, the review queue remains reasonably large, so I would
like to focus review attention on items that will make a material impact
on the quality of the Icehouse release.

That means I'd like to *stop* doing patches and reviews that are
internal refactorings. We can start doing those again in Juno. I know
there were some client refactorings, and hacking cleanups in flight.
Those should wait until Icehouse is released.

From my perspective the top priorities for things to be reviewed /
developed are:
 * Heat related tests (especially on the heat slow job) as we're now
gating with that, but still only have 1 real test
 * Changes to get us Neutron full support (I actually think the tempest
side is complete, but just in case)
 * Unit tests of Tempest function (so we know that we are doing the
things we think)
 * Bugs in Tempest itself
 * The Keystone multi auth patches (so we can actually test v3)
 * Any additional positive API / scenario tests for *integrated*
projects (incubated projects are currently best effort).

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder - Weekly Project Meeting today at 21:00 UTC

2014-03-11 Thread Sean Dague
For today's weekly project meeting I'll be standing in for Thierry.
Agenda is here
https://wiki.openstack.org/wiki/Meetings/ProjectMeeting#Weekly_Project_meeting

I expect the bulk of the meeting will be checking in on where we stand
on FFEs that were granted, as those were all supposed to be in by the
meeting today.

For folks in the US (in all the places which do DST), remember, the
meeting time is in UTC, so now an hour later for us all.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-11 Thread Gordon Chung
i've created a bp to discuss whether moving the alarming into the pipeline is 
feasible and can cover all the use cases for alarms. if we can find a 
solution that is a bit leaner than what we have and still provides the same 
functionality coverage, i don't see why we shouldn't try it. it very well may 
be that what we have is the best solution.

https://blueprints.launchpad.net/ceilometer/+spec/alarm-pipelines

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-03-11 Thread urgensherpa
Hello!

I can run docker containers and push them to docker.io, but I failed to push
to the local glance, and I get the same error mentioned here.
Could you please shed some more light on how you resolved it? I started
setting up openstack and docker using devstack.
Here is my localrc:
FLOATING_RANGE=192.168.140.0/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth1
ADMIN_PASSWORD=g
MYSQL_PASSWORD=g
RABBIT_PASSWORD=g
SERVICE_PASSWORD=g
SERVICE_TOKEN=g
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
VIRT_DRIVER=docker
SCREEN_LOGDIR=$DEST/logs/screen
---
The machine I'm testing on is VMware Ubuntu 13.01 with two nics, assuming
eth0 is connected to the internet and eth1 to the local network.
---





--
View this message in context: 
http://openstack.10931.n7.nabble.com/Openstack-Nova-Docker-Devstack-with-docker-driver-tp28361p34845.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-11 Thread Sean Dague
On 03/11/2014 07:48 AM, Steven Hardy wrote:
> On Tue, Mar 11, 2014 at 07:04:32AM -0400, Sean Dague wrote:
>> On 03/04/2014 12:39 PM, Steven Hardy wrote:
>>> Hi all,
>>>
>>> As some of you know, I've been working on the instance-users blueprint[1].
>>>
>>> This blueprint implementation requires three new items to be added to the
>>> heat.conf, or some resources (those which create keystone users) will not
>>> work:
>>>
>>> https://review.openstack.org/#/c/73978/
>>> https://review.openstack.org/#/c/76035/
>>>
>>> So on upgrade, the deployer must create a keystone domain and domain-admin
>>> user, add the details to heat.conf, as already been done in devstack[2].
>>>
>>> The changes requried for this to work have already landed in devstack, but
>>> it was discussed to day and Clint suggested this may be unacceptable
>>> upgrade behavior - I'm not sure so looking for guidance/comments.
>>>
>>> My plan was/is:
>>> - Make devstack work
>>> - Talk to tripleo folks to assist in any transition (what prompted this
>>>   discussion)
>>> - Document the upgrade requirements in the Icehouse release notes so the
>>>   wider community can upgrade from Havana.
>>> - Try to give a heads-up to those maintaining downstream heat deployment
>>>   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
>>>   Icehouse.
>>>
>>> However some have suggested there may be an openstack-wide policy which
>>> requires peoples old config files to continue working indefinitely on
>>> upgrade between versions - is this right?  If so where is it documented?
>>
>> This is basically enforced in code in grenade, the language for this
>> actually got lost in the project requirements discussion in the TC, I'll
>> bring that back in the post graduation requirements discussion we're
>> having again.
>>
>> The issue is - Heat still doesn't materially participate in grenade.
>> Heat is substantially far behind the other integrated projects in its
>> integration with the upstream testing. Only Monday did we finally start
>> gating on a real unit of work for Heat (the heat-slow jobs). If I was
>> letter grading projects right now on upstream testing I'd give Nova an
>> A, Neutron a C (still no full run, no working grenade), and Heat a D.
> 
> Thanks for this, I know we have a lot more work to do in tempest, but
> evidently grenade integration is something we should prioritize as soon as
> possible.  Any volunteers out there? :)
> 
>> So in short. Heat did the wrong thing. You should be able to use your
>> configs from the last release. This is what all the mature projects in
>> OpenStack do. In the event that you *have* to make a change like that it
>> requires an UpgradeImpact tag in the commit. And those should be limited
>> really aggressively. This is the whole point of the deprecation cycle.
> 
> Ok, got that message loud and clear now, thanks ;)
> 
> Do you have a link to docs which describe the deprecation cycle and
> openstack-wide policy for introducing backwards incompatible changes?
> 
> The thing I'm still not that clear on, is if we want to eventually require
> a specific config option, and we can't just have an upgrade requirement to
> add it as I was expecting - is it enough to just output a warning for one
> release cycle then require it?

If it has a sane default, so it will just work for people, you can add it.
If not, there has to be *BIG RED FLAGS*. UpgradeImpact was designed for
that as an easy way for CD folks to know how bad a weekend they were
going to have.

You could also deprecate whatever the old method was, make the new
options optional, cross a cycle boundary, then move to the new method.
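
As a sketch of that pattern (hypothetical option names, not Heat's actual
config handling): honor the old option for a cycle, warn, and prefer the
new one when present:

```python
import configparser
import warnings

def get_auth_domain(conf_text):
    """Read the (hypothetical) new 'stack_domain' option, falling back to
    the deprecated 'auth_domain' with a warning during the transition cycle."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    section = cfg["DEFAULT"]
    if "stack_domain" in section:
        return section["stack_domain"]
    if "auth_domain" in section:
        warnings.warn("auth_domain is deprecated; use stack_domain instead",
                      DeprecationWarning)
        return section["auth_domain"]
    return "default"  # sane default so old configs keep working

old_conf = "[DEFAULT]\nauth_domain = heat\n"
print(get_auth_domain(old_conf))  # -> heat (old option still honored)
```

In a real project this is what oslo.config's deprecated-option support
gives you for free; the point is that both spellings work for one cycle.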

> Then I guess my question is how do we rationalize the requirements of
> trunk-chasing downstream users wrt the time based releases as part of the
> deprecation cycle policy?
> 
> i.e. if we branch stable/icehouse then I immediately post a patch removing
> the deprecated fallback path, it may still break downstream users who don't
> care about the stable-branch process and I have no way of knowing (other
> than, as in this case, finding out too late when they shout at me..).

So I will not say the model is anything close to perfect, however we are
under freeze right now. So if the last patch before freeze specified
deprecation, and the first patch in new master was to remove the thing,
we're still talking about 6 weeks signaling in tree. For CDing folks
that should be sufficient.

I do think we probably need to move to release or time based deprecation
models. So what is intended by a 1 release deprecation is really 5 - 6
months. And what's intended by a 2 release deprecation is really 11 - 12
months.

That's probably a reasonable conversation all on its own.

> Thanks for contributing to the discussion, hopefully it's not only me who's
> somewhat confused by the process, and the requirement to satisfy two quite
> different sets of release constraints for downstream deployers.
> 
> Perhaps we need a wiki page similar to the StableBranch page which spells
> out the requirements for projects wrt trunk-chasing deployers, unless one
> exists already?

Re: [openstack-dev] [Murano] New API methods for App Catalog UI

2014-03-11 Thread Alexander Tivelkov
Hi Georgy,

There was already a discussion of these APIs [1] some time ago:
a draft of the API has been proposed [2], an etherpad for
discussion and feedback was created [3], and the direction was
approved in the blueprint [4]. As far as I know, the work on this set
of APIs has already begun.
Please align your vision with this spec.
We may discuss it today on the weekly meeting in IRC.


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/028886.html
[2] http://docs.muranorepositoryapi.apiary.io
[3] https://etherpad.openstack.org/p/muranorepository-api
[4] https://blueprints.launchpad.net/murano/+spec/murano-repository-api-v2
--
Regards,
Alexander Tivelkov


On Mon, Mar 10, 2014 at 7:21 PM, Georgy Okrokvertskhov
 wrote:
> Hi,
>
> Murano is moving towards App Catalog functionality and in order to support
> this new aspect in the UI we need to add new API methods to cover App
> Catalog operations. Currently the vision for App Catalog API is the
> following:
> 1) All App create operations will be covered by metadata repository API
> which will eventually be a part of Glance Artifacts functionality. New
> application creation will be technically a creation of a new artifact and
> uploading it to metadata repository. The sharing and distribution aspects
> will be covered by the same artifact repository functionality.
>
> 2) App Listing and App Catalog rendering will be covered by a new Murano
> API. The reason for that is to keep UI thin and keep package representation
> aspects out of the general artifacts repository.
>
> The list of new API functions is available here:
> https://etherpad.openstack.org/p/MuranoAppCatalogAPI
>
> This is a first draft to cover minimal UI rendering requirements.
>
> Thanks
> Georgy
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty

I spoke with Avishay on IRC and he gave me this link...

https://review.openstack.org/#/c/76471/

So this is a known issue and the fix is in the works ^^

thanx,
deepak

On 03/11/2014 12:53 PM, Deepak C Shetty wrote:

Hi All,
I am using devstack with cinder git head @ 
f888e412b0d0fdb0426045a9c55e0be0390f842c


I am seeing the below error while trying to do cinder migrate for the 
glusterfs backend. I don't think it's backend specific tho', as the 
failure is in the common rpc layer of code.


http://paste.fedoraproject.org/84189/45169021/

Any pointers to get past this is appreciated.

thanx,
deepak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgrades required config changes

2014-03-11 Thread Steven Hardy
On Tue, Mar 11, 2014 at 07:04:32AM -0400, Sean Dague wrote:
> On 03/04/2014 12:39 PM, Steven Hardy wrote:
> > Hi all,
> > 
> > As some of you know, I've been working on the instance-users blueprint[1].
> > 
> > This blueprint implementation requires three new items to be added to the
> > heat.conf, or some resources (those which create keystone users) will not
> > work:
> > 
> > https://review.openstack.org/#/c/73978/
> > https://review.openstack.org/#/c/76035/
> > 
> > So on upgrade, the deployer must create a keystone domain and domain-admin
> > user and add the details to heat.conf, as has already been done in devstack[2].
> > 
> > The changes required for this to work have already landed in devstack, but
> > it was discussed today and Clint suggested this may be unacceptable
> > upgrade behavior - I'm not sure, so I'm looking for guidance/comments.
> > 
> > My plan was/is:
> > - Make devstack work
> > - Talk to tripleo folks to assist in any transition (what prompted this
> >   discussion)
> > - Document the upgrade requirements in the Icehouse release notes so the
> >   wider community can upgrade from Havana.
> > - Try to give a heads-up to those maintaining downstream heat deployment
> >   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
> >   Icehouse.
> > 
> > However some have suggested there may be an openstack-wide policy which
> > requires peoples old config files to continue working indefinitely on
> > upgrade between versions - is this right?  If so where is it documented?
> 
> This is basically enforced in code in grenade, the language for this
> actually got lost in the project requirements discussion in the TC, I'll
> bring that back in the post graduation requirements discussion we're
> having again.
> 
> The issue is - Heat still doesn't materially participate in grenade.
> Heat is substantially far behind the other integrated projects in its
> integration with the upstream testing. Only Monday did we finally start
> gating on a real unit of work for Heat (the heat-slow jobs). If I was
> letter grading projects right now on upstream testing I'd give Nova an
> A, Neutron a C (still no full run, no working grenade), and Heat a D.

Thanks for this, I know we have a lot more work to do in tempest, but
evidently grenade integration is something we should prioritize as soon as
possible.  Any volunteers out there? :)

> So in short. Heat did the wrong thing. You should be able to use your
> configs from the last release. This is what all the mature projects in
> OpenStack do. In the event that you *have* to make a change like that it
> requires an UpgradeImpact tag in the commit. And those should be limited
> really aggressively. This is the whole point of the deprecation cycle.

Ok, got that message loud and clear now, thanks ;)

Do you have a link to docs which describe the deprecation cycle and
openstack-wide policy for introducing backwards incompatible changes?

The thing I'm still not that clear on, is if we want to eventually require
a specific config option, and we can't just have an upgrade requirement to
add it as I was expecting - is it enough to just output a warning for one
release cycle then require it?

Then I guess my question is how do we rationalize the requirements of
trunk-chasing downstream users wrt the time based releases as part of the
deprecation cycle policy?

i.e. if we branch stable/icehouse then I immediately post a patch removing
the deprecated fallback path, it may still break downstream users who don't
care about the stable-branch process and I have no way of knowing (other
than, as in this case, finding out too late when they shout at me..).

Thanks for contributing to the discussion, hopefully it's not only me who's
somewhat confused by the process, and the requirement to satisfy two quite
different sets of release constraints for downstream deployers.

Perhaps we need a wiki page similar to the StableBranch page which spells
out the requirements for projects wrt trunk-chasing deployers, unless one
exists already?

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-11 Thread Zhangleiqiang
> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> Sent: Tuesday, March 11, 2014 5:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume delete
> protection
> 
> On Tue, Mar 11, 2014 at 5:09 PM, Zhangleiqiang 
> wrote:
> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
> >> Sent: Tuesday, March 11, 2014 4:29 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> delete protection
> >>
> >> On Tue, Mar 11, 2014 at 11:38 AM, Zhangleiqiang
> >>  wrote:
> >> > Hi all,
> >> >
> >> >
> >> >
> >> > Besides the "soft-delete" state for volumes, I think there is need
> >> > for introducing another "fake delete" state for volumes which have
> snapshot.
> >> >
> >> >
> >> >
> >> > Current Openstack refuses the delete request for volumes which have
> >> > snapshot. However, we will have no method to limit users to only
> >> > use the specific snapshot other than the original volume ,  because
> >> > the original volume is always visible for the users.
> >> >
> >> >
> >> >
> >> > So I think we can permit users to delete volumes which have
> >> > snapshots, and mark the volume as "fake delete" state. When all of
> >> > the snapshots of the volume have already deleted, the original
> >> > volume will be removed automatically.
> >> >
> >> Can you describe the actual use case for this?  I not sure I follow
> >> why operator would like to limit the owner of the volume to only use
> >> specific version of snapshot.  It sounds like you are adding another
> >> layer.  If that's the case, the problem should be solved at upper layer
> instead of Cinder.
> >
> > For example, one tenant's volume quota is five, and has 5 volumes and 1
> snapshot already. If the data in base volume of the snapshot is corrupted, the
> user will need to create a new volume from the snapshot, but this operation
> will be failed because there are already 5 volumes, and the original volume
> cannot be deleted, too.
> >
> Hmm, how likely is it the snapshot is still sane when the base volume is
> corrupted?  

If the snapshot of the volume is COW, then the snapshot will still be sane when 
the base volume is corrupted.
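
To illustrate why a COW snapshot survives later writes to (or corruption
of) its base, here is a toy model of the copy-on-write idea - illustrative
only, not Cinder's implementation:

```python
class CowSnapshot:
    """Toy copy-on-write snapshot: a block is copied aside only when the
    base is about to be overwritten, so later base corruption leaves the
    snapshot's view intact."""

    def __init__(self, base):
        self.base = base          # shared, mutable list of blocks
        self.saved = {}           # index -> original block content

    def write_base(self, index, data):
        # Copy the old block into the snapshot store before overwriting.
        if index not in self.saved:
            self.saved[index] = self.base[index]
        self.base[index] = data

    def read_snapshot(self, index):
        return self.saved.get(index, self.base[index])

base = ["A", "B", "C"]
snap = CowSnapshot(base)
snap.write_base(1, "CORRUPT")   # base block 1 damaged after the snapshot
print(base)                                       # ['A', 'CORRUPT', 'C']
print([snap.read_snapshot(i) for i in range(3)])  # ['A', 'B', 'C']
```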

> Even if this case is possible, I don't see the 'fake delete' proposal
> is the right way to solve the problem.  IMO, it simply violates what quota
> system is designed for and complicates quota metrics calculation (there would
> be actual quota which is only visible to admin/operator and an end-user facing
> quota).  Why not contact operator to bump the upper limit of the volume
> quota instead?

I had some misunderstanding of Cinder's snapshots. 
"Fake delete" is common where there is a "chained snapshot" or "snapshot 
tree" mechanism. In Cinder, however, only a volume can have snapshots; a 
snapshot cannot be snapshotted again. 

I agree with your method of bumping the upper limit. 

Thanks for your explanation.


> >> >
> >> >
> >> >
> >> >
> >> > Any thoughts? Welcome any advices.
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> >
> >> > zhangleiqiang
> >> >
> >> >
> >> >
> >> > Best Regards
> >> >
> >> >
> >> >
> >> > From: John Griffith [mailto:john.griff...@solidfire.com]
> >> > Sent: Thursday, March 06, 2014 8:38 PM
> >> >
> >> >
> >> > To: OpenStack Development Mailing List (not for usage questions)
> >> > Subject: Re: [openstack-dev] [Nova][Cinder] Feature about volume
> >> > delete protection
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On Thu, Mar 6, 2014 at 9:13 PM, John Garbutt 
> >> wrote:
> >> >
> >> > On 6 March 2014 08:50, zhangyu (AI)  wrote:
> >> >> It seems to be an interesting idea. In fact, a China-based public
> >> >> IaaS, QingCloud, has provided a similar feature to their virtual
> >> >> servers. Within 2 hours after a virtual server is deleted, the
> >> >> server owner can decide whether or not to cancel this deletion and
> >> >> re-cycle that "deleted" virtual server.
> >> >>
> >> >> People make mistakes, while such a feature helps in urgent cases.
> >> >> Any idea here?
> >> >
> >> > Nova has soft_delete and restore for servers. That sounds similar?
> >> >
> >> > John
> >> >
> >> >
> >> >>
> >> >> -Original Message-
> >> >> From: Zhangleiqiang [mailto:zhangleiqi...@huawei.com]
> >> >> Sent: Thursday, March 06, 2014 2:19 PM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: [openstack-dev] [Nova][Cinder] Feature about volume
> >> >> delete protection
> >> >>
> >> >> Hi all,
> >> >>
> >> >> Current OpenStack provides a delete volume function to the user,
> >> >> but it seems there is no protection against accidental deletion.
> >> >>
> >> >> As we know, the data in a volume may be very important and valuable,
> >> >> so it's better to provide a method for the user to avoid deleting a
> >> >> volume by mistake.
> >> >>
> >> >> Such as:
> >> >> We can provide a safe 

Re: [openstack-dev] [heat]Policy on upgrades required config changes

2014-03-11 Thread Sean Dague
On 03/04/2014 12:39 PM, Steven Hardy wrote:
> Hi all,
> 
> As some of you know, I've been working on the instance-users blueprint[1].
> 
> This blueprint implementation requires three new items to be added to the
> heat.conf, or some resources (those which create keystone users) will not
> work:
> 
> https://review.openstack.org/#/c/73978/
> https://review.openstack.org/#/c/76035/
> 
> So on upgrade, the deployer must create a keystone domain and domain-admin
> user and add the details to heat.conf, as has already been done in devstack[2].
> 
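
For reference, the devstack setup referenced in [2] amounts to roughly the
following - the names, the password, and the exact option spellings are
taken from the reviews above and should be treated as illustrative:

```shell
# Create the stack-user domain and a domain admin using keystone v3
# (placeholder names/password - adapt to your deployment):
openstack --os-identity-api-version=3 domain create heat \
    --description "Owns users and projects created by heat"
openstack --os-identity-api-version=3 user create heat_domain_admin \
    --domain heat --password <secret>
openstack --os-identity-api-version=3 role add \
    --user heat_domain_admin --domain heat admin

# Then add the details to heat.conf:
#   stack_user_domain = <domain id from the first command>
#   stack_domain_admin = heat_domain_admin
#   stack_domain_admin_password = <secret>
```
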
> The changes required for this to work have already landed in devstack, but
> it was discussed today and Clint suggested this may be unacceptable
> upgrade behavior - I'm not sure, so I'm looking for guidance/comments.
> 
> My plan was/is:
> - Make devstack work
> - Talk to tripleo folks to assist in any transition (what prompted this
>   discussion)
> - Document the upgrade requirements in the Icehouse release notes so the
>   wider community can upgrade from Havana.
> - Try to give a heads-up to those maintaining downstream heat deployment
>   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
>   Icehouse.
> 
> However some have suggested there may be an openstack-wide policy which
> requires peoples old config files to continue working indefinitely on
> upgrade between versions - is this right?  If so where is it documented?

This is basically enforced in code in grenade, the language for this
actually got lost in the project requirements discussion in the TC, I'll
bring that back in the post graduation requirements discussion we're
having again.

The issue is - Heat still doesn't materially participate in grenade.
Heat is substantially far behind the other integrated projects in its
integration with the upstream testing. Only Monday did we finally start
gating on a real unit of work for Heat (the heat-slow jobs). If I was
letter grading projects right now on upstream testing I'd give Nova an
A, Neutron a C (still no full run, no working grenade), and Heat a D.

So in short. Heat did the wrong thing. You should be able to use your
configs from the last release. This is what all the mature projects in
OpenStack do. In the event that you *have* to make a change like that it
requires an UpgradeImpact tag in the commit. And those should be limited
really aggressively. This is the whole point of the deprecation cycle.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder: AttributeError: No such RPC function 'create_export'

2014-03-11 Thread Deepak C Shetty
I think you are referring to backend-assisted migration; I am referring 
to the generic one (with the support put forth by Avishay of IBM).


The generic flow of migration should work as long as the backend provides 
support for:

1) create volume
2) attach/detach volume

It may not be ideal, but it should work, using 'dd' to do the copy of 
the data between the src and dest volumes. I am currently looking at this 
generic migrate only.
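
The generic flow above (create a destination volume, attach both, dd the
data across) can be sketched with toy objects - illustrative only, not the
real Cinder driver API:

```python
import io

class ToyVolume:
    """Stand-in for an attached volume exposing its block device as a stream."""
    def __init__(self, size):
        self.dev = io.BytesIO(b"\0" * size)

def generic_migrate(src, dest, blocksize=4096):
    """dd-style copy between two attached volumes: read src in fixed-size
    blocks and write them to dest until EOF."""
    src.dev.seek(0)
    dest.dev.seek(0)
    while True:
        chunk = src.dev.read(blocksize)
        if not chunk:
            break
        dest.dev.write(chunk)

src = ToyVolume(16384)
src.dev.seek(0); src.dev.write(b"important data")
dest = ToyVolume(16384)          # "create volume" on the destination backend
generic_migrate(src, dest)       # the dd-equivalent copy
assert dest.dev.getvalue() == src.dev.getvalue()
```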


On 03/11/2014 02:50 PM, Swapnil Kulkarni wrote:

Hi Deepak,

When you say you are using glusterfs as backend, you are using glusterfs
driver, is it correct?

Best Regards,
Swapnil Kulkarni
irc : coolsvap


On Tue, Mar 11, 2014 at 2:17 PM, Deepak C Shetty  wrote:

Swapnil,
 The failure is not in the gluster-specific part of the code;
IIUC it's in the rpc/dispatcher area, so it shouldn't be gluster specific.


On 03/11/2014 01:06 PM, Swapnil Kulkarni wrote:

Hi Deepak,

I believe migrate_volume is not implemented in glusterfs, which
causes the above error. I have seen similar errors earlier. I am
currently implementing migrate volume and testing it, and will push it
upstream once it is successfully tested.

Best Regards,
Swapnil Kulkarni
irc : coolsvap
swapnilkulkarni2...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap

*"It's better to SHARE"*



On Tue, Mar 11, 2014 at 12:53 PM, Deepak C Shetty  wrote:

 Hi All,
  I am using devstack with cinder git head @
 f888e412b0d0fdb0426045a9c55e0be0390f842c


 I am seeing the below error while trying to do cinder
migrate for
 glusterfs backend. I don't think its backend specific tho'
as the
 failure is in the common rpc layer of code.

http://paste.fedoraproject.org/84189/45169021/


 Any pointers to get past this is appreciated.

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron]A Question about creating instance with duplicate sg_name

2014-03-11 Thread Lingxian Kong
Hi Xurong:

If Neutron is used for security-group functionality, do not come back to
Nova for that. The security-group support in Nova is just for backward
compatibility, IMHO.
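
To make the ambiguity concrete, the matching logic quoted below boils down
to something like this (a simplified sketch, not the actual nova code path):

```python
def resolve_security_group(requested, user_security_groups):
    """Simplified sketch of nova's name-or-ID matching: an exact ID match
    is unambiguous, while a name shared by several groups raises - which is
    why booting with a duplicated sg name fails."""
    name_match = None
    uuid_match = None
    for sg in user_security_groups:
        if sg['name'] == requested:
            if name_match:
                raise ValueError(
                    "Multiple security groups found matching %r. "
                    "Use an ID to be more specific." % requested)
            name_match = sg['id']
        elif sg['id'] == requested:
            uuid_match = sg['id']
    return uuid_match or name_match

groups = [{'id': 'uuid-1', 'name': 'web'}, {'id': 'uuid-2', 'name': 'web'}]
print(resolve_security_group('uuid-2', groups))  # -> uuid-2 (ID is unambiguous)
# resolve_security_group('web', groups) raises ValueError
```

So passing security group IDs rather than names when booting avoids the
NoUniqueMatch error entirely.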


2014-03-11 16:20 GMT+08:00 Xurong Yang :

> Neutron allows creating security groups with duplicate names,
> so an exception happens when creating an instance with a duplicated sg name.
> The code follows:
> 
> security_groups = kwargs.get('security_groups', [])
> security_group_ids = []
>
> # TODO(arosen) Should optimize more to do direct query for security
> # group if len(security_groups) == 1
> if len(security_groups):
> search_opts = {'tenant_id': instance['project_id']}
> user_security_groups = neutron.list_security_groups(
> **search_opts).get('security_groups')
>
> for security_group in security_groups:
> name_match = None
> uuid_match = None
> for user_security_group in user_security_groups:
> if user_security_group['name'] == security_group:
> if name_match:  # <-- exception happens here
> raise exception.NoUniqueMatch(
> _("Multiple security groups found matching"
>   " '%s'. Use an ID to be more specific.") %
>security_group)
>
> name_match = user_security_group['id']
>   
>
> so it may be improper to create an instance with the sg name parameter.
> Any responses are appreciated.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgrades required config changes

2014-03-11 Thread Steven Hardy
Hi Keith & Clint,

On Tue, Mar 11, 2014 at 05:05:21AM +, Keith Bray wrote:
> I want to echo Clint's responses... We do run close to Heat master here at
> Rackspace, and we'd be happy to set up a non-voting job to notify when a
> review would break Heat on our cloud if that would be beneficial.  Some of
> the breaks we have seen have been things that simply weren't caught in
> code review (a human intensive effort), were specific to the way we
> configure Heat for large-scale cloud use, applicable to the entire Heat
> project, and not necessarily service provider specific.

I appreciate the feedback and I've certainly learned something during
this process and will endeavor to provide uniformly backwards compatible
changes in future.  I certainly agree we can do things better next time :)

Hopefully you can appreciate that the auth related features I've been
working on have been a large and difficult undertaking, and that once the
transitional pain has passed will bring considerable benefits for both
users and deployers.

One frustration I have is the lack of review feedback for most of the
instance-users and v3 keystone work (except for a small and dedicated
subset of the heat-core team, thanks!).  So my feedback to you is if you're
running close to master, we really really need your help during the review
process, to avoid post-merge stress for everyone :)

Re gate CI - it sounds like a great idea, voting and non-voting feedback is
hugely valuable in addition to human reviewer feedback, so hopefully we can
work towards getting such tests in place.

Anyway, apologies again for any inconvenience, hopefully all is working OK
now with the fallback patch I provided.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup "start: job failed to start"

2014-03-11 Thread Roey Chen
Forwarding the answer to the relevant mailing lists:

---

Hi,

Hope this could help,

I've encountered this issue myself not too long ago on an Ubuntu 12.04 host;
it didn't happen again after adjusting the kernel semaphore limit 
parameters [1]:

Adding this [2] line to `/etc/sysctl.conf` seems to do the trick.


- Roey


[1] http://paste.openstack.org/show/73086/
[2] http://paste.openstack.org/show/73082/
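
The exact values are in the pastes above; in general the current limits
can be inspected, and a sysctl.conf change applied without rebooting,
like so:

```shell
# Show current semaphore limits: SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem

# Show semaphore limits and current usage
ipcs -ls

# After adding the kernel.sem line from [2] to /etc/sysctl.conf:
sudo sysctl -p
```
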


From: Sukhdev Kapur [mailto:sukhdevka...@gmail.com]
Sent: Monday, March 10, 2014 5:56 PM
To: Dane Leblanc (leblancd)
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-in...@lists.openstack.org; openstack...@lists.openstack.org
Subject: Re: [OpenStack-Infra] tgt restart fails in Cinder startup "start: job 
failed to start"

I see the same issue. This issue has crept in during the latest flurry of 
check-ins. I started noticing this issue a day or two before the Icehouse 
Feature Freeze deadline.

I tried restarting tgt as well, but, it does not help.

However, rebooting the VM helps clear it up.

Has anybody else seen it as well? Does anybody have a solution for it?

Thanks
-Sukhdev




On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) 
 wrote:
I don't know if anyone can give me some troubleshooting advice with this issue.

I'm seeing an occasional problem whereby after several DevStack 
unstack.sh/stack.sh cycles, the tgt daemon (tgtd) 
fails to start during Cinder startup.  Here's a snippet from the stack.sh log:

2014-03-10 07:09:45.214 | Starting Cinder
2014-03-10 07:09:45.215 | + return 0
2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
2014-03-10 07:09:45.219 | + is_ubuntu
2014-03-10 07:09:45.220 | + [[ -z deb ]]
2014-03-10 07:09:45.221 | + '[' deb = deb ']'
2014-03-10 07:09:45.222 | + sudo service tgt restart
2014-03-10 07:09:45.223 | stop: Unknown instance:
2014-03-10 07:09:45.619 | start: Job failed to start
jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | + exit_trap
2014-03-10 07:09:45.622 | + local r=1
2014-03-10 07:09:45.623 | ++ jobs -p
2014-03-10 07:09:45.624 | + jobs=
2014-03-10 07:09:45.625 | + [[ -n '' ]]
2014-03-10 07:09:45.626 | + exit 1

If I try to restart tgt manually without success:

jenkins@neutronpluginsci:~$ sudo service tgt restart
stop: Unknown instance:
start: Job failed to start
jenkins@neutronpluginsci:~$ sudo tgtd
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
(null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
(null): fcoe_init(214) (null)
(null): fcoe_create_interface(171) no interface specified.
jenkins@neutronpluginsci:~$

The config in /etc/tgt is:

jenkins@neutronpluginsci:/etc/tgt$ ls -l
total 8
drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
lrwxrwxrwx 1 root root   30 Mar 10 06:50 stack.d -> 
/opt/stack/data/cinder/volumes
-rw-r--r-- 1 root root   58 Mar 10 07:07 targets.conf
jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
include /etc/tgt/conf.d/*.conf
include /etc/tgt/stack.d/*
jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
jenkins@neutronpluginsci:/etc/tgt$

I don't know if there's any missing Cinder config in my DevStack localrc files. 
Here's one that I'm using:

MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
enable_service mysql
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-l3
enable_service q-dhcp
enable_service q-meta
enable_service q-lbaas
enable_service neutron
enable_service tempest
VOLUME_BACKING_FILE_SIZE=2052M
Q_PLUGIN=cisco
declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
declare -A 
Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
TENANT_VLAN_RANGE=810:819
ENABLE_TENANT_VLANS=True
API_RATE_LIMIT=False
VERBOSE=True
DEBUG=True
LOGFILE=/opt/stack/logs/stack.sh.log
USE_SCREEN=True
SCREEN_LOGDIR=/opt/stack/logs

Here are links to a log showing another localrc file that I use, and the 
corresponding stack.sh log:

http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_console_log.txt
http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_stack_sh_log.txt

Does anyone have any advice on how to debug this, or recover from this (beyond 
rebooting the node)? Or am I missing any Cinder config?

Thanks in advance for any help on this!!!
Dane



___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org

  1   2   >