Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Michael Chapman
On Thu, Mar 27, 2014 at 4:10 PM, Robert Collins
wrote:

> On 27 March 2014 17:30, Tom Fifield  wrote:
>
> >> Does anyone disagree?
> >
> > /me raises hand
> >
> > When I was an operator, I regularly referred to the sample config files
> > in the git repository.
> >
> > If there weren't generated versions of the sample config in the repo, I
> > would probably grep the code (not an ideal user experience!). Running
> > some random script whose existence I don't know about, and which might
> > depend on having something else installed, is probably not something
> > that would happen.
>
> So, I think it's important you have sample configs to refer to.
>
> Do they need to be in the git repo?
>
> Note that because libraries now export config options (which is the
> root of this problem!) you cannot ever know from the source all the
> options for a service - you *must* know the library versions you are
> running, to interrogate them for their options.
>
> We can - and should - have a discussion about the appropriateness of
> the layering leak we have today, but in the meantime this is breaking
> multiple projects every time any shared library that uses oslo.config
> changes any config option... so we need to solve the workflow aspect.
>
> How about we make a copy of the latest config for each project for
> each series - e.g. trunk of everything, Icehouse of servers with trunk
> of everything else, etc., and make that easily accessible?
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>


There are already some samples in the 'configuration reference' section of
docs.openstack.org, e.g.:

http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-identity.html#sample-configuration-files

However the compute and image sections opt for a formatted table, and the
network section is more like an installation guide.

If the samples are to be removed from github, perhaps our configuration
reference section could become first and foremost the set of sample
configuration files for each project + plugin, rather than samples being
merely a part of the reference doc as they are today.

I fairly consistently refer to the github copies of the samples. They also
allow me to refer to specific lines of the config when explaining concepts
over text. I am not against their removal, but if we were to remove them
I'd be disappointed if I had to search very far on docs.openstack.org to get
to them, and I would want the raw files instead of something formatted.
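For reference, the "interrogate them for their options" step Rob describes
can be sketched roughly like this (a minimal example; it assumes the
libraries expose their options through the 'oslo.config.opts' entry-point
namespace that the sample generator is moving towards, so treat the names
here as illustrative):

    from __future__ import print_function
    import pkg_resources

    # Each entry point is a callable returning (group, [opts]) pairs.
    for ep in pkg_resources.iter_entry_points('oslo.config.opts'):
        for group, opts in ep.load()():
            for opt in opts:
                print(ep.name, group or 'DEFAULT', opt.name)

Run against the library versions actually installed, this prints exactly the
option set a deployment will honour.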

 - Michael


Re: [openstack-dev] Add force detach to nova

2014-03-27 Thread Nikola Đipanov
On 03/27/2014 03:52 AM, Wanghao (S) wrote:
> Hi, all,
> 
>  
> 
> There is a use case: we have two nova components (call them nova A and
> nova B) and one cinder component. We attach a volume to an instance in
> nova A, and then the services of nova A become abnormal.
> 
> Because the volume is also wanted in nova B, we use the cinder API
> "force detach volume" to free this volume. But when nova A is back to
> normal, nova can't detach this volume from the instance using the nova
> API "detach volume",
> 
> as nova checks that the volume state must be "attached".
> 
>  
> 
> I think we should add a "force detach" function to nova, just like
> "attach" and "detach", because after using force detach in cinder there
> is still some attach information in nova which can't be cleaned up by
> using the nova api "detach".
> 
>  
> 
> Here is the BP link:
> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
> 

Hi,

Please be aware that we are changing how BPs are done in Juno. You can
see more details in this email thread [1].

Also as mentioned on the bug that started this [2], the reason I think
this needs a BP is because there are edge cases to be discussed and
figured out - not only because we need to follow a process. Addressing
some of the concerns from the bug on the gerrit proposal would be great.

Thanks,

N.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030576.html
[2] https://bugs.launchpad.net/nova/+bug/1297127
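For context, the cinder-side "force detach" referenced above is the
os-force_detach volume action; a rough sketch of the workflow (the nova-side
call is purely hypothetical, illustrating what the blueprint proposes rather
than any existing API):

  # Existing cinder admin action: clears the attachment on the cinder side only.
  $ curl -X POST "$CINDER_ENDPOINT/volumes/$VOLUME_ID/action" \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"os-force_detach": {}}'

  # Hypothetical nova counterpart argued for by the BP; it would also clean
  # up the attachment info (e.g. the block device mapping) that nova keeps:
  # $ nova force-detach <server> <volume>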
>  
> 
> Any suggestion is great. THX~
> 
>  
> 
> Sorry for the first email without subject, please ignore it.
> 
> 
> 




Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-27 Thread Tomas Sedovic
On 25/03/14 21:17, Robert Collins wrote:
> TripleO has just seen an influx of new contributors. \o/. Flip side -
> we're now slipping on reviews /o\.
> 
> In the meeting today we had basically two answers: more cores, and
> more work by cores.
> 
> We're slipping by 2 reviews a day, which given 16 cores is a small amount.
> 
> I'm going to propose some changes to core in the next couple of days -
> I need to go and re-read a bunch of reviews first - but, right now we
> don't have a hard lower bound on the number of reviews we request
> cores commit to (on average).
> 
> We're seeing 39/day from the 16 cores - which isn't enough as we're
> falling behind. That's 2.5 or so. So - I'd like to ask all cores to
> commit to doing 3 reviews a day, across all of tripleo (e.g. if your
> favourite stuff is all reviewed, find two new things to review even if
> outside your comfort zone :)).

I've let my reviewing duties slip in the last few weeks. This doesn't
sound unreasonable so yeah, count me in.

> 
> And we always need more cores - so if you're not a core, this proposal
> implies that we'll be asking that you a) demonstrate you can sustain 3
> reviews a day on average as part of stepping up, and b) be willing to
> commit to that.
> 
> Obviously if we have enough cores we can lower the minimum commitment
> - so I don't think this figure should be fixed in stone.
> 
> And now - time for a loose vote - who (who is a tripleo core today)
> supports / disagrees with this proposal - let's get some consensus
> here.
> 
> I'm in favour, obviously :), though it is hard to put reviews ahead of
> direct itch scratching, it's the only way to scale the project.
> 
> -Rob
> 




Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Andreas Jaeger
On 03/27/2014 06:10 AM, Robert Collins wrote:
> On 27 March 2014 17:30, Tom Fifield  wrote:
> 
>>> Does anyone disagree?
>>
>> /me raises hand
>>
>> When I was an operator, I regularly referred to the sample config files
>> in the git repository.
>>
>> If there weren't generated versions of the sample config in the repo, I
>> would probably grep the code (not an ideal user experience!). Running
>> some random script whose existence I don't know about, and which might
>> depend on having something else installed, is probably not something
>> that would happen.
> 
> So, I think it's important you have sample configs to refer to.
> 
> Do they need to be in the git repo?
> 
> Note that because libraries now export config options (which is the
> root of this problem!) you cannot ever know from the source all the
> options for a service - you *must* know the library versions you are
> running, to interrogate them for their options.

And how shall we document this properly in the manuals?

> We can - and should - have a discussion about the appropriateness of
> the layering leak we have today, but in the meantime this is breaking
> multiple projects every time any shared library that uses oslo.config
> changes any config option... so we need to solve the workflow aspect.

Please discuss this together with the documentation team.

> How about we make a copy of the latest config for each project for
> each series - e.g. trunk of everything, Icehouse of servers with trunk
> of everything else, etc., and make that easily accessible?


Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Sylvain Bauza

On 27/03/2014 00:16, Sangeeta Singh wrote:

Hi,

To update the thread: the initial problem I mentioned is that when I
add a host to multiple availability zones (AZs) and then do a
"nova boot" without specifying an AZ, I expect the default zone to be
picked up.


This is due to the bug [1] as mentioned by Vish. I have updated the 
bug with the problem.


The validation fails during instance create due to [1].



Yup, I understood the issue, as the name of the AZ is consequently 
different from the default one.


I still need to jump on unittests and see what needs to be changed, but 
apart from that, the change by itself should be quick to do.


-Sylvain



Thanks,
Sangeeta

[1] https://bugs.launchpad.net/nova/+bug/1277230
From: Sylvain Bauza
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, March 26, 2014 at 1:34 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and
Host aggregates..


I can't agree more on this. Although the name sounds identical to AWS,
Nova AZs are *not* for segregating compute nodes, but rather for exposing
a certain sort of grouping to users.
Please see this pointer for more info if needed:
http://russellbryantnet.wordpress.com/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/


Regarding the bug mentioned by Vish [1], I'm the owner of it. I took
it a while ago, but things and priorities changed; I can take a look
at it this week and hope to deliver a patch by next week.


Thanks,
-Sylvain

[1] https://bugs.launchpad.net/nova/+bug/1277230




2014-03-26 19:00 GMT+01:00 Chris Friesen:


On 03/26/2014 11:17 AM, Khanh-Toan Tran wrote:

I don't know why you need a compute node that belongs to 2 different
availability-zones. Maybe I'm wrong, but for me it's logical that
availability-zones do not share the same compute nodes.
Availability-zones have the role of partitioning your compute nodes
into "zones" that are physically separated (broadly, this would
require separation of physical servers, networking equipment, power
sources, etc.). So when a user deploys 2 VMs in 2 different zones, he
knows that these VMs do not land on the same host, and if one zone
fails, the others continue working, thus the client will not lose all
of his VMs.


See Vish's email.

Even under the original meaning of availability zones you could
realistically have multiple orthogonal availability zones based on
"room", or "rack", or "network", or "dev" vs "production", or even
"has_ssds" and a compute node could reasonably be part of several
different zones because they're logically in different namespaces.

Then an end-user could boot an instance, specifying "networkA",
"dev", and "has_ssds" and only hosts that are part of all three
zones would match.

Even if they're not used for orthogonal purposes, multiple
availability zones might make sense.  Currently availability zones
are the only way an end-user has to specify anything about the
compute host he wants to run on.  So it's not entirely surprising
that people might want to overload them for purposes other than
physical partitioning of machines.

Chris










[openstack-dev] [TripleO] Short vids showing current UI state

2014-03-27 Thread mar...@redhat.com
Hi, we made a couple of short videos for an internal 'show and tell what
I'm currently working on' for colleagues - they show master
tuskar/tuskar-ui/horizon as of ~Tuesday this week:

Node Profile config @
https://www.youtube.com/watch?v=Ranfkx34dhg
Shows definition of Node Profiles for each of compute, control and
block-store. Assignment of these to the relevant Roles to prepare a deploy.
-
Nodes overview and deploy start @
https://www.youtube.com/watch?v=s2DAngZ8__E
Shows an overview of registered and available baremetal nodes (these are
poseur nodes in the vid) and then launching the deploy.

Thought these may be interesting to anyone that hasn't seen/set up the UI
recently,

marios



[openstack-dev] [Keystone] Icehouse RC1 available

2014-03-27 Thread Thierry Carrez
Hello everyone,

Like during the Havana cycle, Keystone is again the first project to
publish a release candidate in preparation for the Icehouse release!
Congratulations to the Keystone development team for reaching that
milestone first. 52 bugs were fixed in Keystone since feature freeze, 3
weeks ago.

The RC1 is available for download at:
https://launchpad.net/keystone/icehouse/icehouse-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.1 final
version on April 17. You are therefore strongly encouraged to test and
validate this tarball!

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/keystone/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/keystone/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branch of Keystone is now open for Juno
development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Flavio Percoco

On 27/03/14 18:10 +1300, Robert Collins wrote:

On 27 March 2014 17:30, Tom Fifield  wrote:


Does anyone disagree?


/me raises hand

When I was an operator, I regularly referred to the sample config files
in the git repository.

If there weren't generated versions of the sample config in the repo, I
would probably grep the code (not an ideal user experience!). Running
some random script whose existence I don't know about, and which might
depend on having something else installed, is probably not something
that would happen.


So, I think it's important you have sample configs to refer to.

Do they need to be in the git repo?

Note that because libraries now export config options (which is the
root of this problem!) you cannot ever know from the source all the
options for a service - you *must* know the library versions you are
running, to interrogate them for their options.

We can - and should - have a discussion about the appropriateness of
the layering leak we have today, but in the meantime this is breaking
multiple projects every time any shared library that uses oslo.config
changes any config option... so we need to solve the workflow aspect.

How about we make a copy of the latest config for each project for
each series - e.g. trunk of everything, Icehouse of servers with trunk
of everything else, etc., and make that easily accessible?


I'd agree with the original proposal if - and only if - something like
what Robert proposed here is done.

I'd say the config file could be generated for each milestone cut and
live in the milestone branch.

As Tom pointed out, referring to the sample configs is very useful
from many points of view (operations, support, development etc).

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-27 Thread Flavio Percoco

On 26/03/14 18:28 +, Kurt Griffiths wrote:

Team, what do you think about doing this for Marconi? It looks like we
indeed have a sample checked in:

https://review.openstack.org/#/c/83006/1/etc/marconi.conf.sample


Personally, I think we should keep the sample until generate_sample.sh
works on OS X (we could even volunteer to fix it); otherwise, people with
MBPs will be in a bit of a bind.



I replied to the original email with my thoughts. I don't think this
should be done until we have a place to put the generated sample files
for each milestone.

Cheers,
Flavio



---
Kurt G. | @kgriffs

On 3/26/14, 1:15 PM, "Russell Bryant"  wrote:


On 03/26/2014 02:10 PM, Clint Byrum wrote:

This is an issue that affects all of our git repos. If you are using
oslo.config, you will likely also be using the sample config generator.

However, for some reason we are all checking this generated file in.
This makes no sense, as we humans are not editing it, and it often
picks up config options from other things like libraries (keystoneclient
in particular). This has led to breakage in the gate a few times for
Heat, perhaps for others as well.

I move that we all rm this file from our git trees, and start generating
it as part of the install/dist process (I have no idea how to do
this..). This would require:

- rm sample files and add them to .gitignore in all trees
- Removing check_uptodate.sh from all trees/tox.ini's
- Generating file during dist/install process.

Does anyone disagree?


This has been done in Nova, except we don't have it generated during
install.  We just have instructions and a tox target that will do it if
you choose to.

https://git.openstack.org/cgit/openstack/nova/tree/etc/nova/README-nova.conf.txt

Related, adding instructions to generate without tox:
https://review.openstack.org/#/c/82533/

--
Russell Bryant





--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Dirk Müller
Hi,

>> When I was an operator, I regularly referred to the sample config files
>> in the git repository.

The sample config files in the git repository are tremendously useful for
any operator and OpenStack packager. Having to generate them with a
tox invocation is very cumbersome.
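For reference, the generation step in question looks roughly like this
(illustrative only -- the exact tox environment and script names vary by
project):

  $ tox -e genconfig                # if the project defines such an env
  # or, without tox, via the oslo-incubator helper many projects carry:
  $ ./tools/config/generate_sample.sh -b . -p nova -o etc/nova

Either way, the output depends on the library versions installed at
generation time, which is exactly the breakage described above.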

At a minimum, those config files should be part of the sdist tarball
(i.e. generated at sdist time).

> Do they need to be in the git repo?

IMHO yes, they should go alongside the code change.

> Note that because libraries now export config options (which is the
> root of this problem!) you cannot ever know from the source all the
> options for a service - you *must* know the library versions you are
> running, to interrogate them for their options.

The problem is that we hammer all the libraries' configuration options
into the main config file. If we had "include" support and simply
included the libraries' config options as separate (possibly
autogenerated) files, this problem would not occur, and it would avoid
the gate breakages.
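A sketch of what that could look like (hypothetical layout -- oslo.config
has no "include" directive today, although --config-dir already gives a
similar effect by reading every *.conf file in a directory):

  /etc/nova/nova.conf                         # service options, in git
  /etc/nova/nova.conf.d/keystoneclient.conf   # generated from installed lib
  /etc/nova/nova.conf.d/oslo-messaging.conf   # regenerated on lib upgrade

  $ nova-api --config-file /etc/nova/nova.conf --config-dir /etc/nova/nova.conf.d

The generated per-library files would then never need to live in the
service's git tree at all.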


Thanks,
Dirk



[openstack-dev] Add bootable option to cinder create command.

2014-03-27 Thread Hiroyuki Eguchi
The bootable status is set to "True" automatically when a user creates a
volume from an image.

But a user has to set the bootable status manually in the following situations:

 1. When a user creates an empty volume and installs an OS in the volume, like this:

 $ cinder create 10
 $ nova boot --image [image_uuid(iso format)] --flavor 1 \
   --block-device-mapping vdb=[volume_uuid]:10::0 ubuntu_vm

 2. When a user creates a bootable volume from an instance:

  
http://docs.openstack.org/grizzly/openstack-compute/admin/content/instance-creation.html#d6e6679


So I'm envisioning adding a bootable option like this:

 $ cinder create --bootable true 10

If you have any comments or suggestions, please let me know.
And please let me know if there's any existing discussion about this.

--thanks
--hiroyuki



Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Khanh-Toan Tran


- Original Message -
> From: "Sangeeta Singh" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Wednesday, March 26, 2014 6:54:18 PM
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
> aggregates..
> 
> 
> 
> On 3/26/14, 10:17 AM, "Khanh-Toan Tran" 
> wrote:
> 
> >
> >
> >- Original Message -
> >> From: "Sangeeta Singh" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >>
> >> Sent: Tuesday, March 25, 2014 9:50:00 PM
> >> Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
> >>aggregates..
> >> 
> >> Hi,
> >> 
> >> The availability Zones filter states that theoretically a compute node
> >>can be
> >> part of multiple availability zones. I have a requirement where I need
> >>to
> >> make a compute node part of 2 AZs. When I try to create a host aggregate
> >> with an AZ I can not add the node to two host aggregates that have AZ
> >>defined.
> >> However if I create a host aggregate without associating an AZ then I
> >>can
> >> add the compute nodes to it. After doing that I can update the
> >> host-aggregate and associate an AZ. This looks like a bug.
> >> 
> >> I can see the compute node to be listed in the 2 AZ with the
> >> availability-zone-list command.
> >> 
> >
> >Yes it appears a bug to me (apparently the AZ metadata insertion is
> >treated as normal metadata so no check is done), and so does the
> >message in the AvailabilityZoneFilter. I don't know why you need a
> >compute node that belongs to 2 different availability-zones. Maybe I'm
> >wrong but for me it's logical that availability-zones do not share the
> >same compute nodes. The "availability-zones" have the role of
> >partitioning your compute nodes into "zones" that are physically
> >separated (broadly, it would require separation of physical servers,
> >networking equipment, power sources, etc.). So when a user deploys 2
> >VMs in 2 different zones, he knows that these VMs do not land on the
> >same host and if one zone fails, the others continue working, thus the
> >client will not lose all of his VMs. It's smaller than Regions which
> >ensure total separation at the cost of low-layer connectivity and
> >central management (e.g. scheduling per region).
> >
> >See: http://www.linuxjournal.com/content/introduction-openstack
> >
> >The separate purpose of regrouping hosts with the same characteristics
> >is served by host-aggregates.
> >
> >> The problem that I have is that I can still not boot a VM on the
> >>compute node
> >> when I do not specify the AZ in the command though I have set the
> >>default
> >> availability zone and the default schedule zone in nova.conf.
> >> 
> >> I get the error "ERROR: The requested availability zone is not
> >>available"
> >> 
> >> What I am  trying to achieve is have two AZ that the user can select
> >>during
> >> the boot but then have a default AZ which has the HV from both AZ1 AND
> >>AZ2
> >> so that when the user does not specify any AZ in the boot command I
> >>scatter
> >> my VM on both the AZ in a balanced way.
> >> 
> >
> >I do not understand your goal. When you create two availability-zones and
> >put ALL of your compute nodes into these AZs, then if you don't specify
> >the AZ in your request, the AZFilter will automatically accept all hosts.
> >The default weigher (RAMWeigher) will then distribute the workload fairly
> >among these nodes regardless of the AZ they belong to. Maybe it is what you
> >want?
> 
>   With Havana that does not happen, as there is a concept of
> default_scheduler_zone, which is none if not specified; and when we specify
> one, we can only specify a single AZ, whereas in my case I basically want the
> 2 AZs that I create to both be considered default zones if nothing is
> specified.

If you look into the code of the AvailabilityZoneFilter, you'll see that the
filter automatically accepts a host if there is NO availability-zone in the
request, which is the case when the user does not specify an AZ. This is
exactly what I see on my OpenStack platform (Havana stable). FYI, I didn't set
up a default AZ in the config. So whenever I create several VMs without
specifying an AZ, the scheduler spreads the VMs across all hosts regardless of
their AZ.

What I think is lacking is that the user cannot select a set of AZs instead of
one or none right now.

> >
> >> Any pointers.
> >> 
> >> Thanks,
> >> Sangeeta
> >> 
> >
> 
> 

Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Jérôme Gallard
Hi Toan,
Is what you say related to:
https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones?


2014-03-27 10:37 GMT+01:00 Khanh-Toan Tran :

>
>
> - Original Message -
> > From: "Sangeeta Singh" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Wednesday, March 26, 2014 6:54:18 PM
> > Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and
> Host aggregates..
> >
> >
> >
> > On 3/26/14, 10:17 AM, "Khanh-Toan Tran" 
> > wrote:
> >
> > >
> > >
> > >- Original Message -
> > >> From: "Sangeeta Singh" 
> > >> To: "OpenStack Development Mailing List (not for usage questions)"
> > >>
> > >> Sent: Tuesday, March 25, 2014 9:50:00 PM
> > >> Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
> > >>aggregates..
> > >>
> > >> Hi,
> > >>
> > >> The availability Zones filter states that theoretically a compute node
> > >>can be
> > >> part of multiple availability zones. I have a requirement where I need
> > >>to
> > >> make a compute node part of 2 AZs. When I try to create a host
> aggregate
> > >> with an AZ I can not add the node to two host aggregates that have AZ
> > >>defined.
> > >> However if I create a host aggregate without associating an AZ then I
> > >>can
> > >> add the compute nodes to it. After doing that I can update the
> > >> host-aggregate and associate an AZ. This looks like a bug.
> > >>
> > >> I can see the compute node to be listed in the 2 AZ with the
> > >> availability-zone-list command.
> > >>
> > >
> > >Yes it appears a bug to me (apparently the AZ metadata insertion is
> > >treated as normal metadata so no check is done), and so does the
> > >message in the AvailabilityZoneFilter. I don't know why you need a
> > >compute node that belongs to 2 different availability-zones. Maybe I'm
> > >wrong but for me it's logical that availability-zones do not share the
> > >same compute nodes. The "availability-zones" have the role of
> > >partitioning your compute nodes into "zones" that are physically
> > >separated (broadly, it would require separation of physical servers,
> > >networking equipment, power sources, etc.). So when a user deploys 2
> > >VMs in 2 different zones, he knows that these VMs do not land on the
> > >same host and if one zone fails, the others continue working, thus the
> > >client will not lose all of his VMs. It's smaller than Regions which
> > >ensure total separation at the cost of low-layer connectivity and
> > >central management (e.g. scheduling per region).
> > >
> > >See: http://www.linuxjournal.com/content/introduction-openstack
> > >
> > >The separate purpose of regrouping hosts with the same characteristics
> > >is served by host-aggregates.
> > >
> > >> The problem that I have is that I can still not boot a VM on the
> > >>compute node
> > >> when I do not specify the AZ in the command though I have set the
> > >>default
> > >> availability zone and the default schedule zone in nova.conf.
> > >>
> > >> I get the error "ERROR: The requested availability zone is not
> > >>available"
> > >>
> > >> What I am  trying to achieve is have two AZ that the user can select
> > >>during
> > >> the boot but then have a default AZ which has the HV from both AZ1 AND
> > >>AZ2
> > >> so that when the user does not specify any AZ in the boot command I
> > >>scatter
> > >> my VM on both the AZ in a balanced way.
> > >>
> > >
> > >I do not understand your goal. When you create two availability-zones
> > >and put ALL of your compute nodes into these AZs, then if you don't
> > >specify the AZ in your request, the AZFilter will automatically accept
> > >all hosts. The default weigher (RAMWeigher) will then distribute the
> > >workload fairly among these nodes regardless of the AZ they belong to.
> > >Maybe it is what you want?
> >
> >   With Havana that does not happen as there is a concept of
> > default_scheduler_zone which is none if not specified and when we specify
> > one can only specify a single AZ whereas in my case I basically want the 2
> > AZs that I create to both be considered default zones if nothing is
> > specified.
>
> If you look into the code of the AvailabilityZoneFilter, you'll see that the
> filter automatically accepts a host if there is NO availability-zone in the
> request, which is the case when the user does not specify an AZ. This is
> exactly what I see on my OpenStack platform (Havana stable). FYI, I didn't set
> up a default AZ in the config. So whenever I create several VMs without
> specifying an AZ, the scheduler spreads the VMs across all hosts regardless of
> their AZ.
>
> What I think is lacking is that the user cannot select a set of AZs instead
> of one or none right now.
>
> > >
> > >> Any pointers.
> > >>
> > >> Thanks,
> > >> Sangeeta
> > >>

Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Khanh-Toan Tran
No, what I mean is that user should be able to specify multiple AZs in his
request, something like:



nova  boot   --flavor 2  --image ubuntu   --availability-zone AZ1
--availability-zone AZ2  vm1





From: Jérôme Gallard [mailto:gallard.jer...@gmail.com]
Sent: Thursday, 27 March 2014 10:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..




Hi Toan,
Is what you say related to:
https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones ?



2014-03-27 10:37 GMT+01:00 Khanh-Toan Tran:



- Original Message -
> From: "Sangeeta Singh" 
> To: "OpenStack Development Mailing List (not for usage questions)"


> Sent: Wednesday, March 26, 2014 6:54:18 PM
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and
Host aggregates..
>
>
>
> On 3/26/14, 10:17 AM, "Khanh-Toan Tran" 
> wrote:
>
> >
> >
> >- Original Message -
> >> From: "Sangeeta Singh" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >>
> >> Sent: Tuesday, March 25, 2014 9:50:00 PM
> >> Subject: [openstack-dev] [nova][scheduler] Availability Zones and
Host
> >>aggregates..
> >>
> >> Hi,
> >>
> >> The availability Zones filter states that theoretically a compute
node
> >>can be
> >> part of multiple availability zones. I have a requirement where I
need
> >>to
> >> make a compute node part of 2 AZs. When I try to create a host
aggregate
> >> with an AZ I can not add the node to two host aggregates that have AZ
> >>defined.
> >> However if I create a host aggregate without associating an AZ then I
> >>can
> >> add the compute nodes to it. After doing that I can update the
> >> host-aggregate and associate an AZ. This looks like a bug.
> >>
> >> I can see the compute node to be listed in the 2 AZ with the
> >> availability-zone-list command.
> >>
> >
> >Yes it appears a bug to me (apparently the AZ metadata insertion is
> >treated as normal metadata so no check is done), and so does the
> >message in the AvailabilityZoneFilter. I don't know why you need a
> >compute node that belongs to 2 different availability-zones. Maybe I'm
> >wrong but for me it's logical that availability-zones do not share the
> >same compute nodes. The "availability-zones" have the role of
> >partitioning your compute nodes into "zones" that are physically
> >separated (broadly, it would require separation of physical servers,
> >networking equipment, power sources, etc.). So when a user deploys 2
> >VMs in 2 different zones, he knows that these VMs do not land on the
> >same host and if one zone fails, the others continue working, thus the
> >client will not lose all of his VMs. It's smaller than Regions which
> >ensure total separation at the cost of low-layer connectivity and
> >central management (e.g. scheduling per region).
> >
> >See: http://www.linuxjournal.com/content/introduction-openstack
> >
> >The separate purpose of regrouping hosts with the same characteristics
> >is served by host-aggregates.
> >
> >> The problem that I have is that I can still not boot a VM on the
> >>compute node
> >> when I do not specify the AZ in the command though I have set the
> >>default
> >> availability zone and the default schedule zone in nova.conf.
> >>
> >> I get the error "ERROR: The requested availability zone is not
> >>available"
> >>
> >> What I am  trying to achieve is have two AZ that the user can select
> >>during
> >> the boot but then have a default AZ which has the HV from both AZ1
AND
> >>AZ2
> >> so that when the user does not specify any AZ in the boot command I
> >>scatter
> >> my VM on both the AZ in a balanced way.
> >>
> >
> >I do not understand your goal. When you create two availability-zones and
> >put ALL of your compute nodes into these AZs, then if you don't specify
> >the AZ in your request, the AZFilter will automatically accept all hosts.
> >The default weigher (RAMWeigher) will then distribute the workload fairly
> >among these nodes regardless of the AZ they belong to. Maybe it is what you
> >want?
>
>   With Havana that does not happen, as there is a concept of
> default_scheduler_zone, which is none if not specified; and when we
> specify one, we can only specify a single AZ, whereas in my case I
> basically want the 2 AZs that I create to both be considered default
> zones if nothing is specified.

If you look into the code of the AvailabilityZoneFilter, you'll see that the
filter automatically accepts a host if there is NO availability-zone in the
request, which is the case when the user does not specify an AZ. This is
exactly what I see on my OpenStack platform (Havana stable). FYI, I didn't
set up a default AZ in the config. So whenever I create several VMs without
specifying an AZ, the scheduler spreads the VMs across all hosts regardless
of their AZ.

What I think is lacking is that the user cannot select a set of AZs instead
of one or none right now.


> >
> >> Any pointers.
> >>
> >> Thanks,
> >> Sangeeta

Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Sylvain Bauza

On 27/03/2014 10:37, Khanh-Toan Tran wrote:


- Original Message -

From: "Sangeeta Singh" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 26, 2014 6:54:18 PM
Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host 
aggregates..



On 3/26/14, 10:17 AM, "Khanh-Toan Tran" 
wrote:



- Original Message -

From: "Sangeeta Singh" 
To: "OpenStack Development Mailing List (not for usage questions)"

Sent: Tuesday, March 25, 2014 9:50:00 PM
Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
aggregates..

Hi,

The availability Zones filter states that theoretically a compute node
can be
part of multiple availability zones. I have a requirement where I need
to
make a compute node part of 2 AZs. When I try to create a host aggregate
with an AZ I can not add the node to two host aggregates that have AZ
defined.
However if I create a host aggregate without associating an AZ then I
can
add the compute nodes to it. After doing that I can update the
host-aggregate and associate an AZ. This looks like a bug.

I can see the compute node to be listed in the 2 AZ with the
availability-zone-list command.


Yes it appears a bug to me (apparently the AZ metadata insertion is
treated as normal metadata so no check is done), and so does the
message in the AvailabilityZoneFilter. I don't know why you need a
compute node that belongs to 2 different availability-zones. Maybe I'm
wrong but for me it's logical that availability-zones do not share the
same compute nodes. The "availability-zones" have the role of
partitioning your compute nodes into "zones" that are physically
separated (broadly, it would require separation of physical servers,
networking equipment, power sources, etc.). So when a user deploys 2
VMs in 2 different zones, he knows that these VMs do not land on the
same host and if one zone fails, the others continue working, thus the
client will not lose all of his VMs. It's smaller than Regions which
ensure total separation at the cost of low-layer connectivity and
central management (e.g. scheduling per region).

See: http://www.linuxjournal.com/content/introduction-openstack

The separate purpose of regrouping hosts with the same characteristics is
served by host-aggregates.


The problem that I have is that I can still not boot a VM on the
compute node
when I do not specify the AZ in the command though I have set the
default
availability zone and the default schedule zone in nova.conf.

I get the error "ERROR: The requested availability zone is not
available"

What I am  trying to achieve is have two AZ that the user can select
during
the boot but then have a default AZ which has the HV from both AZ1 AND
AZ2
so that when the user does not specify any AZ in the boot command I
scatter
my VM on both the AZ in a balanced way.


I do not understand your goal. When you create two availability-zones and
put ALL of your compute nodes into these AZs, then if you don't specify
the AZ in your request, the AZFilter will automatically accept all hosts.
The default weigher (RAMWeigher) will then distribute the workload fairly
among these nodes regardless of the AZ they belong to. Maybe it is what you
want?

   With Havana that does not happen as there is a concept of
default_scheduler_zone which is none if not specified and when we specify
one can only specify a single AZ, whereas in my case I basically want the 2
AZs that I create to both be considered default zones if nothing is
specified.

If you look into the code of the AvailabilityZoneFilter, you'll see that the
filter automatically accepts a host if there is NO availability-zone in the
request, which is the case when the user does not specify an AZ. This is
exactly what I see on my OpenStack platform (Havana stable). FYI, I didn't
set up a default AZ in the config. So whenever I create several VMs without
specifying an AZ, the scheduler spreads the VMs across all hosts regardless
of their AZ.

What I think is lacking is that the user cannot select a set of AZs instead
of one or none right now.


That's because it is not the goal of this filter to exclude AZs if
none are specified ;-)


If you want to isolate, there is another filter responsible for this [1]

IMHO, a filter should still be as simple as possible. It's only the
combination of filters that should match any needs.


[1]
https://github.com/openstack/nova/blob/a2b454c87863fbb4cf3ddaa5a5fd22841339bc8f/nova/scheduler/filters/aggregate_multitenancy_isolation.py
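As an illustration of combining filters (a hedged example -- the exact list
is deployment-specific):

  # nova.conf on the scheduler node
  scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateMultiTenancyIsolation

AvailabilityZoneFilter handles the AZ requested at boot, while
AggregateMultiTenancyIsolation restricts aggregates to specific tenants via
the filter_tenant_id metadata key.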


-Sylvain

Any pointers.

Thanks,
Sangeeta






Re: [openstack-dev] [Mistral] task update at end of handle_task in executor

2014-03-27 Thread Nikolay Makhotkin
>
> In case of async tasks, the executor keeps the task status at RUNNING, and a
> 3rd party system will call convey_task_results on the engine.


Yes, it is correct. With this approach (in the sync-task case), we set the task
state to SUCCESS if it returns a result, or ERROR if we don't get a result
and an exception is raised.

 It is a bug and should be done before line 119:  self._do_task_action(
> db_task).


Also, lines 120-123 (
https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L120-L123)
are incorrect, since _do_task_action updates the task state. But we have
two different types of tasks (async, sync), and I think we should update the
task state to RUNNING before invoking _do_task_action and remove these lines -
https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L58-L61
.
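A minimal sketch of the reordering proposed here (simplified, not the actual
Mistral code; db_api.task_get and _update_task_status are stand-ins for the
real helpers):

  def handle_task(self, cntx, task_id=None, **kwargs):
      db_task = db_api.task_get(task_id)

      # Mark RUNNING *before* running the action, so a sync action's
      # SUCCESS/ERROR result (set inside _do_task_action) is never
      # overwritten afterwards, and an async action simply stays RUNNING
      # until a 3rd party calls convey_task_results on the engine.
      self._update_task_status(db_task, states.RUNNING)

      self._do_task_action(db_task)
      # ...and no status update here at all.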




On Thu, Mar 27, 2014 at 9:47 AM, Manas Kelshikar wrote:

> Yes. It is a bug and should be done before line 119:  self._do_task_action
> (db_task). It can definitely lead to bugs especially since
> _do_task_action itself updates the status.
>
>
>
>
> On Wed, Mar 26, 2014 at 8:46 PM, W Chan  wrote:
>
>> In addition, for sync tasks, it'll overwrite the task state from SUCCESS
>> to RUNNING.
>>
>>
>> On Wed, Mar 26, 2014 at 8:41 PM, Dmitri Zimine  wrote:
>>
>>> My understanding is: it's the engine which finalizes the task results,
>>> based on the status returned by the task via the convey_task_result call.
>>>
>>>
>>> https://github.com/stackforge/mistral/blob/master/mistral/engine/abstract_engine.py#L82-L84
>>>
>>> https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L44-L66
>>>
>>> In case of async tasks, the executor keeps the task status at RUNNING, and a
>>> 3rd party system will call convey_task_results on the engine.
>>>
>>>
>>> https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123
>>> ,
>>>
>>>
>>> This line however looks like a bug to me: at best it doesn't do much and
>>> at worst it overwrites the ERROR previously set in here
>>> http://tinyurl.com/q5lps2h
>>>
>>> Nikolay, any better explanation?
>>>
>>>
>>> DZ>
>>>
>>> On Mar 26, 2014, at 6:20 PM, W Chan  wrote:
>>>
>>> Regarding
>>> https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/executor/server.py#L123,
>>> should the status be set to SUCCESS instead of RUNNING?  If not, can
>>> someone clarify why the task should remain RUNNING?
>>>
>>> Thanks.
>>> Winson
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>


-- 
Best Regards,
Nikolay


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Khanh-Toan Tran


> -----Original Message-----
> From: Sylvain Bauza [mailto:sylvain.ba...@bull.net]
> Sent: Thursday, 27 March 2014 11:05
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
>
> On 27/03/2014 10:37, Khanh-Toan Tran wrote:
> >
> > - Original Message -
> >> From: "Sangeeta Singh" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Sent: Wednesday, March 26, 2014 6:54:18 PM
> >> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and 
> >> Host
> aggregates..
> >>
> >>
> >>
> >> On 3/26/14, 10:17 AM, "Khanh-Toan Tran"
> >> 
> >> wrote:
> >>
> >>>
> >>> - Original Message -
>  From: "Sangeeta Singh" 
>  To: "OpenStack Development Mailing List (not for usage questions)"
>  
>  Sent: Tuesday, March 25, 2014 9:50:00 PM
>  Subject: [openstack-dev] [nova][scheduler] Availability Zones and
>  Host aggregates..
> 
>  Hi,
> 
>  The availability Zones filter states that theoretically a compute
>  node can be part of multiple availability zones. I have a
>  requirement where I need to make a compute node part of 2 AZs. When
>  I try to create a host aggregate with an AZ I can not add the node in
>  two host aggregates that have AZ defined.
>  However if I create a host aggregate without associating an AZ then
>  I can add the compute nodes to it. After doing that I can update
>  the host-aggregate and associate an AZ. This looks like a bug.
> 
>  I can see the compute node to be listed in the 2 AZ with the
>  availability-zone-list command.
> 
> >>> Yes it appears a bug to me (apparently the AZ metadata insertion is
> >>> treated as normal metadata so no check is done), and so does
> >>> the message in the AvailabilityZoneFilter. I don't know why you need
> >>> a compute node that belongs to 2 different availability-zones. Maybe
> >>> I'm wrong but for me it's logical that availability-zones do not
> >>> share the same compute nodes. The "availability-zones" have the role
> >>> of partitioning your compute nodes into "zones" that are physically
> >>> separated (broadly, it would require separation of physical
> >>> servers, networking equipment, power sources, etc.). So when a
> >>> user deploys 2 VMs in 2 different zones, he knows that these VMs do
> >>> not land on the same host and if one zone fails, the others
> >>> continue working, thus the client will not lose all of his VMs. It's
> >>> smaller than Regions which ensure total separation at the cost of
> >>> low-layer connectivity and central management (e.g. scheduling per
> >>> region).
> >>>
> >>> See: http://www.linuxjournal.com/content/introduction-openstack
> >>>
> >>> The separate purpose of regrouping hosts with the same characteristics
> >>> is served by host-aggregates.
> >>>
>  The problem that I have is that I can still not boot a VM on the
>  compute node when I do not specify the AZ in the command though I
>  have set the default availability zone and the default schedule
>  zone in nova.conf.
> 
>  I get the error "ERROR: The requested availability zone is not
>  available"
> 
>  What I am  trying to achieve is have two AZ that the user can
>  select during the boot but then have a default AZ which has the HV
>  from both AZ1 AND
>  AZ2
>  so that when the user does not specify any AZ in the boot command I
>  scatter my VM on both the AZ in a balanced way.
> 
> >>> I do not understand your goal. When you create two
> >>> availability-zones and put ALL of your compute nodes into these AZs,
> >>> then if you don't specify the AZ in your request, the AZFilter will
> automatically accept all hosts.
> >>> The default weigher (RAMWeigher) will then distribute the workload
> >>> fairly among these nodes regardless of the AZ they belong to. Maybe it
> >>> is what you want?
> >>With Havana that does not happen as there is a concept of
> >> default_scheduler_zone which is none if not specified and when we
> >> specify one, we can only specify a single AZ, whereas in my case I
> >> basically want the 2 AZs that I create to both be considered default
> >> zones if nothing is specified.
> > If you look into the code of the AvailabilityZoneFilter, you'll see that
> > the filter automatically accepts a host if there is NO availability-zone
> > in the request, which is the case when the user does not specify an AZ.
> > This is exactly what I see on my OpenStack platform (Havana stable). FYI,
> > I didn't set up a default AZ in the config. So whenever I create several
> > VMs without specifying an AZ, the scheduler spreads the VMs across all
> > hosts regardless of their AZ.
> >
> > What I think is lacking is that the user cannot select a set of AZs
> > instead of one or none right now.
>
> That's because it is not the goal of this filter to exclude AZs if none are
> specified ;-)

[openstack-dev] dhcp port creation

2014-03-27 Thread hanish gogada
Hi all,

I tried out the following scenario on OpenStack Grizzly: I created a
network and a subnet on it. I attached this subnet to the router (I did not
launch any VMs on it). I restarted the l3 and dhcp agents; this created a
DHCP port on that network. Though there is no functionality breakage, is
this behavior expected?

 thanks & regards
hanish


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-27 Thread Salvatore Orlando
On 26 March 2014 19:19, James E. Blair  wrote:

> Salvatore Orlando  writes:
>
> > On another note, we noticed that the duplicated jobs currently executed
> for
> > redundancy in neutron actually all seem to point to the same build id.
> > I'm not sure then if we're actually executing each job twice or just
> > duplicating lines in the jenkins report.
>
> Thanks for catching that, and I'm sorry that didn't work right.  Zuul is
> in fact running the jobs twice, but it is only looking at one of them
> when sending reports and (more importantly) decided whether the change
> has succeeded or failed.  Fixing this is possible, of course, but turns
> out to be a rather complicated change.  Since we don't make heavy use of
> this feature, I lean toward simply instantiating multiple instances of
> identically configured jobs and invoking them (eg "neutron-pg-1",
> "neutron-pg-2").
>
> Matthew Treinish has already worked up a patch to do that, and I've
> written a patch to revert the incomplete feature from Zuul.
>

That makes sense to me. I think it is just a matter of how the results are
reported to gerrit, since from what I gather in logstash the jobs are
executed twice for each new patchset or recheck.


For the status of the full job, I took a look at the numbers reported by
Rossella.
All the bugs are already known; some of them are not even bugs; others have
been recently fixed (given the time span of Rossella's analysis, and the fact
that it also covers non-rebased patches, this kind of false positive is
possible).

Of all full job failures, 44% should be discarded.
Bug 1291611 (12%) is definitely not a neutron bug... hopefully.
Bug 1281969 (12%) is really too generic.
It bears the hallmark of bug 1283522, and therefore the high number might be
due to the fact that trunk was plagued by this bug up to a few days before
the analysis.
However, it's worth noting that there is also another instance of "lock
timeout" which has caused 11 failures in full job in the past week.
A new bug has been filed for this issue:
https://bugs.launchpad.net/neutron/+bug/1298355
Bug 1294603 was related to a test now skipped. It is still being debated
whether the problem lies in test design, neutron LBaaS or neutron L3.

The following bugs seem not to be neutron bugs:
1290642, 1291920, 1252971, 1257885

Bug 1292242 appears to have been fixed while the analysis was going on.
Bug 1277439 instead is already known to affect neutron jobs occasionally.

The actual state of the job is perhaps better than what the raw numbers
say. I would keep monitoring it, and then make it voting after the Icehouse
release is cut, so that we'll be able to deal with a possibly higher failure
rate in the "quiet" period of the release cycle.



> -Jim
>


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Chris Friesen

On 03/27/2014 05:03 AM, Khanh-Toan Tran wrote:


Well, perhaps I didn't make it clear enough. What I intended to say is
that the user should be able to select a set of AZs in his request,
something like:

 nova  boot   --flavor 2  --image ubuntu   --availability-zone
AZ1  --availability-zone AZ2  vm1


I think it would make more sense to make the availability-zone argument 
take a comma-separated list of zones.


nova boot --flavor 2 --image ubuntu --availability-zone AZ1,AZ2 vm1


Just to clarify, in a case like this we're talking about using the 
intersection of the two zones, right?  That's the only way that makes 
sense when using orthogonal zones like "hosts with fast CPUs" and "hosts 
with SSDs".


Chris



[openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient>=0.7.0

2014-03-27 Thread Julie Pichon
Hi,

I would like to request a depfreeze exception to bump up the keystone
client requirement [1], in order to reenable the ability for users to
update their own password with Keystone v3 in Horizon in time for
Icehouse [2]. This capability is requested by end-users quite often but
had to be "deactivated" at the end of Havana due to some issues that are
now resolved, thanks to the latest keystone client release. Since this
is a library we control, hopefully this shouldn't cause too much trouble
for packagers.

Thank you for your consideration.

Julie


[1] https://review.openstack.org/#/c/83287/
[2] https://review.openstack.org/#/c/59918/



Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-27 Thread Doug Hellmann
On Wed, Mar 26, 2014 at 2:54 PM, Joe Gordon  wrote:

>
>
>
> On Wed, Mar 26, 2014 at 9:51 AM, Doug Hellmann <
> doug.hellm...@dreamhost.com> wrote:
>
>>
>>
>>
>> On Tue, Mar 25, 2014 at 5:34 PM, Brant Knudson  wrote:
>>
>>>
>>>
>>>
>>> On Mon, Mar 24, 2014 at 5:49 AM, Sean Dague  wrote:
>>>
 ...

 Part of the challenge is that turning off DEBUG is currently embedded in code
 in oslo log, which makes it kind of awkward to set sane log levels for
 included libraries, because it requires an oslo round trip with code to
 all the projects to do it.


>>> Here's how it's done in Keystone:
>>> https://review.openstack.org/#/c/62068/10/keystone/config.py
>>>
>>> It's definitely awkward.
>>>
>>
>> https://bugs.launchpad.net/oslo/+bug/1297950
>>
>
> Currently when you enable debug logs in openstack, the root logger is set
> to debug and then we have to go and blacklist specific modules that we
> don't want to run on debug. What about instead adding an option to just set
> the openstack component at hand to debug log level and not the root logger?
> That way we won't have to keep maintaining a blacklist of modules that
> generate too many debug logs.
>

Doing that makes sense, too. Do we need a new option, or is there some
combination of existing options that we could interpret to mean "debug this
openstack app but not all of the libraries it is using"?
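
Something along these lines might already cover it - a minimal sketch with
stock Python logging (not oslo-specific; the 'nova' logger name is just an
example):

    import logging

    # The root logger (and therefore third-party libraries) stays quiet...
    logging.basicConfig(level=logging.WARNING)
    # ...while only the service's own logger runs at debug.
    logging.getLogger('nova').setLevel(logging.DEBUG)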

Doug



>
>
>>
>>
>> Doug
>>
>>
>>
>>>
>>>
>>> - Brant
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-27 Thread Susanne Balle
Geoff

I noticed the following two blueprints:


https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms


This blueprint defines a framework for creating, managing and deploying
Neutron advanced services implemented as virtual machines. The goal is to
enable advanced network services (e.g. Load Balancing, Security,
Monitoring) that may be supplied by third party vendors, are deployed as
virtual machines, and are launched and inserted into the tenant network on
demand.

https://blueprints.launchpad.net/neutron/+spec/dynamic-network-resource-mgmt


This blueprint proposes the addition to OpenStack of a framework for
dynamic network resource management (DNRM). This framework includes a new
OpenStack resource management and provisioning service, a refactored scheme
for Neutron API extensions, a policy-based resource allocation system, and
dynamic mapping of resources to plugins. It is intended to address a number
of use cases, including multivendor environments, policy-based resource
scheduling, and virtual appliance provisioning. We are proposing this as a
single blueprint in order to create an efficiently integrated
implementation.


The latter was submitted by you. This sounds like a step in the right
direction, and I would like to understand the design/scope/limitations in a
little more detail.


What is the status of your blueprint? Any early designs/use cases that you
would be willing to share?


Regards Susanne




On Tue, Mar 25, 2014 at 10:07 AM, Geoff Arnold wrote:

> There are (at least) two ways of expressing differentiation:
> - through an API extension, visible to the tenant
> - though an internal policy mechanism, with specific policies inferred
> from tenant or network characteristics
>
> Both have their place. Please don't fall into the trap of thinking that
> differentiation requires API extension.
>
> Sent from my iPhone - please excuse any typos or "creative" spelling
> corrections!
>
> On Mar 25, 2014, at 1:36 PM, Eugene Nikanorov 
> wrote:
>
> Hi John,
>
>
> On Tue, Mar 25, 2014 at 7:26 AM, John Dewey  wrote:
>
>>  I have a similar concern.  The underlying driver may support different
>> functionality, but the differentiators need exposed through the top level
>> API.
>>
Not really, the whole point of the service is to abstract the user from
the specifics of the backend implementation. So for any feature there is a
common API, not specific to any implementation.
>
> There could probably be some exceptions to this guideline that lie in the
> area of the admin API, but that's yet to be discussed.
>
>>
>> I see the SSL work is well underway, and I am in the process of defining
>> L7 scripting requirements.  However, I will definitely need L7 scripting
>> prior to the API being defined.
>> Is this where vendor extensions come into play?  I kinda like the route
>> the Ironic guys are taking with a "vendor passthru" API.
>>
> I may say that the core team has rejected the 'vendor extensions' idea due to
> the potential for a non-uniform user API experience. That becomes even worse
> with flavors introduced, because users don't know which vendor is backing the
> service they have created.
>
> Thanks,
> Eugene.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient>=0.7.0

2014-03-27 Thread Mark McLoughlin
On Thu, 2014-03-27 at 13:53 +, Julie Pichon wrote:
> Hi,
> 
> I would like to request a depfreeze exception to bump up the keystone
> client requirement [1], in order to reenable the ability for users to
> update their own password with Keystone v3 in Horizon in time for
> Icehouse [2]. This capability is requested by end-users quite often but
> had to be "deactivated" at the end of Havana due to some issues that are
> now resolved, thanks to the latest keystone client release. Since this
> is a library we control, hopefully this shouldn't cause too much trouble
> for packagers.
> 
> Thank you for your consideration.
> 
> Julie
> 
> 
> [1] https://review.openstack.org/#/c/83287/
> [2] https://review.openstack.org/#/c/59918/

IMHO, it's hard to imagine that Icehouse requiring a more recent version
of keystoneclient being a problem or risk for anyone.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-27 Thread Malini Kamalambal

>
>The beginning of this thread largely came from the fact that Marconi is
>clearly doing most of their QA not in an upstream way. To be integrated,
>that needs to change.


Marconi has very good test coverage within the project.
These tests guarantee functionality at a project level (i.e. the API works
as defined in our API docs).
But having good test coverage at the project level in no way implies that
we don't want to work with upstream.
We want to guarantee quality at every level, and upstream integration will
continue to be one of our major areas of focus.
We have never considered the tests within the project and those in Tempest
as an 'either-or'.
We need both - both give valuable feedback, from different
perspectives.

>
>I've seen this go down with many projects now to the point where it's a
>normal 5 stages of grief thing.
>
> * Denial - we can totally do all this in our own tree, no need to work
>in Tempest
> * Anger - what, python*client shipped a new version and we're broken!
>how dare they? And why do I need to bother working outside my own git
>tree?
> * Bargaining - let me propose a whole other approach to testing that
>doesn't need Tempest
> * Depression - that's not going to work? why won't you let me gate all
>the projects on my own thing? ok, fine I'll work with Tempest.
> * Acceptance - oh, look at that, I just managed to block a Keystone
>change that would have broken me, but now I have a Tempest test that
>*they* have to pass as well.

Marconi team is not in any of the 'grief' stages ;)
We recognize the value of Tempest & enhancing test coverage in Tempest is
an important goal for us.

>
>Is Tempest a paragon of virtue? Far from it. But right now has very
>clearly shown it's value, and I think remains the right collaboration
>point to create the contract about what we all believe OpenStack is.

We all agree that Tempest is the 'right collaboration point to create the
contract about what we all believe OpenStack is'.
Making projects more accountable for quality will in no way diminish the
value of Tempest.
On the contrary, Tempest will become more valuable than ever because of
the increased focus on integration testing.

We need the emphasis on quality to permeate every level of OpenStack.
This is a cultural change that needs to take place, and openstack-qa should
be the driver of this change.

Malini


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Doug Hellmann
On Thu, Mar 27, 2014 at 5:21 AM, Dirk Müller  wrote:

> Hi,
>
> >> When I was an operator, I regularly referred to the sample config files
> >> in the git repository.
>
> The sample config files in the git repository are tremendously useful for
> any operator and OpenStack packager. Having to generate them with a
> tox line is very cumbersome.
>

> As a minimum those config files should be part of the sdist tarball
> (aka generated during sdist time).
>
> > Do they need to be in the git repo?
>
> IMHO yes, they should go alongside the code change.
>
> > Note that because libraries now export config options (which is the
> > root of this problem!) you cannot ever know from the source all the
> > options for a service - you *must* know the library versions you are
> > running, to interrogate them for their options.
>
> The problem is that we hammer all the libraries' configuration
> options into the main config file. If we had "include" support and
> just included the libraries' config options as a separately generated
> (and possibly autogenerated) file, this problem would not
> occur, and it would avoid the gate breakages.
>

Do we need an "include" directive? We could use the existing --config-dir
option to read more than one file at runtime.
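
For illustration, a sketch assuming oslo.config's documented --config-dir
behaviour (every *.conf in the directory is parsed, in sorted order, after the
main config files; the paths are made up):

    from oslo.config import cfg

    conf = cfg.ConfigOpts()
    # Reads /etc/nova/nova.conf.d/*.conf, so generated library options can
    # live in their own file with no "include" directive needed.
    conf(['--config-dir', '/etc/nova/nova.conf.d'])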

Doug



>
>
> Thanks,
> Dirk
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient>=0.7.0

2014-03-27 Thread Sergey Lukjanov
NOTE: It's approved now.

On Thu, Mar 27, 2014 at 6:12 PM, Mark McLoughlin  wrote:
> On Thu, 2014-03-27 at 13:53 +, Julie Pichon wrote:
>> Hi,
>>
>> I would like to request a depfreeze exception to bump up the keystone
>> client requirement [1], in order to reenable the ability for users to
>> update their own password with Keystone v3 in Horizon in time for
>> Icehouse [2]. This capability is requested by end-users quite often but
>> had to be "deactivated" at the end of Havana due to some issues that are
>> now resolved, thanks to the latest keystone client release. Since this
>> is a library we control, hopefully this shouldn't cause too much trouble
>> for packagers.
>>
>> Thank you for your consideration.
>>
>> Julie
>>
>>
>> [1] https://review.openstack.org/#/c/83287/
>> [2] https://review.openstack.org/#/c/59918/
>
> IMHO, it's hard to imagine that Icehouse requiring a more recent version
> of keystoneclient being a problem or risk for anyone.
>
> Mark.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient>=0.7.0

2014-03-27 Thread Julie Pichon
On 27/03/14 14:46, Sergey Lukjanov wrote:
> NOTE: It's approved now.

Thanks everyone!

Julie

> 
> On Thu, Mar 27, 2014 at 6:12 PM, Mark McLoughlin  wrote:
>> On Thu, 2014-03-27 at 13:53 +, Julie Pichon wrote:
>>> Hi,
>>>
>>> I would like to request a depfreeze exception to bump up the keystone
>>> client requirement [1], in order to reenable the ability for users to
>>> update their own password with Keystone v3 in Horizon in time for
>>> Icehouse [2]. This capability is requested by end-users quite often but
>>> had to be "deactivated" at the end of Havana due to some issues that are
>>> now resolved, thanks to the latest keystone client release. Since this
>>> is a library we control, hopefully this shouldn't cause too much trouble
>>> for packagers.
>>>
>>> Thank you for your consideration.
>>>
>>> Julie
>>>
>>>
>>> [1] https://review.openstack.org/#/c/83287/
>>> [2] https://review.openstack.org/#/c/59918/
>>
>> IMHO, it's hard to imagine that Icehouse requiring a more recent version
>> of keystoneclient being a problem or risk for anyone.
>>
>> Mark.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Dependency freeze exception for happybase (I would like version 0.8)

2014-03-27 Thread Thomas Goirand
Hi,

Writing this mail after TTX suggested it in the patch review.

After talking with some Ceilometer people, it appeared that the cap on
happybase (eg: <0.6) was done because of a bug upstream. This bug was,
apparently, fixed in version 0.8.

In Debian, we have already version 0.7 in Sid/Testing, and I uploaded
version 0.8 in Experimental. It would be complicated and not desirable
to revert to an earlier version, and I don't think I'd do that.

For this reason, I would like to ask for a freeze exception and allow
happybase 0.8 to get in. There are two ways to do that:

-happybase>=0.4,<=0.6
+happybase>=0.8

or:

-happybase>=0.4,<=0.6
+happybase>=0.4,!=0.6,!=0.7

In this patch, I did the former:
https://review.openstack.org/#/c/82438/

However, I'd be OK with using the latter.

I'd like to ask everyone's opinion here. Is it ok to do a freeze
exception in this case? If yes (please, everyone, agree! :) ), then
would >=0.8 or >=0.4,!=0.6,!=0.7 be better?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] [horizon] Exception request: python-keystoneclient>=0.7.0

2014-03-27 Thread Thierry Carrez
Sergey Lukjanov wrote:
> NOTE: It's approved now.

Yes, I just approved it. Note that it also contains an important
security fix (see OSSA-2014-007 just published).

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Policy types

2014-03-27 Thread Tim Hinrichs
Hi Prabhakar,

I looked into this a while back.  pydatalog is a cool project, and I'd like to 
find time to study some of its algorithms and architecture a bit more.  

The reason we rolled our own version of Datalog is that we knew we'd want the 
freedom to try out non-standard Datalog algorithms, and there are algorithms we 
will likely need that aren't usually included (query rewriting, skolemization, 
conversion to DNF, etc.).  I suppose we could have modified pydatalog, but 
given that it is basically an extension to Python, it seemed there would be 
significant overhead in figuring out the code, making sure to keep our changes 
compatible with the old, etc.  Rolling our own gives us the freedom to build 
exactly what we need with minimal distractions, which is important since we 
won't really know what we need until we start getting feedback from users.  

But like I said, there are probably some good ideas in there, so if the way 
they deal with builtins is useful to us, great!  I'd just keep in mind that the 
benefit of using Datalog instead of Python is mainly that it *limits* what 
policy authors can say (without limiting it too much).  

Tim

- Original Message -
| From: "prabhakar Kudva" 
| To: "OpenStack Development Mailing List (not for usage questions)" 

| Sent: Wednesday, March 26, 2014 9:04:22 AM
| Subject: Re: [openstack-dev] [Congress] Policy types
| 
| Hi Tim, All,
|  
| As I was preparing the background for the proposal for the
| __Congress_builtins__, I came across this link, which uses Datalog with
| Python and provides data integration facilities through SQLAlchemy. The
| tool seems to have been used in production (per the web page's claim).
| Just want to run it by everyone to see whether this is connected or useful
| for our builtin capability, and whether there is anything we can glean from it:
|  
| 
| https://pypi.python.org/pypi/pyDatalog
| 
| https://sites.google.com/site/pydatalog/
| 
| https://sites.google.com/site/pydatalog/home/datalog-applications
| 
| Thanks,
|  
| Prabhakar
|  
| > Date: Tue, 18 Mar 2014 12:56:10 -0700
| > From: thinri...@vmware.com
| > To: openstack-dev@lists.openstack.org
| > CC: rajde...@vmware.com
| > Subject: Re: [openstack-dev] [Congress] Policy types
| > 
| > Hi Prabhakar,
| > 
| > Found time for a more detailed response.  Comments are inline.
| > 
| > Tim
| > 
| > - Original Message -
| > | From: "Tim Hinrichs" 
| > | To: "OpenStack Development Mailing List (not for usage questions)"
| > | 
| > | Sent: Tuesday, March 18, 2014 9:31:34 AM
| > | Subject: Re: [openstack-dev] [Congress] Policy types
| > | 
| > | Hi Prabhakar,
| > | 
| > | No IRC meeting this week.  Our IRC is every *other* week, and we had it
| > | last
| > | week.
| > | 
| > | Though there's been enough activity of late that maybe we should consider
| > | making it weekly.
| > | 
| > | I'll address the rest later.
| > | 
| > | Tim
| > | 
| > | - Original Message -
| > | | From: "prabhakar Kudva" 
| > | | To: "OpenStack Development Mailing List (not for usage questions)"
| > | | 
| > | | Sent: Monday, March 17, 2014 7:45:53 PM
| > | | Subject: Re: [openstack-dev] [Congress] Policy types
| > | | 
| > | | Hi Tim,
| > | |  
| > | | Definitely would like to learn more about the Data Integration
| > | | component
| > | | and how best to contribute. Can't find it in the latest git, perhaps we
| > | | can
| > | | discuss during the meeting tomorrow. Where can I find some code and/or
| > | | document?
| > 
| > There are no docs yet, and the only code in git is a quick-and-dirty
| > ActiveDirectory integration.  But conceptually there are two pieces of the
| > data integration story.
| > 
| > 1. We're representing the data stored in each cloud service (that is
| > pertinent to policy) as a collection of TABLES.  Each table is a
| > collection of rows that all have the same number of columns.  Each value
| > in the table is a scalar (number or string).
| > 
| > If you look at the last IRC, Rajdeep has started working on translating
| > Neutron data into this table format.  There are a couple of emails on the
| > mailing list as well where we had some discussion.  Here's his change set.
| > 
| > 
https://urldefense.proofpoint.com/v1/
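
For illustration, a minimal sketch of the table format described above
(service names and data are made up):

    # Policy-relevant service data as named tables: rows of equal width,
    # where every value is a scalar (number or string).
    tables = {
        'networks': [                       # (network_id, tenant_id, shared)
            ('net-1', 'tenant-a', 'False'),
            ('net-2', 'tenant-b', 'True'),
        ],
    }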

[openstack-dev] [Horizon] pre-running lesscpy

2014-03-27 Thread Christian Berendt
Hi.

Is it possible to pre-run lesscpy? When accessing Horizon in my devstack
environment the first time lesscpy is always running several seconds in
the background.

I tried running python manage.py compress (setting COMPRESS_OFFLINE=True in
settings.py), but afterwards lesscpy is still running in the
background when accessing Horizon for the first time.
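
For reference, the offline-compression settings mentioned above look roughly
like this (a sketch based on django-compressor's documented option names;
where exactly Horizon expects them may vary):

    # settings.py / local_settings.py - a sketch, not Horizon's shipped defaults
    COMPRESS_ENABLED = True   # compress even when DEBUG is on
    COMPRESS_OFFLINE = True   # pre-compile with: python manage.py compress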

Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance WSGI File Read Bug (Grizzly)

2014-03-27 Thread Álvaro López García
Hi there.

On Tue 17 Dec 2013 (21:23), Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:
> I was able to pin down the image upload problem today:
> 
> The Store.add file input read loop using chunkreadable throws an error on the 
> very last read. Apparently the mod_wsgi.Input behaves differently than its 
> eventlet counterpart in that it throws an error if the requested data length 
> is greater than what is available. When I replaced the chunkreadable for loop 
> with a while loop that modified the size of the last data read request, it 
> works. Does anyone know if this is a code bug or rather a WSGI configuration 
> setting that I missed?

Just for the record, we have a similar problem in Havana, so a bug has
been filled for this [1].

[1] https://bugs.launchpad.net/glance/+bug/1298462
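
For reference, a sketch of the kind of bounded-read loop Mark describes above
(illustrative, not the actual Glance code):

    def read_chunks(src, total_size, chunk_size=64 * 1024):
        """Yield chunks without asking the WSGI input for more than remains."""
        remaining = total_size
        while remaining > 0:
            data = src.read(min(chunk_size, remaining))
            if not data:
                break
            remaining -= len(data)
            yield data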

Regards,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
http://xkcd.com/571/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-27 Thread Georgy Okrokvertskhov
On Wed, Mar 26, 2014 at 11:25 AM, Keith Bray wrote:

>
>
> On 3/25/14 11:55 AM, "Ruslan Kamaldinov"  wrote:
>
> >* Murano DSL will focus on:
> >  a. UI rendering
>
>
> One of the primary reasons I am opposed to using a different DSL/project
> to accomplish this is that the person authoring the HOT template is
> usually the system architect, and this is the same person who has the
> technical knowledge to know what technologies you can swap in/out and
> still have that system/component work, so they are also the person who
> can/should define the "rules" of what component building blocks can and
> can't work together.  There has been an overwhelmingly strong preference
> from the system architects/DevOps/ApplicationExperts I [1] have talked to
> for the ability to have control over those rules directly within the HOT
> file or immediately along-side the HOT file but feed the whole set of
> files to a single API endpoint.  I'm not advocating that this extra stuff
> be part of Heat Engine (I understand the desire to keep the orchestration
> engine clean)... But from a barrier to adoption point-of-view, the extra
> effort for the HOT author to learn another DSL and use yet another system
> (or even have to write multiple files) should not be underestimated.
> These people are not OpenStack developers, they are DevOps folks and
> Application Experts.  This is why the Htr[2] project was proposed and
> threads were started to add extra data to HOT template that Heat engine
> could essentially ignore, but would make defining UI rendering and
> component connectivity easy for the HOT author.
>

I think that this is the wrong way to go. First of all, there is an issue with
separation of concerns, as you will have one super-template which will
describe the whole world. The UI part is only one of the use cases; when
Murano needs some extra parameter, will it go into HOT? When Solum needs to
define an application build sequence to make an app binary from source, will
this also go into HOT?

I think the reason for such talk is the fact that the Heat engine accepts
only a single file as input, which is completely understandable, as the Heat
engine is designed to process a template which describes a set of resources
and their relations.

When we talk with our customers who want to use Murano, they are happy to have
multiple files, keeping each one small and simple. In Murano, the UI definition
is a separate file, and an application developer can have multiple UI file
definitions to quickly change the look and feel without changing the deployment
template part of the application at all. Heat supports template nesting, and it
is not obvious how the UI for this case would be rendered, as the nesting
is processed by the Heat engine and the final set of resources is
produced by the engine.

I don't see a big difference in learning between one huge DSL which covers
all possible aspects and a set of smaller, simpler DSLs each focused on a
specific area. Having one big DSL is worse, as the user can construct
complex structures mixing the DSL functions from different areas. It would be
a nightmare to create an engine which validates and processes such a
template.


>
> I'm all for contributions to OpenStack, so I encourage the Murano team to
> continue doing its thing if they find it adds value to themselves or
> others. However, I'd like to see the Orchestration program support the
> surrounding things the users of the Heat engine want/need from their cloud
> system instead of having those needs met by separate projects seeking
> incubation. There are technical ways to keep the core engine "clean" while
> having the Orchestration Program API Service move up the stack in terms of
> cloud user experience.
>

I just think this is conceptually wrong. The whole idea of OpenStack is to
have a clean set of APIs/components focused on specific functionality. Cloud
users want to have VMs with attached volumes and networks, but that does not
mean it should be a single service in OpenStack. This is a strength of
OpenStack: proper separation of concerns and having multiple services is the
right way, allowing us to move forward fast and making the development process
for each service very effective due to its narrow scope and functionality
focus.

I am glad to hear that you want to have something higher up the stack than
the currently available functionality. I think this supports our observation
of a huge demand for such higher-level functionality in OpenStack. At the
same time, I am against the proposed way of doing that by extending the
Orchestration program's mission and moving it to upper levels. Having clean
layers, each responsible for a particular area, is a benefit and a strength of
OpenStack from its global architecture viewpoint.


>
> >  b. HOT generation
> >  c. Setup other services (like put Mistral tasks to Mistral and bind
> > them with events)
> >
> >Speaking about new DSL for Murano. We're speaking about Application
> >Lifecycle
> >Management. There are a lot of existing 

[openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Jaromir Coufal

Hi OpenStackers,

The user interface which manages the OpenStack infrastructure is 
currently named Tuskar-UI for historical reasons. Tuskar itself is a 
small service which provides the logic for generating and managing Heat 
templates and helps the user model and manage his deployment. The user 
interface, which is the subject of this call, is based on the TripleO 
approach and resembles the OpenStack Dashboard (Horizon) in the way it 
consumes other services. The UI consumes not just the Tuskar API, but 
also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc., in order 
to design, deploy, manage and monitor your OpenStack deployments. 
Because of this I find the name Tuskar-UI improper (it would be closer 
to say TripleO-UI) and I would like the community to help find a better 
name for it. After brainstorming, we can start voting on the final 
project's name.


https://etherpad.openstack.org/p/openstack-management-ui-names

Thanks
-- Jarda (jcoufal)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Feature about QEMU Assisted online-extend volume

2014-03-27 Thread Trump.Zhang
The online-extend volume feature aims to extend a Cinder volume which is
in use, and to grow the corresponding disk in the instance without stopping
the instance.


The background is that John Griffith has proposed a BP ([1]) aiming to
provide a Cinder extension to enable extending in-use/attached volumes.
After discussing with Paul Marshall, the assignee of this BP, it turns out he
currently focuses only on the OpenVZ driver, so I want to take on the
libvirt/qemu part based on his current work.

Whether a volume can be extended is determined by Cinder. However, if we
want the capacity of the corresponding disk in the instance to grow, Nova
must be involved.

Libvirt provides "block_resize" interface for this situation. For QEMU, the
internal workflow for block_resize as follows:

1) Drain all IO of this disk from the instance
2) If the backend of the disk is a normal file, such as raw, qcow2, etc., qemu
will do the *extend* work
3) If the backend of the disk is a block device, qemu will first check whether
there is enough free space on the device, and only if so will it do the
*extend* work.
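
As a rough sketch of the entry point involved (python-libvirt; the domain and
disk names are made up):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # Grow the disk attached as vdb to 20 GiB (blockResize takes KiB by default).
    dom.blockResize('vdb', 20 * 1024 * 1024)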

So I think "online-extend" volume will need QEMU assistance, which is
similar to BP [2].

Do you think we should introduce this feature?

[1]
https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
[2] https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-27 Thread Ruslan Kamaldinov
On Thu, Mar 27, 2014 at 7:42 PM, Georgy Okrokvertskhov
 wrote:
> Given that I don't see the huge overlap here with Murano functionality as
> even if Solum stated that as a part of solution Heat template will be
> generated it does not necessarily mean that Solum itself will do this. From
> what is listed on the Solum page, in Solum sense - ALM is a  way how the
> application build from source promoted between different CI\CD environments
> Dev, QA, Stage, Production. Solum can use other service to do this keeping
> its own focus on the target area. Specifically to the environments - Solum
> can use Murano environments which for Murano is just a logical unity of
> multiple applications. Solum can add CI\CD specific stuff on top of it
> keeping using Murano API for the environment management under the hood.
> Again, this is a typical OpenStack approach to have different services
> integrated to achieve the larger goal, keeping services itself very focused.


Folks,

I'd like to call for a cross-project work group to identify approaches for
application description and management in the OpenStack cloud. As this thread
shows, there are several parties involved - Heat, Mistral, Murano, Solum (did I
miss anyone?) - and there is no clear vision among us of where and how we
should describe things on top of Heat.

We could spend another couple of months in
debates, but I feel that a focused group of dedicated people (i.e. 2 from each
project) would progress much faster and be much more productive.

What I'd suggest to expect from this joint group:
* Identify how different aspects of applications and their lifecycle can be
  described and how they can coexist in OpenStack
* Define a multi-layered structure to keep each layer with clear focus and set
  of responsibilities
* The end goal of the work for this group will be a document with a clear vision
  of the areas higher up the stack than Heat and how OpenStack should address
  them. This vision is not clear to the TC right now, and that is the reason
  they say that the step Murano took is too big
* Agree on further direction
* Come to ATL summit, agree again and drink beer

Focused group would require additional communication channels:
* Calls (Google Hangouts for instance)
* Additional IRC meetings
* Group work on design documents


From the Murano project I'd like to propose the following participants:
* Stan Lagun (sla...@mirantis.com)
* Ruslan Kamaldinov (rkamaldi...@mirantis.com)

Do colleagues from Heat, Solum and Mistral feel the same way and would like to
support this movement and delegate their participants to this working group?
Is this idea viable?


--
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Why is marconi a queue implementation vs a provisioning API?

2014-03-27 Thread Matt Asay
In response to Gil Yehuda's comments on MongoDB and the AGPL (here 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030510.html), I 
understand the concern about the AGPL. But in this case it's completely, 
absolutely unfounded. As mentioned earlier, MongoDB Inc. wants people to use 
MongoDB, the project. That's why we wrapped the server code (AGPL) in an Apache 
license (drivers). Basically, for 99.999% of the world's population, you can 
use MongoDB under the cover of the Apache license. If you'd like more 
assurance, we're happy to provide it. 

We want people using the world's most popular NoSQL database with the world's 
most popular open source cloud (OpenStack). I think our track record on this is 
100% in the affirmative.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-27 Thread Nader Lahouti
Hi Andre,

Thanks for your reply.

There is no existing network. The scenario is the first-time creation of a
network with an extension. Consider a mechanism driver that adds an
attribute (through extensions) to the network resource. When the user creates a
network, the attribute is set and is present in the 'network' parameter
when create_network() is called in Ml2Plugin.
But when create_network_pre/post_commit is called, the attribute won't be
available to the mechanism driver, because the attribute is not included in
the network object passed to the MD - as I mentioned in my previous email, the
'result' does not have the new attribute.
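
In other words, the suggestion boils down to passing the request body into the
context in Ml2Plugin.create_network - roughly this one-line change against the
snippet quoted below:

    # Sketch: pass the original request body alongside the DB result so
    # mechanism drivers can also see extension attributes.
    mech_context = driver_context.NetworkContext(self, context, result,
                                                 original_network=net_data)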


Thanks,
Nader.








On Wed, Mar 26, 2014 at 3:52 PM, Andre Pech wrote:

> Hi Nader,
>
> When I wrote this, the intention was that original_network only really
> makes sense during an update_network call (ie when there's an existing
> network that you are modifying). In a create_network call, the assumption
> is that no network exists yet, so there is no "original network" to set.
>
> Can you provide a bit more detail on the case where there's an existing
> network when create_network is called? Sorry, I didn't totally follow when
> this would happen.
>
> Thanks
> Andre
>
>
> On Tue, Mar 25, 2014 at 8:45 AM, Nader Lahouti wrote:
>
>> Hi All,
>>
>> In the current Ml2Plugin code when 'create_network' is called, as shown
>> below:
>>
>>
>>
>> def create_network(self, context, network)
>>
>> net_data = network['network']
>>
>> ...
>>
>> session = context.session
>>
>> with session.begin(subtransactions=True):
>>
>> self._ensure_default_security_group(context, tenant_id)
>>
>> result = super(Ml2Plugin, self).create_network(context,
>> network)
>> ...
>>
>> mech_context = driver_context.NetworkContext(self, context,
>> result)
>>
>> self.mechanism_manager.create_network_precommit(mech_context)
>>
>> ...
>>
>>
>>
>> the original_network parameter is not set (the default is None) when
>> instantiating NetworkContext, and as a result the mech_context has only the
>> value of network object returned from super(Ml2Plugin,
>> self).create_network().
>>
>> This causes issue when a mechanism driver needs to use the original
>> network parameters (given to the create_network), specially when extension
>> is used for the network resources.
>>
>> (The 'result' only has the network attributes without extension which is
>> used to set the '_network' in the NetwrokContext object).
>>
>> Even using  extension function registration using
>>
>> db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(...) won't
>> help as the network object that is passed to the registered function does
>> not include the extension parameters.
>>
>>
>> Is there any reason that the original_network is not set when
>> initializing the NetworkContext? Would that cause any issue to set it to
>> 'net_data' so that any mechanism driver can use original network parameters
>> as they are available when create_network is called?
>>
>>
>> Appreciate your comments.
>>
>>
>> Thanks,
>>
>> Nader.
>>
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-27 Thread Shlomi Sasson
LIO already supports iSER (starting with kernel 3.10), and its implementation
can accept RDMA or TCP in the same target.

-Original Message-
From: Eric Harney [mailto:ehar...@redhat.com] 
Sent: Wednesday, March 26, 2014 20:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support 
other iSCSI transports besides TCP

On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

> I am not sure what will be the right approach to handle this, I already have 
> the code, should I open a bug or blueprint to track this issue?
> 
> Best Regards,
> Shlomi
> 
>

A blueprint around this would be appreciated.  I have had similar thoughts 
around this myself, that these should be options for the LVM iSCSI driver 
rather than different drivers.

These options also mirror how we can choose between tgt/iet/lio in the LVM 
driver today.  I've been assuming that RDMA support will be added to the LIO 
driver there at some point, and this seems like a nice way to enable that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature about QEMU Assisted online-extend volume

2014-03-27 Thread Duncan Thomas
It sounds like a useful feature, and there are a growing number of
touch points for libvirt assisted cinder features. A summit session to
discuss how that interface should work (hopefully get a few nova folks
there as well, the interface has two ends) might be a good idea

On 27 March 2014 16:15, Trump.Zhang  wrote:
> Online-extend volume feature aims to extend a cinder volume which is in-use,
> and make the corresponding disk in instance extend without stop the
> instance.
>
>
> The background is that, John Griffith has proposed a BP ([1]) aimed to
> provide an cinder extension to enable extend of in-use/attached volumes.
> After discussing with Paul Marshall, the assignee of this BP, he only focus
> on OpenVZ driver currently, so I want to take the work of libvirt/qemu based
> on his current work.
>
> A volume can be extended or not is determined by Cinder. However, if we want
> the capacity of corresponding disk in instance extends, Nova must be
> involved.
>
> Libvirt provides "block_resize" interface for this situation. For QEMU, the
> internal workflow for block_resize as follows:
>
> 1) Drain all IO of this disk from instance
> 2) If the backend of disk is a normal file, such as raw, qcow2, etc, qemu
> will do the *extend* work
> 3) If the backend of disk is block device, qemu will first judge if there is
> enough free space on the device, if only so, it will do the *extend* work.
>
> So I think the "online-extend" volume will need QEMU Assisted, which is
> simlar to BP [2].
>
> Do you think we should introduce this feature?
>
> [1]
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
> [2] https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-27 Thread Shlomi Sasson
Of course I'm aware of that... I'm the one who pushed it there in the first
place :)
But it was not the best way to handle this. I think that the right/better
approach is as suggested.

I'm planning to remove the existing ISERDriver code; this will eliminate
significant code and class duplication, and will work with all the iSCSI
vendors who support both TCP and RDMA without the need to modify their plug-in
drivers.
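
The end state could then look something like this in cinder.conf (a sketch;
the iscsi_protocol option name is illustrative and would be defined as part of
the blueprint):

    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    iscsi_helper = lioadm
    # Illustrative: choose the transport instead of a separate driver class.
    iscsi_protocol = iser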


From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Wednesday, March 26, 2014 22:47
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support 
other iSCSI transports besides TCP



On Wed, Mar 26, 2014 at 12:18 PM, Eric Harney 
mailto:ehar...@redhat.com>> wrote:
On 03/25/2014 11:07 AM, Shlomi Sasson wrote:

> I am not sure what will be the right approach to handle this, I already have 
> the code, should I open a bug or blueprint to track this issue?
>
> Best Regards,
> Shlomi
>
>
A blueprint around this would be appreciated.  I have had similar
thoughts around this myself, that these should be options for the LVM
iSCSI driver rather than different drivers.

These options also mirror how we can choose between tgt/iet/lio in the
LVM driver today.  I've been assuming that RDMA support will be added to
the LIO driver there at some point, and this seems like a nice way to
enable that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I'm open to improving this, but I am curious - you know there's an ISER
subclass in iscsi for Cinder currently, right?
http://goo.gl/kQJoDO
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] pre-running lesscpy

2014-03-27 Thread Sascha Peilicke
On Thursday 27 March 2014 16:32:37 Christian Berendt wrote:
> Hi.
> 
> Is it possible to pre-run lesscpy? When accessing Horizon in my devstack
> environment the first time lesscpy is always running several seconds in
> the background.

I've started to investigate Cython for lesscpy. That should speed things up. 
0.10.1 is slightly faster already, but it was deferred to Juno (See 
https://review.openstack.org/#/c/70619/)

> Tried to run python manage.py compress (setting COMPRESS_OFFLINE=True in
> the settings.py), but afterwards lesscpy is still running in the
> background when accessing Horizon the first time.

This is odd. I'll have a look...
-- 
Viele Grüße,
Sascha Peilicke

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Dougal Matthews

On 27/03/14 15:56, Jaromir Coufal wrote:

Hi OpenStackers,

The user interface which manages the OpenStack infrastructure is
currently named Tuskar-UI for historical reasons. Tuskar itself is a
small service which provides the logic for generating and managing Heat
templates and helps the user model and manage his deployment. The user
interface, which is the subject of this call, is based on the TripleO
approach and resembles the OpenStack Dashboard (Horizon) in the way it
consumes other services. The UI consumes not just the Tuskar API, but
also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc., in order
to design, deploy, manage and monitor your OpenStack deployments.
Because of this I find the name Tuskar-UI improper (it would be closer
to say TripleO-UI) and I would like the community to help find a better
name for it. After brainstorming, we can start voting on the final
project's name.

https://etherpad.openstack.org/p/openstack-management-ui-names


Thanks for starting this.

As a side but related note, I think we should rename the Tuskar client
to whatever name the Tuskar UI gets called. The client will eventually
have feature parity with the UI and thus will have the same naming
issues if it is to remain the "tuskarclient".

Dougal


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Icehouse RC1 available

2014-03-27 Thread Thierry Carrez
Hello everyone,

Cinder just published its first Icehouse release candidate. Congrats to
all Cinder developers for reaching this key milestone. 51 bugs were
fixed in Cinder since feature freeze, 3 weeks ago.

The RC1 is available for download at:
https://launchpad.net/cinder/icehouse/icehouse-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.1 final
version on April 17. You are therefore strongly encouraged to test and
validate this tarball !

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/cinder/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branch of Cinder is now open for Juno
development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Day, Phil
Sorry if I'm coming late to this thread, but why would you define AZs to cover 
"orthogonal zones"?

AZs are a very specific form of aggregate - they provide a particular isolation 
semantic between the hosts (i.e. physical hosts are never in more than one AZ) 
- hence the "availability" in the name.

AZs are built on aggregates, and yes, aggregates can overlap and aggregates are 
used for scheduling.

So if you want to schedule on features as well as (or instead of) physical 
isolation, then you can already:

- Create an aggregate that contains "hosts with fast CPUs"
- Create another aggregate that includes "hosts with SSDs"
- Write (or configure in some cases) schedule filters that look at something in 
the request (such as schedule hint, an image property, or a flavor extra_spec) 
so that the scheduler can filter on those aggregates

nova boot --availability-zone az1 --scheduler-hint want-fast-cpu 
--scheduler-hint want-ssd  ...

nova boot --availability-zone az1 --flavor 1000
(where flavor 1000 has extra spec that says it needs fast cpu and ssd)

But there is no need that I can see to make AZs overlap just to do the same 
thing - that would break what everyone (including folks used to working with 
AWS) expects from an AZ.
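
For completeness, the admin-side setup behind the flavor route might look like
this with python-novaclient (a sketch; the credentials, flavor ID and scoped
extra-spec keys assume the AggregateInstanceExtraSpecsFilter conventions):

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')
    # Group the capable hosts into an aggregate and tag it with metadata...
    agg = nova.aggregates.create('fast-cpu-ssd-hosts', None)
    nova.aggregates.set_metadata(agg.id, {'fastcpu': 'true', 'ssd': 'true'})
    # ...then make the flavor demand that metadata.
    flavor = nova.flavors.get(1000)
    flavor.set_keys({'aggregate_instance_extra_specs:fastcpu': 'true',
                     'aggregate_instance_extra_specs:ssd': 'true'})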




> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 27 March 2014 13:18
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
> 
> On 03/27/2014 05:03 AM, Khanh-Toan Tran wrote:
> 
> > Well, perhaps I didn't make it clearly enough. What I intended to say
> > is that user should be able to select a set of AZs in his request,
> > something like :
> >
> >  nova  boot   --flavor 2  --image ubuntu   --availability-zone
> > AZ1  --availability-zone AZ2  vm1
> 
> I think it would make more sense to make the availability-zone argument
> take a comma-separated list of zones.
> 
> nova boot --flavor 2 --image ubuntu --availability-zone AZ1,AZ2 vm1
> 
> 
> Just to clarify, in a case like this we're talking about using the 
> intersection of
> the two zones, right?  That's the only way that makes sense when using
> orthogonal zones like "hosts with fast CPUs" and "hosts with SSDs".
> 
> Chris
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Jiří Stránský

On 27.3.2014 18:21, Dougal Matthews wrote:

On 27/03/14 15:56, Jaromir Coufal wrote:

Hi OpenStackers,

The user interface which manages the OpenStack infrastructure is
currently named Tuskar-UI for historical reasons. Tuskar itself is a
small service which provides the logic for generating and managing Heat
templates and helps the user model and manage his deployment. The user
interface, which is the subject of this call, is based on the TripleO
approach and resembles the OpenStack Dashboard (Horizon) in the way it
consumes other services. The UI consumes not just the Tuskar API, but
also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc., in order
to design, deploy, manage and monitor your OpenStack deployments.
Because of this I find the name Tuskar-UI improper (it would be closer
to say TripleO-UI) and I would like the community to help find a better
name for it. After brainstorming, we can start voting on the final
project's name.

https://etherpad.openstack.org/p/openstack-management-ui-names


Thanks for starting this.

As a side, but related note, I think we should rename the Tuskar client
to whatever name the Tuskar UI gets called. The client will eventually
have feature parity with the UI and thus will have the same naming
issues if it is to remain the "tuskarclient"

Dougal


It might be good to do a similar thing to what Keystone does. We could keep 
python-tuskarclient focused only on Python bindings for Tuskar (but keep 
whatever CLI we already implemented there, for backwards compatibility), 
and implement the CLI as a plugin to OpenStackClient. E.g. when you want to 
access Keystone v3 API features (e.g. the domains resource), 
python-keystoneclient provides only Python bindings; it no longer 
provides a CLI.


I think this is a nice approach because it allows the python-*client to 
stay thin for inclusion within Python apps, and there's a common 
pluggable CLI for all projects (one top-level command for the user). At 
the same time it would solve our naming problems (tuskarclient would 
stay, because it would be focused on Tuskar only) and we could reuse the 
already implemented OpenStackClient plugins for anything on the 
undercloud.


We previously raised that OpenStackClient has more plugins (subcommands) 
than we need on the undercloud and that this could confuse users, but I'd 
say it might not be troublesome enough to justify avoiding the 
OpenStackClient way. (Even if we decide that this is a big problem after 
all and an OSC plugin is not enough, we should still probably aim for 
separating the TripleO CLI and tuskarclient in the future.)


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Day, Phil
> 
> The need arises when you need a way for both zones to be used for
> scheduling when no specific zone is specified. The only way to do that is
> either to have an AZ which is a superset of the two AZs, or for
> default_schedule_zone to take a list of zones instead of just one.

If you don't configure a default_schedule_zone and don't specify an 
availability_zone in the request, then I thought that would make the AZ 
filter in effect ignore AZs for that request.  Isn't that what you need?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Jay Dobies

It might be good to do a similar thing as Keystone does. We could keep
python-tuskarclient focused only on Python bindings for Tuskar (but keep
whatever CLI we already implemented there, for backwards compatibility),
and implement CLI as a plugin to OpenStackClient. E.g. when you want to
access Keystone v3 API features (e.g. domains resource), then
python-keystoneclient provides only Python bindings, it no longer
provides CLI.


+1

I've always liked the idea of separating out the bindings from the CLI 
itself.




I think this is a nice approach because it allows the python-*client to
stay thin for including within Python apps, and there's a common
pluggable CLI for all projects (one top level command for the user). At
the same time it would solve our naming problems (tuskarclient would
stay, because it would be focused on Tuskar only) and we could reuse the
already implemented other OpenStackClient plugins for anything on
undercloud.

We previously raised that OpenStackClient has more plugins (subcommands)
that we need on undercloud and that could confuse users, but i'd say it
might not be as troublesome to justify avoiding the OpenStackClient way.
(Even if we decide that this is a big problem after all and OSC plugin
is not enough, we should still probably aim for separating TripleO CLI
and Tuskarclient in the future.)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Chris Friesen

On 03/27/2014 11:48 AM, Day, Phil wrote:

Sorry if I'm coming late to this thread, but why would you define AZs
to cover "othognal zones" ?


See Vish's first message.


AZs are a very specific form of aggregate - they provide a particular
isolation semantic between the hosts (i.e. physical hosts are never
in more than one AZ) - hence the "availability" in the name.


That's why I specified orthogonal.  If you're looking at different 
resources then it makes sense to have one host be in different AZs 
because the AZs are essentially in different namespaces.


So you could have "hosts in server room A" vs "hosts in server room B". 
 Or "hosts on network switch A" vs "hosts on network switch B".  Or 
"hosts with SSDs" vs "hosts with disks".  Then you could specify you 
want to boot an instance in server room A, on switch B, on a host with SSDs.



AZs are built on aggregates, and yes aggregates can overlap and
aggreagtes are used for scheduling.

So if you want to schedule on features as well as (or instead of)
physical isolation, then you can already:

- Create an aggregate that contains "hosts with fast CPUs" - Create
another aggregate that includes "hosts with SSDs" - Write (or
configure in some cases) schedule filters that look at something in
the request (such as schedule hint, an image property, or a flavor
extra_spec) so that the scheduler can filter on those aggregates

nova boot --availability-zone az1 --scheduler-hint want-fast-cpu
--scheduler-hint want-ssd  ...


Does this actually work?  The docs only describe setting the metadata on 
the flavor, not as part of the boot command.



nova boot --availability-zone az1 --flavor 1000 (where flavor 1000
has extra spec that says it needs fast cpu and ssd)

But there is no need that I can see to make AZs overlapping just to
so the same thing - that would break what everyone (including folks
used to working with AWS) expects from an AZ



As an admin user you can create arbitrary host aggregates, assign 
metadata, and have flavors with extra specs to look for that metadata.


But as far as I know there is no way to match host aggregate information 
on a per-instance basis.


Also, unless things have changed since I last looked at it, as a regular 
user you can't create new flavors, so the only way to associate an 
instance with a host aggregate is via an availability zone.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi] Backend options [was Re: Why is marconi a queue implementation vs a provisioning API?]

2014-03-27 Thread Kurt Griffiths
Matt Asay wrote:
> We want people using the world's most popular NoSQL database with the
>world's most popular open source cloud (OpenStack). I think our track
>record on this is 100% in the affirmative.

So, I think it is pretty clear that there are lots of people who would
like to use MongoDB and aren’t concerned about the way it is licensed.
However, there are also lots of people who would prefer another
production-ready option.

I think Marconi has room for 2-3 more drivers that are supported by the
team for production deployments. Two of the most promising candidates are
Redis and AMQP (specific broker TBD). Cassandra has also been proposed in
the past, but I don't think it’s a viable option due to the way deletes
are implemented[1].

If anyone has some other options that you think could be a good fit,
please make some suggestions and help us determine the future of
Marconi.
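
To make the Redis option concrete, here is a very rough sketch of the shape
a Redis-backed store could take (illustrative only - not a proposed driver
interface - and it assumes the redis-py client):

    import json
    import redis

    class RedisQueue(object):
        def __init__(self, name, host='localhost'):
            self._key = 'marconi:queue:%s' % name
            self._redis = redis.StrictRedis(host=host)

        def post(self, message):
            # A real driver would also track per-message TTLs, claims,
            # and markers; this shows only the basic data path.
            self._redis.rpush(self._key, json.dumps(message))

        def pop(self, timeout=1):
            # blpop returns a (key, value) tuple, or None on timeout.
            item = self._redis.blpop(self._key, timeout=timeout)
            return json.loads(item[1]) if item else None

The hard part is claim semantics and durability tuning, not the primitives.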

---
Kurt G. | @kgriffs

[1]: http://goo.gl/k7Bbv1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data

2014-03-27 Thread Illia Khudoshyn
Hi, Openstackers,

I'm currently working on adding bulk data load functionality to MagnetoDB.
This functionality implies inserting huge amounts of data (billions of
rows, gigabytes of data). The data being uploaded is a set of JSON
documents (for now). The question I'm interested in is the data transport
mechanism. For now I do a streaming HTTP POST request from the client side,
with gevent.pywsgi on the server side.

Could anybody suggest a (better?) approach for the transport, please?
What are the best practices for that?
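
For concreteness, the current approach amounts to something like the sketch
below - chunked transfer encoding with one JSON document per line, so
neither side buffers the whole payload (illustrative code, not the actual
MagnetoDB patch; the endpoint URL is made up):

    import json
    import requests
    from gevent.pywsgi import WSGIServer

    # Client: requests switches to chunked transfer encoding when given
    # a generator, so the file is streamed line by line.
    def rows(path):
        with open(path, 'rb') as f:
            for line in f:
                yield line

    # requests.post('http://magnetodb.example/v1/bulk',
    #               data=rows('rows.json'))

    # Server: read the body incrementally instead of slurping it.
    def app(environ, start_response):
        count = 0
        body = environ['wsgi.input']
        for line in iter(body.readline, b''):
            json.loads(line)  # validate/insert one row at a time
            count += 1
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['loaded %d rows\n' % count]

    # WSGIServer(('0.0.0.0', 8080), app).serve_forever()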

Thanks in advance.

-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com 

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-27 Thread Kurt Griffiths
P.S. - Any particular reason this script wasn’t written in Python? Seems
like that would avoid a lot of cross-platform gotchas.

On 3/26/14, 11:48 PM, "Sergey Lukjanov"  wrote:

>FWIW It's working on OS X, but gnu-getopt should be installed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] sample config files should be ignored in git...

2014-03-27 Thread Kurt Griffiths
Ah, that is good to know. Is this documented somewhere?

On 3/26/14, 11:48 PM, "Sergey Lukjanov"  wrote:

>FWIW It's working on OS X, but gnu-getopt should be installed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Day, Phil
> -Original Message-
> From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> Sent: 26 March 2014 20:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
> 
> 
> On Mar 26, 2014, at 11:40 AM, Jay Pipes  wrote:
> 
> > On Wed, 2014-03-26 at 09:47 -0700, Vishvananda Ishaya wrote:
> >> Personally I view this as a bug. There is no reason why we shouldn't
> >> support arbitrary grouping of zones. I know there is at least one
> >> problem with zones that overlap regarding displaying them properly:
> >>
> >> https://bugs.launchpad.net/nova/+bug/1277230
> >>
> >> There is probably a related issue that is causing the error you see
> >> below. IMO both of these should be fixed. I also think adding a
> >> compute node to two different aggregates with azs should be allowed.
> >>
> >> It also might be nice to support specifying multiple zones in the
> >> launch command in these models. This would allow you to limit booting
> >> to an intersection of two overlapping zones.
> >>
> >> A few examples where these ideas would be useful:
> >>
> >> 1. You have 3 racks of servers and half of the nodes from each rack
> >> plugged into a different switch. You want to be able to specify to
> >> spread across racks or switches via an AZ. In this model you could
> >> have a zone for each switch and a zone for each rack.
> >>
> >> 2. A single cloud has 5 racks in one room in the datacenter and 5
> >> racks in a second room. You'd like to give control to the user to
> >> choose the room or choose the rack. In this model you would have one
> >> zone for each room, and smaller zones for each rack.
> >>
> >> 3. You have a small 3 rack cloud and would like to ensure that your
> >> production workloads don't run on the same machines as your dev
> >> workloads, but you also want to use zones spread workloads across the
> >> three racks. Similarly to 1., you could split your racks in half via
> >> dev and prod zones. Each one of these zones would overlap with a rack
> >> zone.
> >>
> >> You can achieve similar results in these situations by making small
> >> zones (switch1-rack1 switch1-rack2 switch1-rack3 switch2-rack1
> >> switch2-rack2 switch2-rack3) but that removes the ability to decide
> >> to launch something with less granularity. I.e. you can't just
> >> specify 'switch1' or 'rack1' or 'anywhere'
> >>
> >> I'd like to see all of the following work:
> >>   nova boot ... (boot anywhere)
> >>   nova boot --availability-zone switch1 ... (boot in switch1 zone)
> >>   nova boot --availability-zone rack1 ... (boot in rack1 zone)
> >>   nova boot --availability-zone switch1,rack1 ... (boot
> >
> > Personally, I feel it is a mistake to continue to use the Amazon
> > concept of an availability zone in OpenStack, as it brings with it the
> > connotation from AWS EC2 that each zone is an independent failure
> > domain. This characteristic of EC2 availability zones is not enforced
> > in OpenStack Nova or Cinder, and therefore creates a false expectation
> > for Nova users.
> >
> > In addition to the above problem with incongruent expectations, the
> > other problem with Nova's use of the EC2 availability zone concept is
> > that availability zones are not hierarchical -- due to the fact that
> > EC2 AZs are independent failure domains. Not having the possibility of
> > structuring AZs hierarchically limits the ways in which Nova may be
> > deployed -- just see the cells API for the manifestation of this
> > problem.
> >
> > I would love it if the next version of the Nova and Cinder APIs would
> > drop the concept of an EC2 availability zone and introduce the concept
> > of a generic region structure that can be infinitely hierarchical in
> > nature. This would enable all of Vish's nova boot commands above in an
> > even simpler fashion. For example:
> >
> > Assume a simple region hierarchy like so:
> >
> >  regionA
> >  /  \
> > regionBregionC
> >
> > # User wants to boot in region B
> > nova boot --region regionB
> > # User wants to boot in either region B or region C nova boot --region
> > regionA
> 
> I think the overlapping zones allows for this and also enables additional use
> cases as mentioned in my earlier email. Hierarchical doesn't work for the
> rack/switch model. I'm definitely +1 on breaking from the amazon usage of
> availability zones but I'm a bit leery to add another parameter to the create
> request. It is also unfortunate that region already has a meaning in the
> amazon world which will add confusion.
> 
> Vish
>
Ok, got far enough back down my stack to understand the drive here, and I kind 
of understand the use case, but I think what's missing is that currently we 
only allow for one group of availability zones.

I can see why you would want them to overlap in a certain way - i.e. a "rack 
based" zone could overlap with a "switch based" zone - but I still don't want 
any overlap wit

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-27 Thread Adrian Otto


> On Mar 27, 2014, at 11:27 AM, "Ruslan Kamaldinov"  
> wrote:
> 
> On Thu, Mar 27, 2014 at 7:42 PM, Georgy Okrokvertskhov
>  wrote:
>> Given that I don't see the huge overlap here with Murano functionality as
>> even if Solum stated that as a part of solution Heat template will be
>> generated it does not necessarily mean that Solum itself will do this. From
>> what is listed on the Solum page, in Solum sense - ALM is a  way how the
>> application build from source promoted between different CI\CD environments
>> Dev, QA, Stage, Production. Solum can use other service to do this keeping
>> its own focus on the target area. Specifically to the environments - Solum
>> can use Murano environments which for Murano is just a logical unity of
>> multiple applications. Solum can add CI\CD specific stuff on top of it
>> keeping using Murano API for the environment management under the hood.
>> Again, this is a typical OpenStack approach to have different services
>> integrated to achieve the larger goal, keeping services itself very focused.
> 
> 
> Folks,
> 
> I'd like to call for a cross-project work group to identify approaches for
> application description and management in the OpenStack cloud. As this thread
> shows, there are several parties involved - Heat, Mistral, Murano, Solum
> (did I miss anyone?) - and there is no clear vision among us of where and
> how we should describe things on top of Heat.
> 
> We could spend another couple of months in
> debates, but I feel that a focused group of dedicated people (i.e. 2 from
> each project) would progress much faster and be much more productive.
> 
> What I'd suggest to expect from this joint group:
> * Identify how different aspects of applications and their lifecycle can be
>  described and how they can coexist in OpenStack
> * Define a multi-layered structure that keeps each layer with a clear
>   focus and set of responsibilities
> * The end goal of the work for this group will be a document with a clear
>   vision of the areas higher up the stack than Heat and how OpenStack
>   should address them. This vision is not clear to the TC now, and that is
>   the reason they say that Murano took too big a step
> * Agree on further direction
> * Come to ATL summit, agree again and drink beer
> 
> Focused group would require additional communication channels:
> * Calls (Google Hangouts for instance)
> * Additional IRC meetings
> * Group work on design documents
> 
> 
> From Murano project I'd like to propose the following participants:
> * Stan Lagun (sla...@mirantis.com)
> * Ruslan Kamaldinov (rkamaldi...@mirantis.com)
> 
> Do colleagues from Heat, Solum and Mistral feel the same way and would like to
> support this movement and delegate their participants to this working group?
> Is this idea viable?

I will participate on behalf of Solum: Adrian Otto 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] dhcp port creation

2014-03-27 Thread Carl Baldwin
Hanish,

I have observed this behavior as well. Without digging in to recall
the details, I believe that when the DHCP agent restarts, it makes a
call that schedules any unscheduled networks to it.  Upon scheduling
the network to the agent, the port is created by the agent.

Without restarting the DHCP agent, the normal behavior is to wait
until a VM port is spawned on the network to schedule the network to
an agent.

I think the answer to your question is that this is expected but may
be surprising to some.

Carl

On Thu, Mar 27, 2014 at 6:30 AM, hanish gogada
 wrote:
> Hi all,
>
> I tried out the following scenario on OpenStack Grizzly: I created a
> network and a subnet on it. I attached this subnet to the router (I did
> not launch any VMs on it). I restarted the L3 and DHCP agents, and this
> created a DHCP port on that network. Though there is no functionality
> breakage, is this behavior expected?
>
>  thanks & regards
> hanish
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Day, Phil
> -Original Message-
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: 27 March 2014 18:15
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and Host
> aggregates..
> 
> On 03/27/2014 11:48 AM, Day, Phil wrote:
> > Sorry if I'm coming late to this thread, but why would you define AZs
> > to cover "orthogonal zones"?
> 
> See Vish's first message.
> 
> > AZs are a very specific form of aggregate - they provide a particular
> > isolation semantic between the hosts (i.e. physical hosts are never
> > in more than one AZ) - hence the "availability" in the name.
> 
> That's why I specified orthogonal.  If you're looking at different resources
> then it makes sense to have one host be in different AZs because the AZs are
> essentially in different namespaces.
> 
> So you could have "hosts in server room A" vs "hosts in server room B".
>   Or "hosts on network switch A" vs "hosts on network switch B".  Or "hosts
> with SSDs" vs "hosts with disks".  Then you could specify you want to boot an
> instance in server room A, on switch B, on a host with SSDs.
> 
> > AZs are built on aggregates, and yes, aggregates can overlap and
> > aggregates are used for scheduling.
> >
> > So if you want to schedule on features as well as (or instead of)
> > physical isolation, then you can already:
> >
> > - Create an aggregate that contains "hosts with fast CPUs"
> > - Create another aggregate that includes "hosts with SSDs"
> > - Write (or configure in some cases) scheduler filters that look at
> >   something in the request (such as a scheduler hint, an image property,
> >   or a flavor extra_spec) so that the scheduler can filter on those
> >   aggregates
> >
> > nova boot --availability-zone az1 --scheduler-hint want-fast-cpu
> > --scheduler-hint want-ssd  ...
> 
> Does this actually work?  The docs only describe setting the metadata on the
> flavor, not as part of the boot command.
> 
If you want to be able to pass it in as explicit hints then you need to write a 
filter to cope with that hint - I was using it as an example of the kind of 
relationship between hints and aggregate filtering.
The more realistic example for this kind of attribute is to make it part of the 
flavor and use the aggregate_instance_extra_specs filter, which does exactly 
this kind of filtering (for overlapping aggregates).
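
Conceptually, the matching that filter does boils down to something like
this simplified sketch (not the actual filter code, which also supports
comparison operators and merges metadata across aggregates):

    # Each dict holds the metadata of one aggregate the host belongs to;
    # overlapping aggregates are fine here.
    def host_passes(host_aggregate_metadata, flavor_extra_specs):
        for key, wanted in flavor_extra_specs.items():
            if not any(meta.get(key) == wanted
                       for meta in host_aggregate_metadata):
                return False
        return True

    # A host in both the "fast CPU" and "SSD" aggregates satisfies a
    # flavor that asks for both.
    host = [{'fast_cpu': 'true'}, {'ssd': 'true'}]
    print(host_passes(host, {'fast_cpu': 'true', 'ssd': 'true'}))  # True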


> > nova boot --availability-zone az1 --flavor 1000 (where flavor 1000 has
> > extra spec that says it needs fast cpu and ssd)
> >
> > But there is no need that I can see to make AZs overlapping just to do
> > the same thing - that would break what everyone (including folks used
> > to working with AWS) expects from an AZ
> 
> 
> As an admin user you can create arbitrary host aggregates, assign metadata,
> and have flavors with extra specs to look for that metadata.
> 
> But as far as I know there is no way to match host aggregate information on a
> per-instance basis.

Matching aggregate information on a per-instance basis is what the scheduler 
filters do.

Well, yes - it is down to the admin to decide what groups are going to be 
available, how to map them into aggregates, how to map that into flavors (which 
are often the link to a charging mechanism) - but once they've done that then 
the user can work within those bounds by choosing the correct flavor, image, 
etc.
> 
> Also, unless things have changed since I looked at it last as a regular user 
> you
> can't create new flavors so the only way to associate an instance with a host
> aggregate is via an availability zone.

Well, it really depends on the roles you want to assign to your users and how 
you set up your policy file, but in general you don't want users defining 
flavors; the cloud admin defines the flavors based on what makes sense for 
their environment.

> 
> Chris
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Chris Friesen

On 03/27/2014 12:28 PM, Day, Phil wrote:


Personally I'm a bit worried about users having too fine a
granularity over where they place a server - AZs are generally few and
big so you can afford to allow this and not have capacity issues, but
if I had to expose 40 different rack-based zones it would be pretty
hard to stop everyone piling into the first or last - when really
what they want to say is "not the same as" or "the same as" -
which makes me wonder if this is really the right way to go.  It
feels more like what we really want is some form of affinity and
anti-affinity rules rather than an explicit choice of a particular
group.


I suspect in many cases server groups with affinity rules would go a 
long way, but currently the server group policies only select based on 
compute node.


It'd be nice to be able to do a heat template where you could specify 
things like "put these three servers on separate hosts from each other, 
and these other two servers on separate hosts from each other (but maybe 
on the same hosts as the first set of servers), and they all have to be 
on the same network segment because they talk to each other a lot and I 
want to minimize latency, and they all need access to the same shared 
instance storage for live migration".


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Joe Gordon
On Wed, Mar 26, 2014 at 11:53 PM, Michael Chapman  wrote:

> On Thu, Mar 27, 2014 at 4:10 PM, Robert Collins  > wrote:
>
>> On 27 March 2014 17:30, Tom Fifield  wrote:
>>
>> >> Does anyone disagree?
>> >
>> > /me raises hand
>> >
>> > When I was an operator, I regularly referred to the sample config files
>> > in the git repository.
>> >
>> > If there weren't generated versions of the sample config in the repo, I
>> > would probably grep the code (not an ideal user experience!). Running
>> > some random script that I don't know about the existence and might
>> > depend on having something else installed of is probably not something
>> > that would happen.
>>
>> So, I think its important you have sample configs to refer to.
>>
>> Do they need to be in the git repo?
>>
>> Note that because libraries now export config options (which is the
>> root of this problem!) you cannot ever know from the source all the
>> options for a service - you *must* know the library versions you are
>> running, to interrogate them for their options.
>>
>> We can - and should - have a discussion about the appropriateness of
>> the layering leak we have today, but in the meantime this is breaking
>> multiple projects every time any shared library that uses oslo.config
>> changes any config option... so we need to solve the workflow aspect.
>>
>> How about we make a copy of the latest config for each project for
>> each series - e.g. trunk of everything, Icehouse of servers with trunk
>> of everything else, etc and make that easily acccessible?
>>
>> -Rob
>>
>> --
>> Robert Collins 
>> Distinguished Technologist
>> HP Converged Cloud
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> There are already some samples in the 'configuration reference' section of
> docs.openstack, eg:
>
>
> http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-identity.html#sample-configuration-files
>
> However the compute and image sections opt for a formatted table, and the
> network section is more like an installation guide.
>
> If the samples are to be removed from github, perhaps our configuration
> reference section could be first and foremost the set of sample
> configuration files for each project + plugin, rather than them being
> merely a part of the reference doc as it currently exists.
>
> I fairly consistently refer to the github copies of the samples. They also
> allow me to refer to specific lines of the config when explaining concepts
> over text. I am not against their removal, but if we were to remove them
> I'd disappointed if I had to search very far on docs.openstack.org to get
> to them, and I would want the raw files instead of something formatted.
>

++. This sounds like a good approach. If we make the config samples very
easy to find on docs.openstack.org and automatically regenerate them so they
stay up to date, then the workflow for everyone who used to use the in-tree
samples only changes slightly: just look at a new website for the same
thing.


>
>
>  - Michael
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Joe Gordon
On Thu, Mar 27, 2014 at 2:21 AM, Dirk Müller  wrote:

> Hi,
>
> >> When I was an operator, I regularly referred to the sample config files
> >> in the git repository.
>
> The sample config files in the git repository are tremendously useful for
> any operator and OpenStack packager. Having them only generatable with a
> tox line is very cumbersome.
>
>
Why is it cumbersome? We do the same thing.


> As a minimum those config files should be part of the sdist tarball
> (aka generated during sdist time).
>
> > Do they need to be in the git repo?
>
> IMHO yes, they should go alongside the code change.
>

Why? We don't include any other automatically generated files (or at least
try not to).

What about if you could just go to docs.openstack.org and find them with a
single click?


>
> > Note that because libraries now export config options (which is the
> > root of this problem!) you cannot ever know from the source all the
> > options for a service - you *must* know the library versions you are
> > running, to interrogate them for their options.
>
> The problem is that we hammer all the libraries' configuration
> options into the main config file. If we had "include" support and
> just included the libraries' config options as a separate
> (possibly autogenerated) file, this problem would not
> occur, and it would avoid the gate breakages.
>
>
> Thanks,
> Dirk
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-27 Thread Joe Gordon
On Thu, Mar 27, 2014 at 6:53 AM, Doug Hellmann
wrote:

>
>
>
> On Wed, Mar 26, 2014 at 2:54 PM, Joe Gordon  wrote:
>
>>
>>
>>
>> On Wed, Mar 26, 2014 at 9:51 AM, Doug Hellmann <
>> doug.hellm...@dreamhost.com> wrote:
>>
>>>
>>>
>>>
>>> On Tue, Mar 25, 2014 at 5:34 PM, Brant Knudson  wrote:
>>>



 On Mon, Mar 24, 2014 at 5:49 AM, Sean Dague  wrote:

> ...
>
> Part of the challenge is turning off DEBUG is currently embedded in
> code
> in oslo log, which makes it kind of awkward to set sane log levels for
> included libraries because it requires an oslo round trip with code to
> all the projects to do it.
>
>
 Here's how it's done in Keystone:
 https://review.openstack.org/#/c/62068/10/keystone/config.py

 It's definitely awkward.

>>>
>>> https://bugs.launchpad.net/oslo/+bug/1297950
>>>
>>
>> Currently when you enable debug logs in openstack, the root logger is set
>> to debug and then we have to go and blacklist specific modules that we
>> don't want to run on debug. What about instead adding an option to just set
>> the openstack component at hand to debug log level and not the root logger?
>> That way we won't have to keep maintaining a blacklist of modules that
>> generate too many debug logs.
>>
>
> Doing that makes sense, too. Do we need a new option, or is there some
> combination of existing options that we could interpret to mean "debug this
> openstack app but not all of the libraries it is using"?
>
>
I'm not sure if we need a new option or re-use the existing ones. But the
current config options are somewhat confusing. We have separate debug and
verbose options.
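
The idea quoted above maps onto stock Python logging quite directly: raise
only the service's own logger to DEBUG and leave the root logger (and hence
third-party libraries) at the default level. A minimal sketch of the
behaviour, outside of oslo:

    import logging

    logging.basicConfig(level=logging.INFO)            # root stays at INFO
    logging.getLogger('nova').setLevel(logging.DEBUG)  # component only

    logging.getLogger('nova.scheduler').debug('visible')  # emitted
    logging.getLogger('boto').debug('suppressed')         # filtered out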



> Doug
>
>
>
>>
>>
>>>
>>>
>>> Doug
>>>
>>>
>>>


 - Brant


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Sangeeta Singh


On 3/27/14, 11:03 AM, "Day, Phil"  wrote:

>> 
>> The need arises when you need a way to use both the zones for
>> scheduling when no specific zone is specified. The only way to do that
>> is either to have an AZ which is a superset of the two AZs, or for
>> default_schedule_zone to take a list of zones instead of just one.
>
>If you don't configure a default_schedule_zone, and don't specify an
>availability_zone to the request  - then I thought that would make the AZ
>filter in effect ignore AZs for that request.  Isn't that what you need?


No, what I want is a default_schedule_zone that uses hosts from two other
AZs; but in my deployment I might have other AZs defined as well, which I
want to be filtered out when the boot command does not specify an AZ.

Thanks,
Sangeeta

>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-27 Thread Joe Gordon
On Wed, Mar 26, 2014 at 9:02 PM, Maru Newby  wrote:

>
> On Mar 26, 2014, at 12:59 PM, Joe Gordon  wrote:
>
> >
> >
> >
> > On Tue, Mar 25, 2014 at 1:49 AM, Maru Newby  wrote:
> >
> > On Mar 21, 2014, at 9:01 AM, David Kranz  wrote:
> >
> > > On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:
> > >>
> > >>> -Original Message-
> > >>> From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
> > >>> Sent: Thursday, March 20, 2014 12:13 PM
> > >>>
> > >>> 'project specific functional testing' in the Marconi context is
> > >>> treating
> > >>> Marconi as a complete system, making Marconi API calls & verifying
> the
> > >>> response - just like an end user would, but without keystone. If one
> of
> > >>> these tests fail, it is because there is a bug in the Marconi code ,
> > >>> and
> > >>> not because its interaction with Keystone caused it to fail.
> > >>>
> > >>> "That being said there are certain cases where having a project
> > >>> specific
> > >>> functional test makes sense. For example swift has a functional test
> > >>> job
> > >>> that
> > >>> starts swift in devstack. But, those things are normally handled on a
> > >>> per
> > >>> case
> > >>> basis. In general if the project is meant to be part of the larger
> > >>> OpenStack
> > >>> ecosystem then Tempest is the place to put functional testing. That
> way
> > >>> you know
> > >>> it works with all of the other components. The thing is in openstack
> > >>> what
> > >>> seems
> > >>> like a project isolated functional test almost always involves
> another
> > >>> project
> > >>> in real use cases. (for example keystone auth with api requests)
> > >>>
> > >>> "
> >
> >
> >
> > >>>
> > >>> One of the concerns we heard in the review was 'having the functional
> > >>> tests elsewhere (I.e within the project itself) does not count and
> they
> > >>> have to be in Tempest'.
> > >>> This has made us as a team wonder if we should migrate all our
> > >>> functional
> > >>> tests to Tempest.
> > >>> But from Matt's response, I think it is reasonable to continue in our
> > >>> current path & have the functional tests in Marconi coexist  along
> with
> > >>> the tests in Tempest.
> > >>>
> > >> I think that what is being asked, really is that the functional tests
> could be a single set of tests that would become a part of the tempest
> repository and that these tests would have an ENV variable as part of the
> configuration that would allow either "no Keystone" or "Keystone" or some
> such, if that is the only configuration issue that separates running the
> tests isolated vs. integrated.  The functional tests need to be as much as
> possible a single set of tests to reduce duplication and remove the
> likelihood of two sets getting out of sync with each other/development.  If
> they only run in the integrated environment, that's ok, but if you want to
> run them isolated to make debugging easier, then it should be a
> configuration option and a separate test job.
> > >>
> > >> So, if my assumptions are correct, QA only requires functional tests
> for integrated runs, but if the project QAs/Devs want to run isolated for
> dev and devtest purposes, more power to them.  Just keep it a single set of
> functional tests and put them in the Tempest repository so that if a
> failure happens, anyone can find the test and do the debug work without
> digging into a separate project repository.
> > >>
> > >> Hopefully, the tests as designed could easily take a new
> configuration directive and a short bit of work with OS QA will get the
> integrated FTs working as well as the isolated ones.
> > >>
> > >> --Rocky
> > > This issue has been much debated. There are some active members of our
> community who believe that all the functional tests should live outside of
> tempest in the projects, albeit with the same idea that such tests could be
> run either as part of today's "real" tempest runs or mocked in various ways
> to allow component isolation or better performance. Maru Newby posted a
> patch with an example of one way to do this but I think it expired and I
> don't have a pointer.
> >
> > I think the best place for functional api tests to be maintained is in
> the projects themselves.  The domain expertise required to write api tests
> is likely to be greater among project resources, and they should be tasked
> with writing api tests pre-merge.  The current 'merge-first, test-later'
> procedure of maintaining api tests in the Tempest repo makes that
> impossible.  Worse, the cost of developing functional api tests is higher
> in the integration environment that is the Tempest default.
> >
> >
> > If an API is made and documented properly what domain expertise would be
> needed to use it? The opposite is true for tempest and the tests
> themselves. The tempest team focuses on just tests so they know how to
> write good tests and are able to leverage common underlying framework code.
>
> Given that documentation is typically finalized only late in 

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-27 Thread Adrian Otto
Keith is my co-worker. I deeply respect his opinion, and agree with his 
perspective with respect to devops users. That's exactly the persona that 
OpenStack appeals to today. However, devops is not the only perspective to 
consider.

OpenStack has not yet crossed the barrier into attracting Application 
Developers en-masse. The Application Developer persona has a different 
perspective, and would prefer not to use a DSL at all when they are building 
applications based on common design patterns. One such example is the 
three-tier web application. Solum intends to address these use patterns using 
sensible selectable defaults such that most application developers do not need 
to use a DSL at all to run apps on OpenStack. They instead use parameters to 
select well understood and well documented patterns. We will generate HOT files 
as outputs and feed them into Heat for orchestration.

For the smaller minority of application developers who do want a DSL to 
describe their app topology, we can offer them a choice:

1) Use the Heat DSL, and describe it in terms of infra resources.
2) Use an application-centric DSL that does not directly pertain to the 
resources in the Heat DSL.

In cases where #2 is used, #1 will probably also be used as a complementary 
input. There are reasons for having other DSL options that allow modeling of 
things that are not infrastructure resources. We would be fools to think that 
HOT is the only home for all that. HOT is about orchestration, not universal 
entity modeling and management. Devops users will naturally select HOT, not any 
alternate DSL. With that said, Solum aims to use HOT to the fullest extent. We 
may also offer to add features to it. Some things still do not fit there.

Rather than debating the technical merits of a new DSL, and how it could be 
accomplished by tweaking existing projects, it would be wise for us to ask (and 
listen) carefully about WHY the alternate approach is desired. Some of it can 
certainly be addressed by HOT, and should. Some of it has no business in the 
orchestration system at all. Let's not quickly dismiss alternate approaches in 
cases where they do not overlap, or where the style and approach are 
essentially the same.

Example: We have numerous programming languages today. Each one exists for a 
reason. Understanding those reasons and selecting the right tool for the job is 
a key to success as a computer scientist.

I look forward to further discussions with the Heat team, and other StackForge 
projects to work to find more common ground and identify those areas where we 
should splinter off to innovate. Based on my in-person discussions with Georgy 
from Mirantis this week, I am convinced that they do intend to use Heat to the 
extent practical in Murano. I am continuing to keep an open mind about the 
desire to have other DSL systems that work on different planes and for 
different reasons than Heat.

Adrian

> On Mar 26, 2014, at 1:27 PM, "Keith Bray"  wrote:
> 
>> On 3/25/14 11:55 AM, "Ruslan Kamaldinov"  wrote:
>> 
>> * Murano DSL will focus on:
>> a. UI rendering
> 
> One of the primary reasons I am opposed to using a different DSL/project
> to accomplish this is that the person authoring the HOT template is
> usually the system architect, and this is the same person who has the
> technical knowledge to know what technologies you can swap in/out and
> still have that system/component work, so they are also the person who
> can/should define the "rules" of what component building blocks can and
> can't work together.  There has been an overwhelmingly strong preference
> from the system architects/DevOps/ApplicationExperts I [1] have talked to
> for the ability to have control over those rules directly within the HOT
> file or immediately along-side the HOT file but feed the whole set of
> files to a single API endpoint.  I'm not advocating that this extra stuff
> be part of Heat Engine (I understand the desire to keep the orchestration
> engine clean)... But from a barrier to adoption point-of-view, the extra
> effort for the HOT author to learn another DSL and use yet another system
> (or even have to write multiple files) should not be underestimated.
> These people are not OpenStack developers, they are DevOps folks and
> Application Experts.  This is why the Htr[2] project was proposed and
> threads were started to add extra data to HOT template that Heat engine
> could essentially ignore, but would make defining UI rendering and
> component connectivity easy for the HOT author.
> 
> I'm all for contributions to OpenStack, so I encourage the Murano team to
> continue doing its thing if they find it adds value to themselves or
> others. However, I'd like to see the Orchestration program support the
> surrounding things the users of the Heat engine want/need from their cloud
> system instead of having those needs met by separate projects seeking
> incubation. There are technical ways to keep the core engine "clean" while
>

[openstack-dev] [Marconi] Backend options [was Re: Why is marconi a queue implementation vs a provisioning API?]

2014-03-27 Thread Chad Lung
> I think Marconi has room for 2-3 more drivers that are supported by the
> team for production deployments. Two of the most promising candidates are
> Redis and AMQP (specific broker TBD). Cassandra has also been proposed in
> the past, but I don't think it's a viable option due to the way deletes
> are implemented[1].
>
> If anyone has some other options that you think could be a good fit,
> please make some suggestions and help us determine the future of
> Marconi.
>
> ---
> Kurt G. | @kgriffs
>


Hi Kurt,

Has the Marconi team looked at Riak?

Thanks,

Chad Lung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Tzu-Mainn Chen
> Hi OpenStackers,
> 
> The user interface which manages the OpenStack infrastructure is
> currently named Tuskar-UI for historical reasons. Tuskar itself
> is a small service which provides the logic for generating and managing
> Heat templates and helps the user model and manage a deployment. The
> user interface, which is the subject of this call, is based on the TripleO
> approach and resembles the OpenStack Dashboard (Horizon) in the way
> it consumes other services. The UI consumes not just the Tuskar API, but
> also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc. in order
> to design, deploy, manage and monitor your OpenStack deployments.
> Because of this I find the name Tuskar-UI improper (it would be closer to
> say TripleO-UI) and I would like the community's help to find a better
> name for it. After brainstorming, we can start voting on the final
> project's name.
> 
> https://etherpad.openstack.org/p/openstack-management-ui-names
> 
> Thanks
> -- Jarda (jcoufal)

Thanks for starting this thread!  I wonder if it might make sense to have
some of this discussion during the weekly horizon meeting, as that might
help clarify whether there are existing or desired policies around UI
naming.

Mainn

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Sean Dague
On 03/27/2014 02:54 PM, Joe Gordon wrote:
> 
> 
> 
> On Thu, Mar 27, 2014 at 2:21 AM, Dirk Müller  > wrote:
> 
> Hi,
> 
> >> When I was an operator, I regularly referred to the sample config
> files
> >> in the git repository.
> 
> The sample config files in the git repository are tremendously useful for
> any operator and OpenStack packager. Having them only generatable with a
> tox line is very cumbersome.
> 
> 
> Why is it cumbersome? We do the same thing.

Because we've already got a working tox environment, which includes
knowing in advance that you can't just pip install tox (as 1.7.x is
broken), and that you need to have postgresql and mysql and libffi dev
packages installed, and a C compiler.

Starting with a pristine Linux install, there are a lot of manual steps you
have to go through to get a working config out of tox -e genconfig.

So I think it's a fair concern that we did just move a burden back onto
users because we dug a hole by letting libraries declare arbitrary
required variables in our config files.
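
For background on that hole: a library can expose its options through a
list_opts() style hook that the sample-config generator interrogates at
runtime, which is exactly why the generated sample depends on the installed
library versions. A minimal sketch of the library side (the option names and
'mylib' group are illustrative):

    from oslo.config import cfg

    opts = [
        cfg.StrOpt('endpoint_url',
                   help='API endpoint the library talks to.'),
        cfg.IntOpt('timeout', default=30,
                   help='Request timeout in seconds.'),
    ]

    def list_opts():
        # The config generator can discover hooks like this via entry
        # points, so the sample file reflects whatever is installed.
        return [('mylib', opts)]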

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Chris Friesen

On 03/27/2014 12:49 PM, Day, Phil wrote:

-Original Message- From: Chris Friesen
[mailto:chris.frie...@windriver.com]



On 03/27/2014 11:48 AM, Day, Phil wrote:



nova boot --availability-zone az1 --scheduler-hint want-fast-cpu
--scheduler-hint want-ssd  ...


Does this actually work?  The docs only describe setting the
metadata on the flavor, not as part of the boot command.


If you want to be able to pass it in as explicit hints then you need
to write a filter to cope with that hint- I was using it as an
example of the kind of relationship between hints and aggregate
filtering The more realistic example for this kind of attribute is to
make it part of the flavor and use the aggregate_instance_extra_spec
filter - which does exactly this kind of filtering (for overlapping
aggregates)


I'll admit that I don't have a lot of experience as an end-user of 
OpenStack, so maybe that colours my judgement.


To me it seems quite limiting that if you want the scheduler to match 
against multiple host aggregate extra specs then you need to create a 
new flavor.


If a regular user could do something like what you specify in your 
example, I think that would make a lot of sense.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-27 Thread Nathan Kinder
On 03/26/2014 09:51 AM, Clint Byrum wrote:
> Excerpts from Chris Jones's message of 2014-03-26 06:58:59 -0700:
>> Hi
>>
>> We don't have a strong attachment to stunnel though, I quickly dropped it in 
>> front of our CI/CD undercloud and Rob wrote the element so we could repeat 
>> the deployment.
>>
>> In the fullness of time I would expect there to exist elements for several 
>> SSL terminators, but we shouldn't necessarily stick with stunnel because it 
>> happened to be the one I was most familiar with :)
>>
>> I would think that an httpd would be a good option to go with as the 
>> default, because I tend to think that we'll need an httpd running/managing 
>> the python code by default.
>>
> 
> I actually think that it is important to separate SSL termination from
> the app server. In addition to reasons of scale (SSL termination scales
> quite a bit differently than app serving), there is a security implication
> in having the private SSL keys on the same box that runs the app.

There is also a security implication in having network traffic from the
SSL terminator to the application in the clear.  If the app is
compromised, one could just read all incoming traffic anyway since it is
not encrypted.

> 
> So if we use apache for running the python app servers, that is not a
> reason to also use apache for SSL. Quite the opposite I think.
> 
> As far as "which is best".. there are benefits and drawbacks for all of
> them, and it is modular enough that we can just stick with stunnel and
> users who find problems with it can switch it out without too much hassle.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] new core members

2014-03-27 Thread Luse, Paul E
Congrats guys!!

-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Thursday, March 27, 2014 1:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Swift] new core members

I'm pleased to announce that Alistair Coles and Christian Schwede have both 
joined the Swift core team. They have both been very active in the Swift 
community, contributing both code and reviews. Both Alistair and Christian work 
with large-scale production Swift clusters, and I'm happy to have them on the 
core team.

Alistair and Christian, thanks for your work in the community and for taking on 
more responsibility. I'm glad we'll be able to continue to work together on 
Swift.

--John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] new core members

2014-03-27 Thread John Dickinson
I'm pleased to announce that Alistair Coles and Christian Schwede have both 
joined the Swift core team. They have both been very active in the Swift 
community, contributing both code and reviews. Both Alistair and Christian work 
with large-scale production Swift clusters, and I'm happy to have them on the 
core team.

Alistair and Christian, thanks for your work in the community and for taking on 
more responsibility. I'm glad we'll be able to continue to work together on 
Swift.

--John






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stevedore 0.15 released

2014-03-27 Thread Doug Hellmann
stevedore 0.15 is available on pypi now and should sync to our mirror
shortly.

What's New?

 * Only log errors from loading plugins if no error handler callback
   is provided.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Joe Gordon
On Thu, Mar 27, 2014 at 1:11 PM, Sean Dague  wrote:

> On 03/27/2014 02:54 PM, Joe Gordon wrote:
> >
> >
> >
> > On Thu, Mar 27, 2014 at 2:21 AM, Dirk Müller  > > wrote:
> >
> > Hi,
> >
> > >> When I was an operator, I regularly referred to the sample config
> > files
> > >> in the git repository.
> >
> > The sample config files in the git repository are tremendously useful
> > for any operator and OpenStack packager. Having them only generatable
> > with a tox line is very cumbersome.
> >
> >
> > Why is it cumbersome? We do the same thing.
>
> Because we've already got a working tox environment, which includes
> knowing in advance that you can't just pip install tox (as 1.7.x is
> broken), and that you need to have postgresql and mysql and libffi dev
> packages installed, and a C compiler.
>
> Starting with a pristine Linux install, there are a lot of manual steps
> you have to go through to get a working config out of tox -e genconfig.
>
> So I think it's a fair concern that we did just move a burden back onto
> users because we dug a hole by letting libraries declare arbitrary
> required variables in our config files.
>

Good answer.


>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Duncan Thomas
On Mar 26, 2014 6:46 PM, "Jay Pipes"  wrote:

> Personally, I feel it is a mistake to continue to use the Amazon concept
> of an availability zone in OpenStack, as it brings with it the
> connotation from AWS EC2 that each zone is an independent failure
> domain. This characteristic of EC2 availability zones is not enforced in
> OpenStack Nova or Cinder, and therefore creates a false expectation for
> Nova users.

I think this is backwards training, personally. I think AZs as separate
failure domains were done like that for a reason by Amazon, and make good
sense. What we've done is overload that with cells, aggregates, etc., which
should have a better interface and are a different concept. Redefining
well-understood terms because they don't suit your current implementation is
a slippery slope, and overloading terms that already have a meaning in the
industry is just annoying.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Clint Byrum
Excerpts from Sean Dague's message of 2014-03-27 13:11:57 -0700:
> On 03/27/2014 02:54 PM, Joe Gordon wrote:
> > 
> > 
> > 
> > On Thu, Mar 27, 2014 at 2:21 AM, Dirk Müller  > > wrote:
> > 
> > Hi,
> > 
> > >> When I was an operator, I regularly referred to the sample config
> > files
> > >> in the git repository.
> > 
> > The sample config files in the git repository are tremendously useful for
> > any operator and OpenStack packager. Having them only generatable with a
> > tox line is very cumbersome.
> > 
> > 
> > Why is it cumbersome? We do the same thing.
> 
> Because we've already got a working tox environment, which includes
> knowing in advance that you can't just pip install tox (as 1.7.x is
> broken), and that you need to have postgresql and mysql and libffi dev
> packages installed, and a C compiler.
> 
> Starting with a pristine Linux install, there are a lot of manual steps
> you have to go through to get a working config out of tox -e genconfig.
> 
> So I think it's a fair concern that we did just move a burden back onto
> users because we dug a hole by letting libraries declare arbitrary
> required variables in our config files.
> 

This is pretty standard in the open source world. Git trees do not have
all of the things that the user needs. Git trees have all the things
that the project provides.

If this were autotools we'd have people run 'autoreconf' and/or 'make
doc'. That would likely involve installing autotools, and might also
require some libraries to be present to build tools that are used to
generate things.

I would have a hard time supporting users who don't read the README
before trying to make use of a git tree to use/install/configure a piece
of software. As long as those steps are spelled out, and the releases
contain this generated file, I'm +1 on removing it from git.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] proxying SSL traffic for API requests

2014-03-27 Thread Clint Byrum
Excerpts from Nathan Kinder's message of 2014-03-27 13:25:02 -0700:
> On 03/26/2014 09:51 AM, Clint Byrum wrote:
> > Excerpts from Chris Jones's message of 2014-03-26 06:58:59 -0700:
> >> Hi
> >>
> >> We don't have a strong attachment to stunnel though, I quickly dropped it 
> >> in front of our CI/CD undercloud and Rob wrote the element so we could 
> >> repeat the deployment.
> >>
> >> In the fullness of time I would expect there to exist elements for several 
> >> SSL terminators, but we shouldn't necessarily stick with stunnel because 
> >> it happened to be the one I was most familiar with :)
> >>
> >> I would think that an httpd would be a good option to go with as the 
> >> default, because I tend to think that we'll need an httpd running/managing 
> >> the python code by default.
> >>
> > 
> > I actually think that it is important to separate SSL termination from
> > the app server. In addition to reasons of scale (SSL termination scales
> > quite a bit differently than app serving), there is a security implication
> > in having the private SSL keys on the same box that runs the app.
> 
> There is also a security implication in having network traffic from the
> SSL terminator to the application in the clear.  If the app is
> compromised, one could just read all incoming traffic anyway since it is
> not encrypted.
> 

Reading all incoming traffic is a given if the app is compromised in
the same way that one might compromise the secret keys. Terminator to
app server encryption is only to prevent evil on your internal network.
That is contained to your own network and thus can be measured and
controlled.

However, you don't want an attacker who has compromised your app to be
able to go off and setup their own version of your app using your private
key and some simple MITM techniques in a place where you cannot detect
or control it at all.

That's not to say that terminator<->app encryption is not a good idea
too. But that should be using a separate set of encryption keys to
mitigate the impact of a compromise to, again, your own network.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [bug] "nova server-group-list" doesn't show any members

2014-03-27 Thread Chris Friesen
I've filed this as a bug (https://bugs.launchpad.net/nova/+bug/1298494) 
but I thought I'd post it here as well to make sure it got visibility.


If I create a server group, then boot a server as part of the group, 
then run "nova server-group-list" it doesn't show the server as being a 
member of the group.


The problem seems to be with the filter passed in to 
instance_obj.InstanceList.get_by_filters() in 
api.openstack.compute.contrib.server_groups.ServerGroupController._format_server_group(). 



I traced it down as far as 
db.sqlalchemy.api.instance_get_all_by_filters(). Before this line the 
query output looks good:


query_prefix = regex_filter(query_prefix, models.Instance, filters)

but after that line there are no instances left in the filter results.


If I change the filter to use

'deleted': False

instead of

'deleted_at': None

then it works as expected.


The leads to a couple of questions:

1) There is a column "deleted_at" in the database table, why can't we 
filter on it?

2) How did this get submitted when it doesn't work?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [bug] "nova server-group-list" doesn't show any members

2014-03-27 Thread Chris Friesen

On 03/27/2014 03:57 PM, Chris Friesen wrote:


If I change the filter to use

'deleted': False

instead of

'deleted_at': None

then it works as expected.


The leads to a couple of questions:

1) There is a column "deleted_at" in the database table, why can't we
filter on it?


I wonder if maybe the problem is that you can't pattern match against 
"None".


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pbr 0.8.0 released

2014-03-27 Thread Doug Hellmann
version 0.8.0 of pbr is available now on pypi and should make it into our
mirror shortly.

0.8.0
-

* Use unicode_literals import instead of u'unicode' notation
* Remove pip version specifier
* Make tools/integration.sh take a branch
* Fixes blocking issue on Windows
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [bug] "nova server-group-list" doesn't show any members

2014-03-27 Thread Chris Friesen

On 03/27/2014 03:57 PM, Chris Friesen wrote:


The leads to a couple of questions:

1) There is a column "deleted_at" in the database table, why can't we
filter on it?
2) How did this get submitted when it doesn't work?


I've updated to the current codebase in devstack and I'm still seeing 
the problem.


Interestingly, unit test 
nova.tests.api.openstack.compute.contrib.test_server_groups.ServerGroupTest.test_display_members 
passes just fine, and it seems to be running the same sqlalchemy code.


Is this a case where sqlite behaves differently from mysql?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa][all] Home of rendered specs

2014-03-27 Thread Joe Gordon
Hi All,

Now that nova and qa are beginning to use specs repos [0][1], and instead of
being forced to read raw RST or rely on github [3], we want a domain
where we can publish the fully rendered sphinxdocs-based specs (rendered
with oslosphinx, of course). So how about:

  specs.openstack.org/$project

specs instead of docs, because docs.openstack.org should only contain what
is actually implemented; keeping specs on another subdomain is an attempt
to avoid confusion, as we don't expect every approved blueprint to get
implemented.


Best,
Joe


[0] http://git.openstack.org/cgit/openstack/nova-specs/
[1] http://git.openstack.org/cgit/openstack/qa-specs/
[3] https://github.com/openstack/nova-specs/blob/master/specs/template.rst
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [bug] "nova server-group-list" doesn't show any members

2014-03-27 Thread Chris Friesen

On 03/27/2014 04:47 PM, Chris Friesen wrote:


Interestingly, unit test
nova.tests.api.openstack.compute.contrib.test_server_groups.ServerGroupTest.test_display_members
passes just fine, and it seems to be running the same sqlalchemy code.

Is this a case where sqlite behaves differently from mysql?


Sorry to keep replying to myself, but this might actually hit us in other 
places.


Down in db/sqlalchemy/api.py we end up calling


query = query.filter(column_attr.op(db_regexp_op)('None'))


When using mysql, it looks like a regexp comparison of the string 'None' 
against a NULL field fails to match.


Since sqlite doesn't have its own regexp function we provide one in 
openstack/common/db/sqlalchemy/session.py.  In the buggy case we end up 
calling it as regexp('None', None), where the types are "unicode" and 
"NoneType".  However, we end up converting the second arg to text type 
before calling reg.search() on it, so it matches.
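
A standalone sketch of the difference (the function below is a simplified
stand-in for the regexp() helper described above, not the exact code):

    import re

    def regexp(expr, item):
        # Simplified stand-in for the sqlite fallback: the second argument
        # is coerced to text before matching, so a NULL column arrives
        # here as the string 'None' and the pattern matches.
        reg = re.compile(str(expr))
        return reg.search(str(item)) is not None

    print(regexp('None', None))  # True under the sqlite fallback

    # MySQL's native REGEXP instead yields NULL (no match) when the column
    # is NULL, which is why rows with deleted_at = NULL get filtered out.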


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-03-27 Thread Dmitri Zimine
Following up on http://tinyurl.com/l8gtmsw and http://tinyurl.com/n3v9lt8:  
this explains how Mistral handles long running delegate tasks. Note that a 
'passive' workflow engine can handle both normal tasks and delegates the same 
way. I'll also put that on ActionDesign wiki, after discussion.

Diagram: 
https://docs.google.com/a/stackstorm.com/drawings/d/147_EpdatpN_sOLQ0LS07SWhaC3N85c95TkKMAeQ_a4c/edit?usp=sharing

1. On start(workflow), engine creates a new workflow execution, computes the 
first batch of tasks, sends them to ActionRunner [1].
2. ActionRunner creates an action and calls action.run(input)
3. Action does the work (compute (10!)), produces the results, and returns the 
results to the executor. If it returns, status=SUCCESS. If it fails, it throws 
an exception, status=ERROR.
4. ActionRunner notifies Engine that the task is complete: task_done(execution, 
task, status, results)[2]
5. Engine computes the next task(s) ready to trigger, according to control flow 
and data flow, and sends them to ActionRunner.
6. Like step 2: ActionRunner calls the action's run(input)
7. A delegate action doesn't produce results: it calls out the 3rd party 
system, which is expected to make a callback to a workflow service with the 
results. It returns to ActionRunner without results, "immediately".  
8. ActionRunner marks status=RUNNING [?]
9. 3rd party system takes 'long time' == longer than any system component can 
be assumed to stay alive. 
10. 3rd party component calls Mistral WebHook which resolves to 
engine.task_complete(workbook, id, status, results)  

Comments: 
* One Engine handles multiple executions of multiple workflows. It exposes two 
main operations: start(workflow) and task_complete(execution, task, status, 
results), and is responsible for defining the next batch of tasks based on 
control flow and data flow. Engine is passive - it runs in a host's thread. 
Engine and 
ActionRunner communicate via task queues asynchronously, for details, see  
https://wiki.openstack.org/wiki/Mistral/POC 

* Engine doesn't distinguish between sync and async actions; it doesn't deal 
with Actions 
at all. It only reacts to task completions, handling the results, updating the 
state, and queuing next set of tasks.

* Only Action can know and define if it is a delegate or not. Some protocol is 
required to let ActionRunner know that the action is not returning the results 
immediately. A convention of returning None may be sufficient. 

* Mistral exposes engine.task_done in the REST API so 3rd party systems can 
call a web hook.
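
To make the returning-None convention concrete, a minimal sketch; the class
names and the ThirdPartySystem stub are hypothetical, not existing Mistral
code:

    class ThirdPartySystem(object):
        # Stub standing in for the external system in steps 7-10.
        def submit(self, request, callback_url):
            pass  # work happens elsewhere; a webhook fires when it is done

    class ComputeFactorialAction(object):
        # Normal action (steps 2-3): returns its results synchronously.
        def run(self, n):
            result = 1
            for i in range(2, n + 1):
                result *= i
            return result  # ActionRunner reports status=SUCCESS

    class ApprovalDelegateAction(object):
        # Delegate action (step 7): hands the work to a 3rd party system.
        def __init__(self, system):
            self.system = system

        def run(self, request):
            self.system.submit(request, callback_url='/v1/.../task_done')
            return None  # no results yet; ActionRunner marks status=RUNNING

    # Later the 3rd party system calls the webhook, which resolves to
    # engine.task_complete(workbook, id, status, results) - step 10.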

DZ.

[1]  I use ActionRunner instead of Executor (current name) to avoid confusion: 
it is Engine which is responsible for executions, and ActionRunner only runs 
actions. We should rename it in the code.

[2] I use task_done for brevity and out of pure spite; in the code it is 
conveny_task_results.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-03-27 Thread Joshua Harlow
Thanks for the description!

The steps here seem very much like what a taskflow engine does (which is good).

To connect this to how I think could work in taskflow.

  1.  Someone creates tasks/flows describing the work to-be-done (converting a 
DSL -> taskflow tasks/flows/retry[1] objects…)
  2.  On execute(workflow) engine creates a new workflow execution, computes 
the first batch of tasks, creates executor for those tasks (remote, local…) and 
executes those tasks.
  3.  Waits for responses back from the futures returned from the executor.
  4.  Receives the futures' responses (or a new response such as DELAY, for 
example), or exceptions…
  5.  Continues sending out batches of tasks that can still be executed (aka 
tasks that don't have a dependency on the output of delayed tasks).
  6.  If any tasks remain delayed after repeating #2-5 as many times as it 
can, the engine will shut itself down (see http://tinyurl.com/l3x3rrb).
  7.  On a delayed task finishing, some API/webhook/other (the mechanism imho 
shouldn't be tied to webhooks, at least not in taskflow, but should be left up 
to the user of taskflow to decide how to accomplish this) will be/must be 
responsible for resuming the engine and setting the result for the previously 
delayed task.
  8.  Repeat 2 -> 7 until all tasks have executed/failed.
  9.  Profit!

This seems like it could be accomplished, although there are race conditions in 
#6 (what if multiple delayed requests are received at the same time)? What 
locking is done to ensure that this doesn't cause conflicts? Does the POC solve 
that part (no simultaneous step #5 from below)? There was a mention of a 
watch-dog (ideally to ensure that delayed tasks can't just sit around forever); 
was that implemented?
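
For the flow in steps 2-7, a generic sketch; this is not taskflow's actual
API, and the engine class, DELAYED sentinel, and method names below are
hypothetical:

    import concurrent.futures

    DELAYED = object()  # sentinel a task returns: "result arrives later"

    class PassiveEngine(object):
        # Hypothetical sketch of steps 2-7: run whatever is runnable, park
        # delayed tasks, resume when a callback supplies their results.

        def __init__(self, tasks):
            self.pending = dict(tasks)  # name -> zero-argument callable
            self.results = {}

        def run_batch(self):
            # Steps 2-5: execute the current batch, collect what finished.
            with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
                futures = {name: pool.submit(fn)
                           for name, fn in self.pending.items()}
            for name, fut in futures.items():
                value = fut.result()
                if value is DELAYED:
                    continue  # stays pending; engine may shut down (step 6)
                self.results[name] = value
                del self.pending[name]
            return not self.pending  # True once every task has a result

        def resume(self, name, result):
            # Step 7: called by whatever callback mechanism the user chose.
            # A real engine needs locking here to serialize simultaneous
            # callbacks; this is the race noted above.
            self.results[name] = result
            self.pending.pop(name, None)

    engine = PassiveEngine({'a': lambda: 1, 'b': lambda: DELAYED})
    engine.run_batch()      # 'a' completes; 'b' stays pending
    engine.resume('b', 42)  # the delayed result arrives via callback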

[1] https://wiki.openstack.org/wiki/TaskFlow#Retries (new feature!)


[openstack-dev] Dealing with changes of plans, rejections and other annoyances

2014-03-27 Thread Stefano Maffulli
Hello folks,

we've been hearing a lot about the incredible growth of the OpenStack
community but we don't hear much about the incredible amount of pressure
that comes with such growth. One huge problem that growth brings with it is
that newcomers have not had time to assimilate the culture of the OpenStack
community. If you look at the stats, you will notice that only a small
percentage of the original developers is still committing code [1].

This is translating into more and more people (tens per week) getting
closer to OpenStack and being greeted in ways that can be too easily
misunderstood. We've already introduced friendly measures to gerrit so
that the first time someone submits a change for review they are greeted
with a nice email. We're also starting a new program to teach newcomers how to
be a better upstream contributor. I think these are only partial
measures and we all as a community need to collaborate on being better
at dealing with our growth.

I think we have a nice problem (growth is good) and we should address it
before it gets too big and unmanageable. I have filed this session for
the Design Summit and I sincerely hope we'll find time to discuss more
together in Atlanta[2]:

http://summit.openstack.org/cfp/details/171

Ideally I would like to have in the same room, discussing together, people who
have had bad/good experiences with their first commits, people who still
haven't committed the code they wanted to commit, those afraid of -2 and
those who love them, those that made grandiose plans and merged and
those whose plans were squashed... And I would like to have all PTLs
there too.

What do you think?

Best regards,
Stef


[1]
http://blog.bitergia.com/2014/03/24/measuring-demographics-opensatck-case-study/
[2]
http://www.openstack.org/blog/2014/03/openstack-upstream-training-in-atlanta/

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nominations for OpenStack PTLs (Program Technical Leads) are now open

2014-03-27 Thread Anita Kuno
Nominations for OpenStack PTLs (Program Technical Leads) are now open
and will remain open until 05:59 UTC, April 4, 2014.

To announce your candidacy please start a new openstack-dev at
lists.openstack.org mailing list thread with the program name as a tag,
for example "[Glance] PTL Candidacy", with the body as your announcement of intent.

I'm sure the electorate would appreciate a bit of information about why
you would make a great PTL and the direction you would like to take the
program, though it is not required for eligibility.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed an accepted patch to one of
the corresponding program's projects[0] during the Havana-Icehouse
timeframe (April 4, 2013 06:00 UTC to April 4, 2014 05:59 UTC).

We need to elect PTLs for 21 programs this round:
*  Compute (Nova) - one position
*  Object Storage (Swift) - one position
*  Image Service (Glance) - one position
*  Identity (Keystone) - one position
*  Dashboard (Horizon) - one position
*  Networking (Neutron) - one position
*  Block Storage (Cinder) - one position
*  Metering/Monitoring (Ceilometer) - one position
*  Orchestration (Heat) - one position
*  Database Service (Trove) - one position
*  Bare metal (Ironic) - one position
*  Common Libraries (Oslo) - one position
*  Infrastructure - one position
*  Documentation - one position
*  Quality Assurance (QA) - one position
*  Deployment (TripleO) - one position
*  Devstack (DevStack) - one position
*  Release cycle management  - one position
*  Queue service (Marconi) - one position
*  Data Processing Service (Sahara) - one position
*  Key Management Service (Barbican) - one position

Additional information about the nomination process can be found here:
https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014

As Tristan and I confirm candidates, we will reply to each email thread
with "confirmed" and add each candidate's name to the list of confirmed
candidates on the above wiki page.

Elections will begin on April 4, 2014 after 06:00 utc (as soon as we get
each election set up we will start it, it will probably be a staggered
start) and run until 1300 utc April 11, 2014.

The electorate is requested to confirm their email address in gerrit,
review.openstack.org > Settings > Contact Information >  Preferred
Email, prior to April 4, 2014 05:59 UTC so that the emailed  ballots are
mailed to the correct email address.

Happy running,
Anita Kuno (anteaya)

[0]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-27 Thread Padmanabhan Krishnan
Hi Mathieu,
Thanks for your reply.
Yes, even I think the type driver code for tunnels can remain the same since
the segment/tunnel allocation is not going to change. But some distinction has
to be given in the naming, or by adding another tunnel parameter, to signify a
network overlay.
For tunnels type, br-tun is created. For regular VLAN, br-ex/br-eth has the
uplink also as its member port. For this, I was thinking it's easier if we
don't even create br-tun or VXLAN/GRE end-points, since the compute nodes
(the data network in Openstack) are connected through the external fabric. We
will just have the br-eth/br-ex and its port connecting to the fabric, just as
if the type were VLAN. If we had to do this, the changes have to be in the
neutron agent code.
Is this the right way to go, or any suggestions?

Thanks,
Paddu
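
To make the agent-side change described above concrete, a rough sketch;
every name in it (VdpClient, 'vdp_overlay') is hypothetical rather than an
existing neutron or lldpad API:

    class VdpClient(object):
        # Stub for a hypothetical helper that would talk to the local
        # lldpad daemon over 802.1Qbg VDP.
        def associate(self, segmentation_id):
            # Real code would exchange VDP TLVs with the adjacent switch
            # and return the VLAN the fabric assigned for this segment.
            return 100

    vdp_client = VdpClient()

    def local_vlan_for(network_type, segmentation_id):
        # Sketch of the special case in provision_local_vlan(): for the
        # overlay type, the local VLAN tag comes from the switch via VDP
        # rather than from the agent's own VLAN pool, and no br-tun /
        # VXLAN endpoint is needed.
        if network_type == 'vdp_overlay':  # hypothetical type name
            return vdp_client.associate(segmentation_id)
        raise NotImplementedError('existing VLAN/VXLAN/GRE handling')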









On Wednesday, March 26, 2014 1:53 AM, Mathieu Rohon  
wrote:
 
Hi,

thanks for this very interesting use case!
May be you can still use VXLAN or GRE for tenant networks, to bypass
the 4k limit of vlans. then you would have to send packets to the vlan
tagged interface, with the tag assigned by the VDP protocol, and this
traffic would be encapsulated inside the segment to be carried inside
the network fabric. Of course you will have to take care about MTU.
The only thing you have to consider is to be sure that the default
route between VXLan endpoints go through your vlan tagged interface.



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan  wrote:
> Hello,
> I have a topology where my Openstack compute nodes are connected to the
> external switches. The fabric comprising of the switches support more than
> 4K segments. So, i should be able to create more than 4K networks in
> Openstack. But, the VLAN to be used for communication with the switches is
> assigned by the switches using 802.1QBG (VDP) protocol. This can be thought
> of as a network overlay. The VM's sends .1q frames to the switches and the
> switches associate it to the segment (VNI in case of VXLAN).
> My question is:
> 1. I cannot use a type driver of VLAN because of the 4K limitation. I cannot
> use a type driver of VXLAN or GRE because that may mean host based overlay.
> Is there an integrated type driver i can use like an "external network" for
> achieving the above?
> 2. The Openstack module running in the compute should communicate with VDP
> module (lldpad) running there.
> In the computes, i see that ovs_neutron_agent.py is the one programming the
> flows. Here, for the new type driver, should i add a special case to
> provision_local_vlan() for communicating with lldpad for retrieving the
> provider VLAN? If there was a type driver component running in each
> computes, i would have added another one for my purpose. Since, the ML2
> architecture has its mechanism/type driver modules in the controller only, i
> can only make changes here.
>
> Please let me know if there's already an implementation for my above
> requirements. If not, should i create a blue-print?
>
> Thanks,
> Paddu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dealing with changes of plans, rejections and other annoyances

2014-03-27 Thread Anita Kuno
On 03/27/2014 08:12 PM, Stefano Maffulli wrote:
> Hello folks,
> 
> we've been hearing a lot about the incredible growth of the OpenStack
> community but we don't hear much about the incredible amount of pressure
> that comes with such growth. One huge problem that growth brings with it is
> that newcomers have not had time to assimilate the culture of the OpenStack
> community. If you look at the stats, you will notice that only a small
> percentage of the original developers is still committing code [1].
> 
> This is translating into more and more people (tens per week) getting
> closer to OpenStack and being greeted in ways that can be too easily
> misunderstood. We've already introduced friendly measures to gerrit so
> that the first time someone submits a change for review they are greeted
> with a nice email. We're also starting a new program to teach newcomers how to
> be a better upstream contributor. I think these are only partial
> measures and we all as a community need to collaborate on being better
> at dealing with our growth.
> 
> I think we have a nice problem (growth is good) and we should address it
> before it gets too big and unmanageable. I have filed this session for
> the Design Summit and I sincerely hope we'll find time to discuss more
> together in Atlanta[2]:
> 
> http://summit.openstack.org/cfp/details/171
> 
> Ideally I would like to have in the same room, discussing together, people who
> have had bad/good experiences with their first commits, people who still
> haven't committed the code they wanted to commit, those afraid of -2 and
> those who love them, those that made grandiose plans and merged and
> those whose plans were squashed... And I would like to have all PTLs
> there too.
> 
> What do you think?
> 
> Best regards,
> Stef
> 
> 
> [1]
> http://blog.bitergia.com/2014/03/24/measuring-demographics-opensatck-case-study/
> [2]
> http://www.openstack.org/blog/2014/03/openstack-upstream-training-in-atlanta/
> 
Hi Stef:

I'm not sure how broadly to interpret your "What do you think?" so I will
offer my thoughts, and if that isn't what you meant please correct me.

I looked at the info wiki page for the training. I think one of the
things that creates a gap between those in OpenStack and some of those
wanting to be in OpenStack is the background with which they are
approaching the project.

By background I specifically mean an understanding of opensource and
what that means as an internal metric for behaviour.

I spent time in Neutron and I burnt out. I tried as best I could to work
towards a cohesive workflow based on my best understanding of opensource
activity. The level of comprehension of opensource and its processes
required exceeded my ability to offer support for others' learning as well as
to accomplish anything personally meaningful in my work.

When those with the internal understanding outnumber those without by a
significant enough ratio, the information comes in myriad unspoken ways.
When I was learning other tasks, camera assistant on films comes to
mind, my ratio was 4:1 - I had 4 people with more knowledge than me
whose job it was directly or indirectly to ensure I learned both the
technical and cultural knowledge necessary to be an asset on set when I
was in training. Part of what I learned was how to train others.

When those with the internal understanding have less than an optimal
ratio to those not having the understanding (I don't know what that
ratio is) then they do the best they can but the amount of newcomers
become overwhelming. The frustrated newcomers then move off in their own
direction when they don't like the speed of development. This can cause
difficulties.

Back to my point, when newcomers come into an opensource project with no
understanding of what opensource means or how to communicate in an
opensource project, that creates additional learning demand that might
not have been in evidence previously.

I wonder if in addition to large numbers of new contributors, there is
an expectation from new contributors that was not in evidence
previously. I also wonder if the new contributors are coming to
openstack with a different understanding of what it means to contribute
to openstack, compared to new contributors a year ago.

I am glad that the newcomers training includes "how to review a patch".
I wonder if there should be any part that includes how to
help/support/teach others in the training. If we teach supporting the
growth of others as an expectation, then the developers can scale
better, at least that has been my experience.

Those are my thoughts, thanks for broaching the topic, Stef,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Ironic][CI] current status

2014-03-27 Thread Robert Collins
We've recently added jobs for Ironic seed and undercloud support, and
landed TripleO support for Ironic.

Cores - please treat the *seed* ironic job as voting the same as all
our other jobs.

However, the ironic undercloud job currently fails, because of
https://bugs.launchpad.net/openstack-ci/+bug/1298731 - until that is
fixed and we've redeployed the testenvs Ironic can't manage the
emulated baremetal machines in the test cluster - so please treat the
*undercloud* ironic job as non-voting - as soon as this is fixed
someone will send an update here letting everyone know.

There is not currently an Ironic based overcloud job; we may add one in the future.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-27 Thread Christopher Yeoh
On Wed, 26 Mar 2014 21:02:58 -0700
Maru Newby  wrote:
> > 
> > If an API is made and documented properly what domain expertise
> > would be needed to use it? The opposite is true for tempest and the
> > tests themselves. The tempest team focuses on just tests so they
> > know how to write good tests and are able to leverage common
> > underlying framework code.
> 
> Given that documentation is typically finalized only late in the
> cycle, are you suggesting that we forego api testing until possibly
> well after the code has been written?  Plus, it is a rare api that
> doesn't have to evolve in response to real-world experience with
> early implementations.  The sooner we can write functional api tests,
> the sooner we can identify shortcomings that need to be addressed -
> and the less costly they will be to fix.

So although "proper" documentation may only be finalized late in the
cycle, there really should be a specification of the API and how
it behaves written before the implementation is finished. 

At least in Nova land I think the lack of this has been a major cause
of features having trouble merging and also us having flaws in both our
implementation (semantic behaviour which was accidental rather than
designed) and testing (incomplete coverage).

Also we have API unit testing in Nova, but the tempest API testing still
ends up picking up more issues (perhaps because it's generally not the
person writing the code who ends up writing the tests). I think it also
increases the chance that a backwards incompatible
API change will get picked up. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] sample config files should be ignored in git...

2014-03-27 Thread Tom Fifield

On 28/03/14 04:58, Joe Gordon wrote:




On Thu, Mar 27, 2014 at 1:11 PM, Sean Dague  wrote:

On 03/27/2014 02:54 PM, Joe Gordon wrote:
 >
 >
 >
 > On Thu, Mar 27, 2014 at 2:21 AM, Dirk Müller  wrote:
 >
 > Hi,
 >
 > >> When I was an operator, I regularly referred to the sample
config
 > files
 > >> in the git repository.
 >
 > The sample config files in git repository are tremendeously
useful for
 > any operator and OpenStack Packager. Having them generateable
with a
 > tox line is very cumbersome.
 >
 >
 > Why is it cumbersome? We do the same thing.

Because we've already got a working tox environment. Which includes
knowing, in advance that you can't just pip install tox (as 1.7.x is
broken), and that you need to have postgresql and mysql and libffi dev
packages installed, and a C compiler.

Starting with a pristine Linux, there are a lot of manual steps you have to go
through to get a working config out of tox -e genconfig.

So I think it's a fair concern that we did just move a burden back onto
users because we dug a hole by letting libraries declare arbitrary
required variables in our config files.


Good answer.


+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova API meeting

2014-03-27 Thread Kenichi Oomichi

Hi,

> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Thursday, March 27, 2014 9:16 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] Nova API meeting
> 
> Hi,
> 
> Just a reminder that the weekly Nova API meeting is being held tomorrow
> Friday UTC .
> 
> We encourage cloud operators and those who use the REST API such as
> SDK developers and others who and are interested in the future of the
> API to participate.
> 
> In other timezones the meeting is at:
> 
> EST 20:00 (Thu)
> Japan 09:00 (Fri)
> China 08:00 (Fri)
> ACDT 10:30 (Fri)
> 
> The proposed agenda and meeting details are here:
> 
> https://wiki.openstack.org/wiki/Meetings/NovaAPI
> 
> Please feel free to add items to the agenda.

Thanks for participating in this meeting [1].

I picked up some tasks in the meeting and I'd like to report their status.

* Important patches list for PoC of v2.1 API
  I have written the list as
  "For PoC, The patches are necessary to be reviewed with priority:"
  on https://etherpad.openstack.org/p/NovaV2OnV3POC

* Check not only response body but also response header with Tempest
  I have created the patch (https://review.openstack.org/#/c/83661/) for it.
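
  To sketch the idea (not the contents of the review above; the endpoint
  and names below are placeholders, using plain requests/unittest):

      import unittest
      import requests

      class ServersHeaderTest(unittest.TestCase):
          # Hypothetical sketch: assert on the response header as well
          # as the body, so header regressions also get caught.
          BASE = 'http://127.0.0.1:8774/v2'  # placeholder endpoint

          def test_list_servers_headers(self):
              resp = requests.get(self.BASE + '/servers')
              self.assertEqual(200, resp.status_code)
              self.assertEqual('application/json',
                               resp.headers.get('Content-Type'))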


Thanks
Ken'ichi Ohmichi

---
[1]: 
http://eavesdrop.openstack.org/meetings/nova_api/2014/nova_api.2014-03-28-00.00.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] all overcloud jobs failing

2014-03-27 Thread Robert Collins
Swift changed the permissions on the swift ring object file which
broke tripleo deployments of swift. (root:root mode 0600 files are not
readable by the 'swift' user). We've got a patch in flight
(https://review.openstack.org/#/c/83645/) that will fix this, but
until that lands please don't spend a lot of time debugging why your
overcloud tests fail :). (Also please don't land any patch that might
affect the undercloud functionality or overcloud until the fix is
landed).
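
For illustration, the shape of the fix (a sketch, not the contents of the
review above; the ring path and mode are assumptions):

    import grp
    import os
    import pwd

    RING = '/etc/swift/object.ring.gz'  # illustrative path

    # root:root mode 0600 locks the 'swift' service user out of the ring;
    # hand the file to swift and allow group read so services can load it.
    uid = pwd.getpwnam('swift').pw_uid
    gid = grp.getgrnam('swift').gr_gid
    os.chown(RING, uid, gid)
    os.chmod(RING, 0o640)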

Btw Swift folk - 'check experimental' runs the tripleo jobs in all
projects, so if you have any concerns about impacting deployments - please
run 'check experimental' before approving things ;)

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev