[openstack-dev] [Openstack] [openstack] [murano] Deployment of Environment gives [yaql.exceptions.YaqlExecutionException]: Unable to run values error

2015-10-11 Thread Sumanth Sathyanarayana
Hi,

I am trying to add a simple app like MySQL into a new environment from the
murano dashboard, and when I deploy the environment I get the error:
"[yaql.exceptions.YaqlExecutionException]: Unable to run values"

I saw that this error/bug was already reported some time back and made sure
that my code has the changes mentioned in:
https://review.openstack.org/#/c/119072/1/meta/io.murano/Classes/resources/Instance.yaml
(Bug #1364446)
and
https://bugs.launchpad.net/murano/+bug/1359225

But I am still getting the error even after restarting the murano-engine
and murano-api services. If anyone has any suggestions on how to deploy a
new environment successfully, it would be very helpful.

Thanks & Best Regards
Sumanth
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 10/12

2015-10-11 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (8:00AM MDT)



1) Mitaka planning

2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] WARNING - breaking backwards compatibility in puppet-keystone

2015-10-11 Thread Gilles Dubreuil


On 08/10/15 07:17, Rich Megginson wrote:
> On 10/07/2015 03:54 PM, Matt Fischer wrote:
>>
>> I thought the agreement was that default would be assumed so that we
>> didn't break backwards compatibility?
>>
> 
> puppet-heat had already started using domains, and had already written
> their code based on the implementation where an unqualified name was
> allowed if it was unique among all domains.  That code will need to
> change to specify the domain.  Any other code that was already using
> domains (which I'm assuming is hardly any, if at all) will also need to
> change.
> 
> 

Patch for puppet-heat: https://review.openstack.org/232366

The indirection patch depends on it and both would have to be merged
together.


>> On Oct 7, 2015 10:35 AM, "Rich Megginson"  wrote:
>> tl;dr You must specify a domain when using domain scoped resources.
>>
>> If you are using domains with puppet-keystone, there is a proposed
>> patch that will break backwards compatibility.
>>
>> https://review.openstack.org/#/c/226624/ Replace indirection calls
>>
>> "Indirection calls are replaced with #fetch_project and
>> #fetch_user methods
>> using python-openstackclient (OSC).
>>
>> Also removes the assumption that if a resource is unique within a
>> domain space
>> then the domain doesn't have to be specified."
>>
>> It is the last part which is causing backwards compatibility to be
>> broken.  This patch requires that a domain scoped resource _must_
>> be qualified with the domain name if _not_ in the 'Default'
>> domain.  Previously, you did not have to qualify a resource name
>> with the domain if the name was unique in _all_ domains.  The
>> problem was this code relied heavily on puppet indirection, and
>> was complex and difficult to maintain.  We removed it in favor of
>> a very simple implementation: if the name is not qualified with a
>> domain, it must be in the 'Default' domain.
>>

Matt,

The current implementation is a *real* pain and is slowing us down.


>> Here is an example from puppet-heat - the 'heat_admin' user has
>> been created in the 'heat_stack' domain previously.
>>
>> ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
>>   'roles' => ['admin'],
>> })
>>
>> This means "assign the user 'heat_admin' in the unspecified domain
>> to have the domain scoped role 'admin' in the 'heat_stack'
>> domain". It is a domain scoped role, not a project scoped role,
>> because in "@::heat_stack" there is no project, only a domain.
>> Note that the domain for the 'heat_admin' user is unspecified. In
>> order to specify the domain you must use
>> 'heat_admin::heat_stack@::heat_stack'. This is the recommended fix
>> - to fully qualify the user + domain.
>>
>> The breakage manifests itself like this, from the logs:
>>
>> 2015-10-02 06:07:39.574 | Debug: Executing '/usr/bin/openstack
>> user show --format shell heat_admin --domain Default'
>> 2015-10-02 06:07:40.505 | Error:
>> 
>> /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat]:
>> Could not evaluate: No user heat_admin with domain  found
>>
>> This is from the keystone_user_role code. Since the role user was
>> specified as 'heat_admin' with no domain, the keystone_user_role
>> code looks for 'heat_admin' in the 'Default' domain and can't find
>> it, and raises an error.
>>
>> Right now, the only way to specify the domain is by adding
>> '::domain_name' to the user name, as
>> 'heat_admin::heat_stack@::heat_stack'.  Sofer is working on a way
>> to add the domain name as a parameter of keystone_user_role -
>> https://review.openstack.org/226919 - so in the near future you
>> will be able to specify the resource like this:
>>
>>
>> ensure_resource('keystone_user_role', 'heat_admin@::heat_stack', {
>>   'roles' => ['admin'],
>>   'user_domain_name' => 'heat_stack',
>> })
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-11 Thread Gilles Dubreuil


On 08/10/15 03:40, Rich Megginson wrote:
> On 10/07/2015 09:08 AM, Sofer Athlan-Guyot wrote:
>> Rich Megginson  writes:
>>
>>> On 10/06/2015 02:36 PM, Sofer Athlan-Guyot wrote:
 Rich Megginson  writes:

> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 30/09/15 03:43, Rich Megginson wrote:
 On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 15/09/15 06:53, Rich Megginson wrote:
 On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> Hi,
>
> Gilles Dubreuil  writes:
>
>> A. The 'composite namevar' approach:
>>
>>    keystone_tenant {'projectX::domainY': ... }
>>
>> B. The 'meaningless name' approach:
>>
>>    keystone_tenant {'myproject': name => 'projectX',
>>      domain => 'domainY',
>>      ...}
>>
>> Notes:
>>   - Actually using both combined should work too, with
>> the domain parameter
>> supposedly overriding the domain part of the name.
>>   - Please look at [1] this for some background
>> between the two
>> approaches:
>>
>> The question
>> -
>> Decide between the two approaches, the one we would like to
>> retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---
>> 1. Domain names are mandatory in every user, group or
>> project.
>> Besides
>> the backward compatibility period mentioned earlier, where
>> no domain
>> means using the default one.
>> 2. Long term impact
>> 3. Both approaches are not completely equivalent, which has
>> different
>> consequences for future usage.
> I can't see why they couldn't be equivalent, but I may be
> missing
> something here.
 I think we could support both.  I don't see it as an either/or
 situation.

>> 4. Being consistent
>> 5. Therefore the community to decide
>>
>> Pros/Cons
>> --
>> A.
> I think it's the B: meaningless approach here.
>
>>Pros
>>  - Easier names
> That's subjective; creating unique and meaningful names
> doesn't look
> easy
> to me.
 The point is that this allows choice - maybe the user
 already has some
 naming scheme, or wants to use a more "natural" meaningful
 name -
 rather
 than being forced into a possibly "awkward" naming scheme
 with "::"

   keystone_user { 'heat domain admin user':
 name => 'admin',
 domain => 'HeatDomain',
 ...
   }

   keystone_user_role {'heat domain admin
 user@::HeatDomain':
 roles => ['admin']
 ...
   }

>>Cons
>>  - Titles have no meaning!
 They have meaning to the user, not necessarily to Puppet.

>>  - Cases where 2 or more resources could exist
 This seems to be the hardest part - I still cannot figure
 out how
 to use
 "compound" names with Puppet.

>>  - More difficult to debug
 More difficult than it is already? :P

>>  - Titles mismatch when listing the resources
>> (self.instances)
>>
>> B.
>>Pros
>>  - Unique titles guaranteed
>>  - No ambiguity between resource found and their
>> title
>>Cons
>>  - More complicated titles
>> My vote
>> 
>> I would love to have approach A for easier names.
>> But I've seen the challenge of maintaining the providers
>> behind the
>> curtains and the confusion it creates with name/titles and
>> when
>> not sure
>> about the domain we're dealing with.
>> Also I believe that supporting self.instances consistently
>> with
>> meaningful names is saner.
>> Therefore I vote B
> +1 for B.
>
> My view is that

Re: [openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-11 Thread Zhi Chang
Thanks for your reply. What should I do if I want to create a new migration 
script?


Thanks
Zhi Chang
 
 
-- Original --
From:  "Vikram Choudhary";
Date:  Mon, Oct 12, 2015 12:22 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [Neutron][db]Neutron db revision fails

 

Hi Zhi,
 
We already have a defect logged for this issue.
 
https://bugs.launchpad.net/neutron/+bug/1503342
 
Thanks
 Vikram
 On Oct 12, 2015 8:10 AM, "Zhi Chang"  wrote:
Hi, everyone.
I installed a devstack from the latest code, but there is an error when I 
create a DB migration script. My migration command is:
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For Test"
The error shows:
  Running revision for neutron ...
  FAILED: Multiple heads are present; please specify the head revision on 
which the new revision should be based, or perform a merge.


Is my method wrong? Could someone help me?


Thx
Zhi Chang


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-11 Thread Vikram Choudhary
Hi Zhi,

We already have a defect logged for this issue.

https://bugs.launchpad.net/neutron/+bug/1503342

Thanks
Vikram
On Oct 12, 2015 8:10 AM, "Zhi Chang"  wrote:

> Hi, everyone.
> I installed a devstack from the latest code, but there is an error when
> I create a DB migration script. My migration command is:
> neutron-db-manage --config-file /etc/neutron/neutron.conf
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For
> Test"
> The error shows:
>   Running revision for neutron ...
>   FAILED: Multiple heads are present; please specify the head revision
> on which the new revision should be based, or perform a merge.
>
> Is my method wrong? Could someone help me?
>
> Thx
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meter-list with multiple filters in simple query is not working

2015-10-11 Thread Srikanth Vavilapalli
Hi

Can anyone please help me with how to specify a simple query with multiple values 
for a query field in a Ceilometer meter-list request? I need to fetch meters 
that belong to more than one project ID. I have tried the following query 
format, but only the last query value (in this case, project_id= 
d41cdd2ade394e599b40b9b50d9cd623) is used for filtering. Any help is 
appreciated here.

curl -H 'X-Auth-Token:' 
'http://localhost:8777/v2/meters?q.field=project_id&q.op=eq&q.value=f28d2e522e1f466a95194c10869acd0c&q.field=project_id&q.op=eq&q.value=d41cdd2ade394e599b40b9b50d9cd623'
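
As an aside, the v2 simple query ANDs all of its constraints, so two eq
filters on the same field can never both match a single meter. A minimal
workaround sketch in Python, issuing one GET per project ID and merging
the results client-side (the token and project IDs are placeholders):

    import requests

    BASE = 'http://localhost:8777/v2/meters'
    TOKEN = '...'  # a valid Keystone token

    project_ids = [
        'f28d2e522e1f466a95194c10869acd0c',
        'd41cdd2ade394e599b40b9b50d9cd623',
    ]

    meters = []
    for pid in project_ids:
        # each q.field/q.op/q.value triple is one simple-query constraint
        resp = requests.get(
            BASE,
            headers={'X-Auth-Token': TOKEN},
            params={'q.field': 'project_id', 'q.op': 'eq', 'q.value': pid},
        )
        resp.raise_for_status()
        meters.extend(resp.json())

    print('fetched %d meters' % len(meters))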

Thanks
Srikanth
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][db]Neutron db revision fails

2015-10-11 Thread Zhi Chang
Hi, everyone.
I installed a devstack from the latest code, but there is an error when I 
create a DB migration script. My migration command is:
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini revision -m "Just For Test"
The error shows:
  Running revision for neutron ...
  FAILED: Multiple heads are present; please specify the head revision on 
which the new revision should be based, or perform a merge.


Is my method wrong? Could someone help me?


Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Introducing Killick PKI

2015-10-11 Thread Adam Young

On 10/11/2015 06:50 PM, Robert Collins wrote:

On 9 October 2015 at 06:47, Adam Young  wrote:

On 10/08/2015 12:50 PM, Chivers, Doug wrote:

Hi All,

At a previous OpenStack Security Project IRC meeting, we briefly discussed
a lightweight traditional PKI using the Anchor validation functionality, for
use in internal deployments, as an alternative to things like MS ADCS. To
take this further, I have drafted a spec, which is in the security-specs
repo, and would appreciate feedback:

https://review.openstack.org/#/c/231955/

Regards

Doug

How is this better than Dogtag/FreeIPA?

DogTag is Tomcat, yeah? That's not exactly trivial to deploy - the spec
specifically calls out the desire to have a low-admin-overhead
solution. Perhaps DogTag/FreeIPA are that in the context of a RHEL
environment? I see that the dogtag-pki packages in Debian are up to
date - perhaps more discussion w/ops is needed?


Tomcat is trivial to deploy; it is in all the major distributions 
already. Dogtag is slightly more complex because it does things right 
WRT security hardening the Tomcat instance.  But the process is 
automated as part of the Dogtag code base.


A better bet is using Dogtag as installed with FreeIPA. It is supported 
in both Debian based and RPM based distributions.  The dev team is 
primarily Red Hat, with an Ubuntu packager dealing with the headaches of 
getting it installed there.  There is someone working on SuSE already as 
well.  FreeIPA gets us Dogtag, as well as Kerberos for Symmetric Key.


We have a demo of Using Kerberos to authenticate and encrypt the 
messaging backend (AMQP 1.0 Driver with Proton) and also for auth on all 
of the Web services.  I'll be one of the people demoing it at the Red 
Hat booth at Tokyo if you want to see it and ask questions directly.


For Self Signed certificates, we can use certmonger and the self-signed 
backend; we should be using Certmonger as the cert management client no 
matter what.  There was a Certmonger-Barbican plugin underway, but I do 
not know the status of it.



Let's not reinvent this; the security and cryptography focused people on 
OpenStack are already spread thin. Let's focus on reusing pre-existing 
solutions.






-Rob




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-11 Thread Tang Chen

To all,

Ok, thanks for the advice.

Thanks. :)

On 10/09/2015 08:27 PM, Jeremy Stanley wrote:

On 2015-10-09 17:00:27 +0800 (+0800), Tang Chen wrote:
[...]

I'm not sure if it is a good idea. Please help to review the
following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

The Infra team doesn't rely on blueprints, and instead uses
http://specs.openstack.org/openstack-infra/infra-specs/ to plan
complex implementation work. I only just discovered that you can
disable blueprints on a project in Launchpad, so I've now done that
for the "openstack-ci" project.

That said, I don't expect the proposed feature would get very far.
We're committed to providing test results for all patchsets when
possible; that doesn't mean you are required to pay attention to the
results or even wait for them to be posted on your change while it's
still in a state of initial flux.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-11 Thread Robert Collins
On 10 October 2015 at 02:58, Cory Benfield  wrote:
>
>> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
>>
>> Cory Benfield  writes:
>> > > The problem that occurs is the result of a few interacting things:
>> > >  - requests has very very specific versions of urllib3 it works with.
>> > > So specific they aren't always released yet.
>> >
>> > This should no longer be true. Our downstream redistributors pointed out to 
>> > us
>> > that this was making their lives harder than they needed to be, so it's 
>> > now
>> > our policy to only update to actual release versions of urllib3.
>>
>> That's great... except that I'm confused as to why requests would continue 
>> to repackage urllib3 if that's the case. Why not just prereq the version of 
>> urllib3 that it needs? I thought the one and only answer to that question 
>> had been so that requests could package non-standard versions.
>>
>
> That is not and was never the only reason for vendoring urllib3. However, and 
> I cannot stress this enough, the decision to vendor urllib3 is *not going to 
> be changed on this thread*. If and when it changes, it will be by consensus 
> decision from the requests maintenance team, which we do not have at this 
> time.
>
> Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
> urllib3 *today* that would not fix the problem. The reason is that we’d 
> specify our urllib3 dependency as: urllib3>=1.12,<1.13. This dependency pin 
> would still cause exactly the problem observed in this thread.

Actually, that would fix the problem (in conjunction with a fix to
https://github.com/pypa/pip/issues/2687 - which is conceptually
trivial once 988 is fixed).

> As you correctly identify in your subsequent email, William, the core problem 
> is mixing of packages from distributions and PyPI. This happens with any tool 
> with external dependencies: if you subsequently install a different version 
> of a dependency using a packaging tool that is not aware of some of the 
> dependency tree, it is entirely plausible that an incompatible version will 
> be installed. It’s not hard to trigger this kind of thing on Ubuntu. IMO, 
> what OpenStack needs is a decision about where it’s getting its packages 
> from, and then to refuse to mix the two.

We can't do that for all our downstreams. Further, Ubuntu preserves
dependency information - I think a key underlying issue is that they
don't fix up the dependency data for requests when they alter it. I've
filed https://bugs.launchpad.net/ubuntu/+source/python-requests/+bug/1505039
to complement the one filed on Fedora earlier in this thread.

*We* have the privilege of working directly with folk like libvirt
that have been problematic in the past and getting those things
addressed, so that we can run in a virtualenv happily. But we can't
insist on that for user X, who wants to use some openstack library that uses
requests but also some other thing (maybe some SWIG binding or
something). So framing this as being driven by the mix is false: the
thing that drives this is a combination - e.g. coming in part from
defects in pip, and the existence of things that can't be installed in
virtualenvs.

Obviously a trivial workaround is to always use virtualenvs and not
system-site-packages.

To sum up the thread, it sounds to me like a viable way-forward is:

 - get distros to fixup their requests Python dependencies (and
hopefully they can update that in stable releases).
 - fix the existing known bugs in pip where such accurate dependencies
are violated by some operations.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Amrith Kumar
Dims,

Not that I know of; I believe that Cassandra works fine with OpenJDK. See [1] 
and [2].

From time to time, there have been questions about the supported JDK for 
Cassandra; a recent one (which I just happen to remember) tries to make the 
case that you must use the Sun/Oracle JDK. This is not a requirement by any means. 
See [3].

To the best of my knowledge, OpenJDK is sufficient.

-amrith

[1] http://wiki.apache.org/cassandra/GettingStarted
[2] http://docs.datastax.com/en/cassandra/2.2/cassandra/install/installDeb.html
[3] 
http://stackoverflow.com/questions/21487354/does-latest-cassandra-support-openjdk


From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: Saturday, October 10, 2015 8:54 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] Scheduler proposal

Not implying cassandra is the right option. Just curious about the assertion.

-- Dims

On Sat, Oct 10, 2015 at 5:53 PM, Davanum Srinivas  wrote:
Thomas,

i am curious as well. AFAIK, cassandra works well with OpenJDK. Can you please 
elaborate what you concerns are for #1?

Thanks,
Dims

On Sat, Oct 10, 2015 at 5:43 PM, Joshua Harlow  wrote:
I'm curious: is there any more detail about #1 below anywhere online?

Does cassandra use some features of the JVM that the openJDK version doesn't 
support? Something else?

-Josh

Thomas Goirand wrote:
On 10/07/2015 07:36 PM, Ed Leafe wrote:
Several months ago I proposed an experiment [0] to see if switching
the data model for the Nova scheduler to use Cassandra as the backend
would be a significant improvement as opposed to the current design

This is probably right. I don't know, I'm not an expert in Nova, or its
scheduler. However, to make it possible for us (ie: downstream
distributions and/or OpenStack users) to use Cassandra, you have to
solve one of the below issues:

1/ Cassandra developers upstream should start caring about OpenJDK, and
make sure that it is also a good platform for it. They should stop
caring only about the Oracle JVM.

... or ...

2/ Oracle should make its JVM free software.

As there is no hope for any of the above, Cassandra is a no-go for
downstream distributions.

So, by all means, propose a new back-end, implement it, profit. But that
back-end cannot be Cassandra the way it is now.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Introducing Killick PKI

2015-10-11 Thread Robert Collins
On 9 October 2015 at 06:47, Adam Young  wrote:
> On 10/08/2015 12:50 PM, Chivers, Doug wrote:
>>
>> Hi All,
>>
>> At a previous OpenStack Security Project IRC meeting, we briefly discussed
>> a lightweight traditional PKI using the Anchor validation functionality, for
>> use in internal deployments, as an alternative to things like MS ADCS. To
>> take this further, I have drafted a spec, which is in the security-specs
>> repo, and would appreciate feedback:
>>
>> https://review.openstack.org/#/c/231955/
>>
>> Regards
>>
>> Doug
>
> How is this better than Dogtag/FreeIPA?

DogTag is Tomcat, yeah? That's not exactly trivial to deploy - the spec
specifically calls out the desire to have a low-admin-overhead
solution. Perhaps DogTag/FreeIPA are that in the context of a RHEL
environment? I see that the dogtag-pki packages in Debian are up to
date - perhaps more discussion w/ops is needed?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-11 Thread Robert Collins
On 10 October 2015 at 12:14, Clint Byrum  wrote:

>> I think the important thing is to focus on what we have in 6 months
>> doesn't break current users / applications, and is incrementally closer
>> to our end game. That's the lens I'm going to keep putting on this one.
>>
>
> Right, so adopting a new catalog type that we see as the future, and
> making it the backend for the current solution, is the route I'd like
> to work toward. If we get the groundwork laid for that, but we don't
> make any user-visible improvement in 6 months, is that a failure or a win?

I don't think it's either from this distance. If we guess right about
what we need, the groundwork helps, and \o/.

If we guess wrong, then we now have new groundwork that still needs
hammering on, and it might be better or worse :/.

If we can be assessing the new thing against our known needs at each
step, that might reduce the risk.

But the biggest risk I see is the one Sean already articulated: we
have only a vague idea about how folk are /actually/ using what we
built, and thus it's very hard to predict 'changing X will break
someone'.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-11 Thread Vladyslav Drok
Oh, right, and now I can vote for John, +2 :)
On Oct 11, 2015 21:59, "John Villalovos"  wrote:

> I also would like to say thank you for the nomination and support. I am
> honored and humbled to be chosen. Thank you to all of you for this vote of
> confidence and I will do my best to maintain it.
>
> And congrats to Vlad, it is well deserved :)
>
> John
>
> On Sun, Oct 11, 2015 at 11:27 AM, Vladyslav Drok 
> wrote:
>
>> Thank you all for the nomination and support! It is a big responsibility
>> to be a member of the core team, and I'll do my best to justify your trust.
>>
>> Vlad
>> On Oct 11, 2015 20:33, "Jim Rollenhagen"  wrote:
>>
>>> On Thu, Oct 08, 2015 at 02:47:01PM -0700, Jim Rollenhagen wrote:
>>> > Hi all,
>>> >
>>> > I've been thinking a lot about Ironic's core reviewer team and how we
>>> might
>>> > make it better.
>>> >
>>> > I'd like to grow the team more through trust and mentoring. We should
>>> be
>>> > able to promote someone to core based on a good knowledge of *some* of
>>> > the code base, and trust them not to +2 things they don't know about.
>>> I'd
>>> > also like to build a culture of mentoring non-cores on how to review,
>>> in
>>> > preparation for adding them to the team. Through these pieces, I'm
>>> hoping
>>> > we can have a few rounds of core additions this cycle.
>>> >
>>> > With that said...
>>> >
>>> > I'd like to nominate Vladyslav Drok (vdrok) for the core team. His
>>> reviews
>>> > have been super high quality, and the quantity is ever-increasing. He's
>>> > also started helping out with some smaller efforts (full tempest, for
>>> > example), and I'd love to see that continue with larger efforts.
>>> >
>>> > I'd also like to nominate John Villalovos (jlvillal). John has been
>>> > reviewing a ton of code and making a real effort to learn everything,
>>> > and keep track of everything going on in the project.
>>> >
>>> > Ironic cores, please reply with your vote; provided feedback is
>>> positive,
>>> > I'd like to make this official next week sometime. Thanks!
>>>
>>> It appears all cores have responded positively; I've added both
>>> Vladyslav and John to the ironic-core group in Gerrit.
>>>
>>> Note that your browser cache may prevent you from seeing the +2 option.
>>> If this happens, just do a hard refresh (ctrl-shift-r) and it should
>>> work itself out.
>>>
>>> Congrats to both of you, and welcome to the team! :)
>>>
>>> // jim
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-11 Thread John Villalovos
I also would like to say thank you for the nomination and support. I am
honored and humbled to be chosen. Thank you to all of you for this vote of
confidence and I will do my best to maintain it.

And congrats to Vlad, it is well deserved :)

John

On Sun, Oct 11, 2015 at 11:27 AM, Vladyslav Drok  wrote:

> Thank you all for the nomination and support! It is a big responsibility
> to be a member of the core team, and I'll do my best to justify your trust.
>
> Vlad
> On Oct 11, 2015 20:33, "Jim Rollenhagen"  wrote:
>
>> On Thu, Oct 08, 2015 at 02:47:01PM -0700, Jim Rollenhagen wrote:
>> > Hi all,
>> >
>> > I've been thinking a lot about Ironic's core reviewer team and how we
>> might
>> > make it better.
>> >
>> > I'd like to grow the team more through trust and mentoring. We should be
>> > able to promote someone to core based on a good knowledge of *some* of
>> > the code base, and trust them not to +2 things they don't know about.
>> I'd
>> > also like to build a culture of mentoring non-cores on how to review, in
>> > preparation for adding them to the team. Through these pieces, I'm
>> hoping
>> > we can have a few rounds of core additions this cycle.
>> >
>> > With that said...
>> >
>> > I'd like to nominate Vladyslav Drok (vdrok) for the core team. His
>> reviews
>> > have been super high quality, and the quantity is ever-increasing. He's
>> > also started helping out with some smaller efforts (full tempest, for
>> > example), and I'd love to see that continue with larger efforts.
>> >
>> > I'd also like to nominate John Villalovos (jlvillal). John has been
>> > reviewing a ton of code and making a real effort to learn everything,
>> > and keep track of everything going on in the project.
>> >
>> > Ironic cores, please reply with your vote; provided feedback is
>> positive,
>> > I'd like to make this official next week sometime. Thanks!
>>
>> It appears all cores have responded positively; I've added both
>> Vladyslav and John to the ironic-core group in Gerrit.
>>
>> Note that your browser cache may prevent you from seeing the +2 option.
>> If this happens, just do a hard refresh (ctrl-shift-r) and it should
>> work itself out.
>>
>> Congrats to both of you, and welcome to the team! :)
>>
>> // jim
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposed solution to "Admin" ness improperly scoped:

2015-10-11 Thread Adam Young

https://bugs.launchpad.net/keystone/+bug/968696/comments/39

1. Add a config value ADMIN_PROJECT_ID
2. In token creation, if ADMIN_PROJECT_ID is not None: only add the 
admin role to the token if the id of the scoped project == ADMIN_PROJECT_ID
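
A rough sketch of what step 2 could look like, with hypothetical names
(this is an illustration, not actual Keystone code):

    # The 'admin' role only survives token creation when the token is
    # scoped to the configured admin project.
    ADMIN_PROJECT_ID = 'deadbeef'  # from config; None disables the check

    def roles_for_token(scoped_project_id, roles):
        if ADMIN_PROJECT_ID is None:
            return roles  # legacy behaviour: admin-ness is global
        if scoped_project_id == ADMIN_PROJECT_ID:
            return roles
        return [r for r in roles if r != 'admin']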


Does this work?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-11 Thread Vladyslav Drok
Thank you all for the nomination and support! It is a big responsibility to
be a member of the core team, and I'll do my best to justify your trust.

Vlad
On Oct 11, 2015 20:33, "Jim Rollenhagen"  wrote:

> On Thu, Oct 08, 2015 at 02:47:01PM -0700, Jim Rollenhagen wrote:
> > Hi all,
> >
> > I've been thinking a lot about Ironic's core reviewer team and how we
> might
> > make it better.
> >
> > I'd like to grow the team more through trust and mentoring. We should be
> > able to promote someone to core based on a good knowledge of *some* of
> > the code base, and trust them not to +2 things they don't know about. I'd
> > also like to build a culture of mentoring non-cores on how to review, in
> > preparation for adding them to the team. Through these pieces, I'm hoping
> > we can have a few rounds of core additions this cycle.
> >
> > With that said...
> >
> > I'd like to nominate Vladyslav Drok (vdrok) for the core team. His
> reviews
> > have been super high quality, and the quantity is ever-increasing. He's
> > also started helping out with some smaller efforts (full tempest, for
> > example), and I'd love to see that continue with larger efforts.
> >
> > I'd also like to nominate John Villalovos (jlvillal). John has been
> > reviewing a ton of code and making a real effort to learn everything,
> > and keep track of everything going on in the project.
> >
> > Ironic cores, please reply with your vote; provided feedback is positive,
> > I'd like to make this official next week sometime. Thanks!
>
> It appears all cores have responded positively; I've added both
> Vladyslav and John to the ironic-core group in Gerrit.
>
> Note that your browser cache may prevent you from seeing the +2 option.
> If this happens, just do a hard refresh (ctrl-shift-r) and it should
> work itself out.
>
> Congrats to both of you, and welcome to the team! :)
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-11 Thread Neil Jerram
Please note that you jumped there from developer-focus to user-focus. Of 
course some users are also developers, and vice versa, but I would expect doc 
focussed on development to be quite different from that focussed on use.

For development doc, I think the Neutron devref is a great example, so you 
might want to be inspired by that.

Regards,
 Neil


From: Adrian Otto
Sent: Thursday, 8 October 2015 21:07
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers


Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake)  wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational and give them the sense that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum recently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory for containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below is the positions of both sides:

Pros:
· It is a good practice to always specify the memory size, because 
containers with unspecified memory size won’t have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-11 Thread Jim Rollenhagen
On Thu, Oct 08, 2015 at 02:47:01PM -0700, Jim Rollenhagen wrote:
> Hi all,
> 
> I've been thinking a lot about Ironic's core reviewer team and how we might
> make it better.
> 
> I'd like to grow the team more through trust and mentoring. We should be
> able to promote someone to core based on a good knowledge of *some* of
> the code base, and trust them not to +2 things they don't know about. I'd
> also like to build a culture of mentoring non-cores on how to review, in
> preparation for adding them to the team. Through these pieces, I'm hoping
> we can have a few rounds of core additions this cycle.
> 
> With that said...
> 
> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
> 
> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
> 
> Ironic cores, please reply with your vote; provided feedback is positive,
> I'd like to make this official next week sometime. Thanks!

It appears all cores have responded positively; I've added both
Vladyslav and John to the ironic-core group in Gerrit.

Note that your browser cache may prevent you from seeing the +2 option.
If this happens, just do a hard refresh (ctrl-shift-r) and it should
work itself out.

Congrats to both of you, and welcome to the team! :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:

2Everybody,

Just curious why we need such complexity.


Let's take a look from the other side:
1) Information about all hosts (even in case of 100k hosts) will be less
than 1 GB
2) Usually servers that run the scheduler service have at least 64GB RAM and
more on the board
3) math.log(100000) < 12 (binary search per rule)
4) We have less than 20 rules for scheduling
5) Information about hosts is updated every 60 seconds (no updates means the
host is dead)


According to this information:
1) We can store everything in the RAM of a single server
2) We can use Python
3) Information about hosts is temporary data and shouldn't be stored in
persistent storage


Simplest architecture to cover this:
1) Single RPC service that has two methods: find_host(rules),
update_host(host, data)
2) Store information about hosts in a dict (host_name -> data)
3) Create a binary tree for each rule and update it on each host update
4) Make an algorithm that will use the binary trees to find a host based on rules
5) Each service like compute node, volume node, or neutron will send
updates about the hosts
that they manage (cross service scheduling)
6) Make an algorithm that will sync host stats in memory between different
schedulers


I'm in, except I think this gets simpler with an intermediary service
like ZK/Consul to keep track of this 1GB of data and replace the need
for 6, and changes the implementation of 5 to "updates its record and
signals its presence".

What you've described is where I'd like to experiment, but I don't want
to reinvent ZK or Consul or etcd when they already exist and do such a
splendid job keeping observers informed of small changes in small data
sets. You still end up with the same in-memory performance, and this is
in line with some published white papers from Google around their use
of Chubby, which is their ZK/Consul.



+1, let's not recreate this; the code @ paste.openstack.org/show/475941/ 
basically does 1-6 in about ~100 lines. It doesn't optimize 
things into a binary tree, but that's easily doable... for all I care, put 
the information received into N trees (perhaps even using 
http://docs.openstack.org/developer/taskflow/types.html#module-taskflow.types.tree) 
and do searches across those as desired (and this is where you can get 
into considering something like numpy to help).
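
For illustration, a toy Python sketch of the host-dict approach above
(hypothetical names, not the code from the paste; it skips the binary
trees and just scans linearly):

    import time

    hosts = {}  # host_name -> stats dict

    def update_host(host, data):
        data['updated_at'] = time.time()
        hosts[host] = data

    def find_host(rules):
        """Return a host satisfying every rule, skipping stale entries."""
        for name, data in hosts.items():
            if time.time() - data['updated_at'] > 60:
                continue  # no update for 60 seconds: treat host as dead
            if all(rule(data) for rule in rules):
                return name
        return None

    update_host('compute-1', {'free_ram_mb': 2048, 'free_disk_gb': 40})
    update_host('compute-2', {'free_ram_mb': 8192, 'free_disk_gb': 10})
    print(find_host([lambda h: h['free_ram_mb'] >= 4096,
                     lambda h: h['free_disk_gb'] >= 5]))  # compute-2

A real implementation would put this behind RPC, add locking, and replace
the linear scan with the per-rule indexes Boris describes.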



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-11 Thread Roman Prykhodchenko
Since there are already two services Fuel Client has to interact with, I filed 
a bug for using service discovery: https://bugs.launchpad.net/fuel/+bug/1504471

> On 9 Oct 2015, at 11:25, Roman Prykhodchenko  wrote:
> 
> In that case I would suggest also using the Keystone service directory for 
> discovering services.
> 
>> On 9 Oct 2015, at 11:00, Evgeniy L  wrote:
>> 
>> >> I’d say even if it will be a separate service it’s better to proxy 
>> >> requests through Nailgun’s API to have a single entry point.
>> 
>> I don't think that application such as Nailgun should be responsible for 
>> proxying
>> requests, we solved similar problem for OSTF with adding proxy rule in Nginx.
>> 
>> Thanks,
>> 
>> On Fri, Oct 9, 2015 at 11:45 AM, Roman Prykhodchenko  wrote:
>> I’d say even if it will be a separate service it’s better to proxy requests 
>> through Nailgun’s API to have a single entry point.
>> 
>>> On 9 Oct 2015, at 10:23, Evgeniy L  wrote:
>>> 
>>> Hi,
>>> 
>>> +1, but I think it's better to spawn separate service, instead of adding it 
>>> to Nailgun.
>>> 
>>> Thanks,
>>> 
>>> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko  wrote:
>>> Folks,
>>> 
>>> it’s time to speak about Fuel Plugins and the way they are managed.
>>> 
>>> Currently we have some methods in Fuel Client that allow to install, remove 
>>> and do some other things to plugins. Everything looks great except that 
>>> functionality requires Fuel Client to be installed on a master node and be 
>>> running under a root user. It’s time for us to grow up and realize that 
>>> nothing can require Fuel Client to be installed on a specific computer and 
>>> of course we cannot require root permissions for any actions.
>>> 
>>> I’d like to move all that code to Nailgun, utilizing mules and hide it 
>>> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and 
>>> I’d like to ask Fuel Enhancements subgroup of developers to take a close 
>>> look at it.
>>> 
>>> 
>>> 1. https://bugs.launchpad.net/fuel/+bug/1504338 
>>> 
>>> 
>>> 
>>> - romcheg
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>>> 
>>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>>> ?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Davanum Srinivas
Thanks Clint!

On Sat, Oct 10, 2015 at 11:53 PM, Clint Byrum  wrote:

> Excerpts from Joshua Harlow's message of 2015-10-10 17:43:40 -0700:
> > I'm curious: is there any more detail about #1 below anywhere online?
> >
> > Does cassandra use some features of the JVM that the openJDK version
> > doesn't support? Something else?
> >
>
> This about sums it up:
>
>
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155
>
> // There is essentially no QA done on OpenJDK builds, and
> // clusters running OpenJDK have seen many heap and load issues.
> logger.warn("OpenJDK is not recommended. Please upgrade to the newest
> Oracle Java release");
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-11 Thread Henrique Truta
I'm in! And hope I can put some other folks in too.

On Sat, Oct 10, 2015 at 12:03, Lance Bragstad 
wrote:

> On Sat, Oct 10, 2015 at 8:07 AM, Boris Bobrov 
> wrote:
>
>> On Saturday 10 October 2015 08:42:10 Shinobu Kinjo wrote:
>> > So what's the procedure?
>>
>> You go to #openstack-keystone on Friday, choose a bug, and talk to one of
>> the
>> core reviewers. After talking to them, fix the bug.
>>
>
> Wash, rinse, repeat? ;)
>
> Looking forward to it, I think this is a much needed pattern!
>
>>
>> > Shinobu
>> >
>> > - Original Message -
>> > From: "Adam Young" 
>> > To: openstack-dev@lists.openstack.org
>> > Sent: Saturday, October 10, 2015 12:11:35 PM
>> > Subject: Re: [openstack-dev] [keystone] Let's get together and fix all
>> the
>> > bugs
>> >
>> > On 10/09/2015 11:04 PM, Chen, Wei D wrote:
>> >
>> >
>> >
>> >
>> >
>> > Great idea! A core reviewer’s advice is definitely important and
>> valuable
>> > before proposing a fix. I was always thinking it will help save us
>> if we
>> > can get some agreement at some point.
>> >
>> >
>> >
>> >
>> >
>> > Best Regards,
>> >
>> > Dave Chen
>> >
>> >
>> >
>> >
>> > From: David Stanek [ mailto:dsta...@dstanek.com ]
>> > Sent: Saturday, October 10, 2015 3:54 AM
>> > To: OpenStack Development Mailing List
>> > Subject: [openstack-dev] [keystone] Let's get together and fix all the
>> bugs
>> >
>> >
>> >
>> >
>> >
>> > I would like to start running a recurring bug squashing day. The general
>> > idea is to get more focus on bugs and stability. You can find the
>> details
>> > here: https://etherpad.openstack.org/p/keystone-office-hours Can we
>> start
>> > with Bug 968696?
>>
>> --
>> With best regards,
>> Boris
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Adam Lawson
I have a quick question: how is Amazon doing this? When choosing the next
path forward that reliably scales, it would be interesting to know how this is
already being done.
On Oct 9, 2015 10:12 AM, "Zane Bitter"  wrote:

> On 08/10/15 21:32, Ian Wells wrote:
>
>>
>> > 2. if many hosts suit the 5 VMs then this is *very* unlucky, because
>> we should be choosing a host at random from the set of
>> suitable hosts and that's a huge coincidence - so this is a tiny
>> corner case that we shouldn't be designing around
>>
>> Here is where we differ in our understanding. With the current
>> system of filters and weighers, 5 schedulers getting requests for
>> identical VMs and having identical information are *expected* to
>> select the same host. It is not a tiny corner case; it is the most
>> likely result for the current system design. By catching this
>> situation early (in the scheduling process) we can avoid multiple
>> RPC round-trips to handle the fail/retry mechanism.
>>
>>
>> And so maybe this would be a different fix - choose, at random, one of
>> the hosts above a weighting threshold, not choose the top host every
>> time? Technically, any host passing the filter is adequate to the task
>> from the perspective of an API user (and they can't prove if they got
> the highest weighting or not), so if we assume weighting is an operator
>> preference, and just weaken it slightly, we'd have a few more options.
>>
>
> The optimal way to do this would be a weighted random selection, where the
> probability of any given host being selected is proportional to its
> weighting. (Obviously this is limited by the accuracy of the weighting
> function in expressing your actual preferences - and it's at least
> conceivable that this could vary with the number of schedulers running.)
>
> In fact, the choice of the name 'weighting' would normally imply that it's
> done this way; hearing that the 'weighting' is actually used as a 'score'
> with the highest one always winning is quite surprising.
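
As a side note, a weighted random pick is only a few lines of Python; a
sketch with hypothetical hosts and weights, where each filtered host is
chosen with probability proportional to its weight:

    import random

    def pick_host(weighted):
        """Pick a host with probability proportional to its weight."""
        total = sum(weighted.values())
        r = random.uniform(0, total)
        upto = 0.0
        for name, weight in weighted.items():
            upto += weight
            if r <= upto:
                return name
        return name  # guard against float rounding at the top end

    # 'host-a' wins about 5/9 of the time instead of always
    print(pick_host({'host-a': 5.0, 'host-b': 3.0, 'host-c': 1.0}))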
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-11 Thread Haomeng, Wang
+2 for both, counts! Vladyslav and John.

On Sun, Oct 11, 2015 at 8:53 AM, Chris K  wrote:

> +1 for both so +2 :)
>
> -Chris
>
> On Fri, Oct 9, 2015 at 4:26 PM, Jay Faulkner  wrote:
>
>> +1
>>
>> 
>> From: Jim Rollenhagen 
>> Sent: Thursday, October 8, 2015 2:47 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [ironic] Nominating two new core reviewers
>>
>> Hi all,
>>
>> I've been thinking a lot about Ironic's core reviewer team and how we
>> might
>> make it better.
>>
>> I'd like to grow the team more through trust and mentoring. We should be
>> able to promote someone to core based on a good knowledge of *some* of
>> the code base, and trust them not to +2 things they don't know about. I'd
>> also like to build a culture of mentoring non-cores on how to review, in
>> preparation for adding them to the team. Through these pieces, I'm hoping
>> we can have a few rounds of core additions this cycle.
>>
>> With that said...
>>
>> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
>> have been super high quality, and the quantity is ever-increasing. He's
>> also started helping out with some smaller efforts (full tempest, for
>> example), and I'd love to see that continue with larger efforts.
>>
>> I'd also like to nominate John Villalovos (jlvillal). John has been
>> reviewing a ton of code and making a real effort to learn everything,
>> and keep track of everything going on in the project.
>>
>> Ironic cores, please reply with your vote; provided feedback is positive,
>> I'd like to make this official next week sometime. Thanks!
>>
>> // jim
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-11 Thread Kevin Benton
For the particular Nova/Neutron example, the Kilo Neutron API should still
be compatible with the calls a Havana Nova makes. I think you will need to
disable the Nova callbacks on the Neutron side, because the Havana version
wasn't expecting them.
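For reference, the Kilo-era neutron.conf options for those callbacks look
like the following; treat the exact option names as an assumption and
double-check them against your release:

    [DEFAULT]
    # Stop Neutron from calling back into Nova about port changes; a
    # Havana Nova has no external-events API to receive them.
    notify_nova_on_port_status_changes = False
    notify_nova_on_port_data_changes = False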

I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
but I haven't tried a gap that big.

Cheers,
Kevin Benton

On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure  wrote:

> Hi all,
>
> As you know, OpenStack projects are developed separately, and
> theoretically people can create networks with a Kilo-version Neutron for
> a Havana-version Nova.
>
> Has anyone tried it?
> Do we have any pages showing which combinations can work together?
>
> Thanks.
> Germy
> .
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-11 Thread Kevin Benton
You can have a periodic task that asks your backend if it needs sync info.
Another option is to define a vendor-specific extension that makes it easy
to retrieve all info in one call via the HTTP API.
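A minimal sketch of the periodic-task option using oslo.service's looping
call; the backend client and its needs_sync()/apply_full_state() methods
are hypothetical names for illustration:

    from oslo_service import loopingcall

    def _maybe_resync(backend, controller):
        # Ask the backend whether it lost state (e.g. after a restart)
        # and, if so, push the controller's full view down in one shot.
        if backend.needs_sync():
            backend.apply_full_state(controller.get_full_state())

    def start_resync_task(backend, controller, interval=30):
        task = loopingcall.FixedIntervalLoopingCall(_maybe_resync,
                                                    backend, controller)
        task.start(interval=interval)
        return task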

On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure  wrote:

> Hi all,
>
> After restarting, agents reload data from Neutron via RPC. What about a
> third-party controller? It can only re-gather data via the NBI, right?
>
> Is it possible to provide some mechanism for those controllers and agents
> to sync data, or is there something else I missed?
>
> Thanks
> Germy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Boris Pavlovic
Clint,

There are many pros and cons to both approaches.

Reinventing the wheel (in this case it's quite a simple task) gives more
flexibility and doesn't require ZK/Consul (which will simplify integrating
it with the current system).

Using ZK/Consul for a POC may save a lot of time, and we would also be
delegating part of the work to other communities (which may lead to better
supported and better working code).

By the way, some of the parts (like the sync of schedulers) are stuck in
review in the Nova project.

Basically, for a POC we can use anything, and using ZK/Consul may reduce
the resources needed for development, which is good.

Best regards,
Boris Pavlovic

On Sun, Oct 11, 2015 at 12:23 AM, Clint Byrum  wrote:

> Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:
> > 2Everybody,
> >
> > Just curious why we need such complexity.
> >
> >
> > Let's take a look from the other side:
> > 1) Information about all hosts (even in case of 100k hosts) will be less
> > than 1 GB
> > 2) Usually servers that run the scheduler service have at least 64GB of
> > RAM and more on the board
> > 3) math.log(10) < 12  (binary search per rule)
> > 4) We have fewer than 20 rules for scheduling
> > 5) Information about hosts is updated every 60 seconds (no updates means
> > the host is dead)
> >
> >
> > According to this information:
> > 1) We can store everything in the RAM of a single server
> > 2) We can use Python
> > 3) Information about hosts is temporary data and shouldn't be stored in
> > persistent storage
> >
> >
> > Simplest architecture to cover this:
> > 1) A single RPC service that has two methods: find_host(rules),
> > update_host(host, data)
> > 2) Store information about hosts as a dict (host_name->data)
> > 3) Create a binary tree for each rule and update it on each host update
> > 4) Make an algorithm that will use the binary trees to find a host based
> > on rules
> > 5) Each service like a compute node, volume node, or neutron will send
> > updates about the hosts
> >    that they manage (cross-service scheduling)
> > 6) Make an algorithm that will sync host stats in memory between
> > different schedulers
>
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its presence".
>
> What you've described is where I'd like to experiment, but I don't want
> to reinvent ZK or Consul or etcd when they already exist and do such a
> splendid job keeping observers informed of small changes in small data
> sets. You still end up with the same in-memory performance, and this is
> in line with some published white papers from Google around their use
> of Chubby, which is their ZK/Consul.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Geoff O'Callaghan
On 11/10/2015 6:25 PM, "Clint Byrum"  wrote:
>
> Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:
> > 2Everybody,
> >
> > Just curious why we need such complexity.
> >
> >
> > Let's take a look from the other side:
> > 1) Information about all hosts (even in case of 100k hosts) will be less
> > than 1 GB
> > 2) Usually servers that run the scheduler service have at least 64GB of
> > RAM and more on the board
> > 3) math.log(10) < 12  (binary search per rule)
> > 4) We have fewer than 20 rules for scheduling
> > 5) Information about hosts is updated every 60 seconds (no updates means
> > the host is dead)

[Snip]

>
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its presence".

I have to agree: something like ZK looks like it'd make things simpler, and
these are in general well-proven technologies (esp. ZK). They handle the
centralized coordination well, and all the hard resiliency work is thrown in.

Geoff
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2015-10-11 00:02:39 -0700:
> 2Everybody,
> 
> Just curious why we need such complexity.
> 
> 
> Let's take a look from the other side:
> 1) Information about all hosts (even in case of 100k hosts) will be less
> than 1 GB
> 2) Usually servers that run the scheduler service have at least 64GB of
> RAM and more on the board
> 3) math.log(10) < 12  (binary search per rule)
> 4) We have fewer than 20 rules for scheduling
> 5) Information about hosts is updated every 60 seconds (no updates means
> the host is dead)
> 
> 
> According to this information:
> 1) We can store everything in the RAM of a single server
> 2) We can use Python
> 3) Information about hosts is temporary data and shouldn't be stored in
> persistent storage
> 
> 
> Simplest architecture to cover this:
> 1) A single RPC service that has two methods: find_host(rules),
> update_host(host, data)
> 2) Store information about hosts as a dict (host_name->data)
> 3) Create a binary tree for each rule and update it on each host update
> 4) Make an algorithm that will use the binary trees to find a host based
> on rules
> 5) Each service like a compute node, volume node, or neutron will send
> updates about the hosts
>    that they manage (cross-service scheduling)
> 6) Make an algorithm that will sync host stats in memory between
> different schedulers

I'm in, except I think this gets simpler with an intermediary service
like ZK/Consul to keep track of this 1GB of data and replace the need
for 6, and changes the implementation of 5 to "updates its record and
signals its presence".

What you've described is where I'd like to experiment, but I don't want
to reinvent ZK or Consul or etcd when they already exist and do such a
splendid job keeping observers informed of small changes in small data
sets. You still end up with the same in-memory performance, and this is
in line with some published white papers from Google around their use
of Chubby, which is their ZK/Consul.
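To make that concrete, a rough sketch with the kazoo ZooKeeper client; the
/hosts layout, the ensemble address, and the JSON payload are assumptions
for illustration:

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()
    zk.ensure_path('/hosts')

    def publish_host(name, stats):
        # An ephemeral node vanishes when its session dies, which replaces
        # the "no updates means the host is dead" timeout.
        path = '/hosts/' + name
        data = json.dumps(stats).encode('utf-8')
        if zk.exists(path):
            zk.set(path, data)
        else:
            zk.create(path, data, ephemeral=True)

    @zk.ChildrenWatch('/hosts')
    def on_membership_change(children):
        # Every scheduler gets this callback and can refresh its
        # in-memory view of the live hosts.
        print('live hosts: %s' % sorted(children))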

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-11 Thread Boris Pavlovic
2Everybody,

Just curious why we need such complexity.


Let's take a look from the other side:
1) Information about all hosts (even in case of 100k hosts) will be less
than 1 GB
2) Usually servers that run the scheduler service have at least 64GB of
RAM and more on the board
3) math.log(10) < 12  (binary search per rule)
4) We have fewer than 20 rules for scheduling
5) Information about hosts is updated every 60 seconds (no updates means
the host is dead)


According to this information:
1) We can store everything in the RAM of a single server
2) We can use Python
3) Information about hosts is temporary data and shouldn't be stored in
persistent storage


Simplest architecture to cover this:
1) A single RPC service that has two methods: find_host(rules),
update_host(host, data)
2) Store information about hosts as a dict (host_name->data)
3) Create a binary tree for each rule and update it on each host update
4) Make an algorithm that will use the binary trees to find a host based
on rules
5) Each service like a compute node, volume node, or neutron will send
updates about the hosts
   that they manage (cross-service scheduling)
6) Make an algorithm that will sync host stats in memory between
different schedulers
7) ...
8) PROFIT!
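For concreteness, a minimal single-process sketch of steps 1-4; the rule
names, the ">= minimum" matching semantics, and the naive index rebuild
(standing in for a real binary tree) are all assumptions:

    import bisect
    import threading

    class InMemoryScheduler(object):
        def __init__(self, rules):
            self.rules = rules       # e.g. ['free_ram_mb', 'free_disk_gb']
            self.hosts = {}          # host_name -> stats dict
            self.indexes = dict((r, []) for r in rules)
            self.lock = threading.Lock()

        def update_host(self, host, data):
            with self.lock:
                self.hosts[host] = data
                for rule in self.rules:
                    # Naive full rebuild; a real balanced tree would
                    # update in place in O(log n).
                    self.indexes[rule] = sorted(
                        (d[rule], h) for h, d in self.hosts.items())

        def find_host(self, requirements):
            # requirements maps rule -> minimum, e.g. {'free_ram_mb': 2048}
            with self.lock:
                candidates = None
                for rule, minimum in requirements.items():
                    index = self.indexes[rule]
                    # Binary search for the first entry satisfying the rule.
                    pos = bisect.bisect_left(index, (minimum, ''))
                    matching = set(h for _, h in index[pos:])
                    candidates = (matching if candidates is None
                                  else candidates & matching)
                    if not candidates:
                        return None
                return next(iter(candidates)) if candidates else None

After update_host('node-1', {'free_ram_mb': 4096, 'free_disk_gb': 80}),
find_host({'free_ram_mb': 2048}) returns a matching host name or None.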

It's:
1) Simple to manage
2) Simple to understand
3) Simple to calculate scalability limits
4) Simple to integrate into the current OpenStack architecture


As a future bonus, we can implement scheduler-per-AZ functionality, so each
scheduler will store information only about its AZ, and separate AZs can
have their own RabbitMQ servers, for example, which will allow us to get
horizontal scalability in terms of AZs.


So do we really need Cassandra, Mongo, ... and other web-scale solutions for
such a simple task?


Best regards,
Boris Pavlovic

On Sat, Oct 10, 2015 at 11:19 PM, Clint Byrum  wrote:

> Excerpts from Chris Friesen's message of 2015-10-09 23:16:43 -0700:
> > On 10/09/2015 07:29 PM, Clint Byrum wrote:
> >
> > > Even if you figured out how to make the in-memory scheduler crazy fast,
> > > there's still value in concurrency for other reasons. No matter how
> > > fast you make the scheduler, you'll be a slave to the response time of
> > > a single scheduling request. If you take 1ms to schedule each node
> > > (including just reading the request and pushing out your scheduling
> > > result!) you will never achieve greater than 1000/s. 1ms is way lower
> > > than it's going to take just to shove a tiny message into RabbitMQ or
> > > even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> > > a disaster for a large, busy cloud.
> > >
> > > If, however, you can have 20 schedulers that all take 10ms on average,
> > > and have the occasional lock contention for a resource counter
> > > resulting in 100ms, now you're at 2000/s minus the lock contention
> > > rate. This
> > > strategy would scale better with the number of compute nodes, since
> > > more nodes means more distinct locks, so you can scale out the number
> > > of running servers separate from the number of scheduling requests.
> >
> > As far as I can see, moving to an in-memory scheduler is essentially
> > orthogonal to allowing multiple schedulers to run concurrently.  We can
> > do both.
> >
>
> Agreed, and I want to make sure we continue to be able to run concurrent
> schedulers.
>
> Going in memory won't reduce contention for the same resources. So it
> will definitely schedule faster, but it may also serialize with concurrent
> schedulers sooner, and thus turn into a situation where scaling out more
> nodes means the same, or even less throughput.
>
> Keep in mind, I actually think we give our users _WAY_ too much power
> over our clouds, and I actually think we should simply have flavor based
> scheduling and let compute nodes grab node reservation requests directly
> out of flavor based queues based on their own current observation of
> their ability to service it.
>
> But I understand that there are quite a few clouds now that have been
> given shiny dynamic scheduling tools and now we have to engineer for
> those.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev