Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Serg Melikyan
Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way of managing physical resources.

The flavor concept was introduced in clouds to solve the issue of effective
physical resource usage: 8 GB of physical memory can be effectively split
into two m2.my_small instances with 4 GB RAM each, or into eight m1.my_tiny
instances with 1 GB each.

Let's consider an example where your cloud has only 2 compute nodes with 8 GB
RAM each:
vm1 (m1.my_tiny) - node1
vm2 (m1.my_tiny) - node1
vm3 (m2.my_small) - node1
vm4 (m2.my_small) - node2 (since we could not spawn it on node1)

This leaves the ability to spawn a predictable 2 VMs with the m1.my_tiny
flavor on node1, and 4 VMs with m1.my_tiny or 1 VM with m2.my_small on node2.
If the user can create any flavor he likes, he can create a flavor like
mx.my_flavor with 3 GB of RAM that cannot be spawned on node1 at all, and
that leaves 1 GB under-used on node2 when spawned there. If you multiply the
number of nodes to 100 or even 1000 (like some OpenStack deployments), you
will have very significant memory under-usage.

Do we really need the ability to allocate an arbitrary amount of physical
resources to a VM? If yes, I suggest two ways to define physical resource
allocation for VMs: with flavors and dynamically, leaving it to cloud
admins/owners to decide which way they prefer their cloud resources to be
allocated. Small clouds may prefer flavors, and big clouds may use dynamic
resource allocation (the impact from under-usage will not be so big). As a
transition plan, project-scoped flavors may do the job.


On Fri, May 2, 2014 at 5:35 PM, Dimitri Mazmanov 
dimitri.mazma...@ericsson.com wrote:

  This topic has already been discussed last year and a use-case was
 described (see [1]).
 Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
 Several issues have been brought up after posting my implementation for
 review [3], all related to how flavors are defined/implemented in nova:

- Only admin tenants can manage flavors, due to the default admin rule
in policy.json.
- Per-stack flavor creation will pollute the global flavor list.
- If two stacks create a flavor with the same name, a collision will
occur, which will lead to the following error: ERROR (Conflict): Flavor
with name dupflavor already exists. (HTTP 409)

 These and the ones described by Steven Hardy in [4] are related to the
 flavor scoping in Nova.

  Is there any plan/discussion to allow project-scoped flavors in nova,
 similar to Steven's proposal for role-based scoping (see [4])?
 Currently the only purpose of the is_public flag is to hide the flavor
 from users without the admin role, but it’s still visible in all projects.
 Any plan to change this?

  Having project-scoped flavors will rid us of the identified issues, and
 will allow a more fine-grained way of managing physical resources.

  Dimitri

  [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html
 [2] https://wiki.openstack.org/wiki/Heat/Blueprints/dynamic-flavors
 [3] https://review.openstack.org/#/c/90029
 [4]
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836


[openstack-dev] [Openstack-dev][qa] Running tempest tests against existing openstack installation

2014-05-05 Thread Swapnil Kulkarni
Hello,

I am trying to run tempest tests against an existing OpenStack deployment.
I have configured tempest.conf with the environment details. But when I
execute run_tempest.sh, it does not run any tests.

However, when I run 'testr run', the tests fail with *NoSuchOptError: no such
option: IMAGE_ID*.

The trace has been added at [1]

If anyone has tried that before, any pointers are much appreciated.

[1] http://paste.openstack.org/show/78885/

Best Regards,
Swapnil Kulkarni
irc : coolsvap
cools...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*It's better to SHARE*


Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

2014-05-05 Thread Koderer, Marc
All right, let’s meet on Monday ;)
We can discuss the details after the QA meeting this week.

From: Frittoli, Andrea (HP Cloud) [mailto:fritt...@hp.com]
Sent: Thursday, 1 May 2014 18:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

I will arrive late on Sunday.
If you meet on Monday I’ll see you there ^_^

From: Miguel Lavalle [mailto:mig...@mlavalle.com]
Sent: 01 May 2014 17:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] QA Summit Meet-up Atlanta

I arrive Sunday at 3:30pm. Either Sunday or Monday is fine with me. Looking
forward to it :-)

On Wed, Apr 30, 2014 at 5:11 AM, Koderer, Marc
m.kode...@telekom.de wrote:
Hi folks,

Last time, we met for a short meet-up one day before the Summit started.
Should we do the same this time?

I will arrive Saturday to recover from the jet lag ;) So Sunday 11th would be 
fine for me.

Regards,
Marc


Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-05 Thread Thomas Goirand
On 05/03/2014 12:48 AM, Mark T. Voelker wrote:
 I think it's not just devstack/grenade that would benefit from this.
 Variance in the plugin configuration patterns is a fairly common
 complaint I hear from folks deploying OpenStack, and going to a single
 config would likely make that easier.  I think it probably benefits
 distributions too.  There have been several issues with distro init
 scripts not properly supplying all the necessary --config-file
 arguments to Neutron services because they're not uniformly patterned.

With my OpenStack Debian package maintainer hat on...

TL;DR: Moving to a single configuration file is unimportant to me; what I
would like are configuration files that I can easily parse.

Longer version:

The issue hasn't been multiple configuration files. I've dealt with that by
parsing the core_plugin value and doing what should be done in the init
script depending on it.

The main issue with the current configuration files is that they are *not*
parser-friendly. E.g., there is stuff like:

# Example:
# directive = value
#directive=None

A script cannot tell the difference between the first and the second
instance of the directive, and it's like this all over the place in the
Neutron configuration files. So I use something like this:

sed -i "s/[ \t#]*directive[ \t]*=.*/directive = newvalue/" \
    /etc/neutron/neutron.conf

which I use extensively (well, it's a little bit more complicated, as what I
actually do doesn't show up in /proc: for security purposes, directive
values shouldn't show up in a ps, but you get the point...).

Even worse, there are some huge example snippets which I had to delete (for
the same parseability reasons). These belong in the openstack-manuals, or in
the sphinx docs, *not* in configuration files.

So for me, parseability should be addressed with top priority. I am
currently patching the Neutron configuration files because of this, and I
have to constantly rebase my Debian-specific patches; that's boring and
error-prone. I'd very much prefer generated configuration files, like other
projects have, instead (in a single or in multiple configuration files: I
don't care, since I already have the logic implemented...).

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-05-05 Thread Illia Khudoshyn
I can't speak for others, but I'm personally not really happy with the
Charles & Dima approach. As Charles pointed out (or hinted), QUORUM during a
write may be equal to both EVENTUAL and STRONG, depending on the consistency
level chosen for a later read. The same goes for QUORUM on reads. I'm afraid
that this way MDB will become way too complex, and it would take more effort
to predict its behaviour from the user's point of view.
I'd rather prefer it to be as straightforward as possible -- take full
control and responsibility or follow reasonable defaults.

And, please note, we're aiming at multi-DC support sooner or later. And for
that we'll need more flexible consistency control, so a binary option would
not be enough.

Thanks


On Thu, May 1, 2014 at 12:10 AM, Charles Wang charles_w...@symantec.com wrote:

 Discussed further with Dima. Our consensus is to have the WRITE consistency
 level defined in the table schema, and READ consistency control at the data
 item level. This should satisfy our use cases for now.

 For example, a user-defined table has Eventual Consistency (Quorum). After
 the user writes data using the consistency level defined in the table
 schema, when the user tries to read the data back asking for Strong
 consistency, MagnetoDB can do a READ with Eventual Consistency (Quorum) to
 satisfy the user's Strong consistency requirement.

 Thanks,

 Charles

 From: Charles Wang charles_w...@symantec.com
 Date: Wednesday, April 30, 2014 at 10:19 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Illia Khudoshyn 
 ikhudos...@mirantis.com
 Cc: Keith Newstadt keith_newst...@symantec.com

 Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of
 concept

 Sorry for being late to the party. Since we follow DynamoDB mostly, it
 makes sense not to deviate too much from DynamoDB's consistency model.

 From what I read about DynamoDB, READ consistency is defined to be either
 strong consistency or eventual consistency.

  ConsistentRead: Boolean
  (http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#DDB-Query-request-ConsistentRead)

  "If set to true, then the operation uses strongly consistent reads;
  otherwise, eventually consistent reads are used.

  Strongly consistent reads are not supported on global secondary indexes.
  If you query a global secondary index with ConsistentRead set to true, you
  will receive an error message.

  Type: Boolean

  Required: No"

  http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html

 WRITE consistency is not clearly defined anywhere. From Werner Vogels'
 description, it seems that writes are replicated across availability
 zones/data centers synchronously. I guess that inside a data center, writes
 are replicated asynchronously. And the API doesn't allow the user to
 specify a WRITE consistency level.

 http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html

 Considering the above factors and Cassandra's capabilities, I propose we
 use the following model.

 READ:

- Strong consistency (synchronously replicate to all, maps to
Cassandra READ All consistency level)
- Eventual consistency (quorum read, maps to Cassandra READ Quorum)
- Weak consistency (not in DynamoDB, maps to Cassandra READ ONE)

 WRITE:

- Strong consistency (synchronously replicate to all, maps to
Cassandra WRITE All consistency level)
- Eventual consistency (quorum write, maps to Cassandra WRITE Quorum)
- Weak consistency (not in DynamoDB, maps to Cassandra WRITE ANY)

 For conditional writes (conditional putItem/deleteItem), only strong and
 eventual consistency should be supported. A rough sketch of the mapping is
 below.
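 For illustration, here is the proposed mapping as a minimal Python sketch,
 assuming the DataStax cassandra-driver; the MagnetoDB level names are just
 the labels from the lists above, not an existing API:

    from cassandra import ConsistencyLevel

    READ_CONSISTENCY = {
        'STRONG': ConsistencyLevel.ALL,       # synchronous read from all
        'EVENTUAL': ConsistencyLevel.QUORUM,  # quorum read
        'WEAK': ConsistencyLevel.ONE,         # not in DynamoDB
    }

    WRITE_CONSISTENCY = {
        'STRONG': ConsistencyLevel.ALL,       # replicate to all
        'EVENTUAL': ConsistencyLevel.QUORUM,  # quorum write
        'WEAK': ConsistencyLevel.ANY,         # not in DynamoDB
    }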

 Thoughts?

 Thanks,

 Charles

 From: Dmitriy Ukhlov dukh...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, April 29, 2014 at 10:43 AM
 To: Illia Khudoshyn ikhudos...@mirantis.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of
 concept

 Hi Illia,
 WEAK/QUORUM instead of true/false is OK with me.

 But we have also STRONG.

 What does STRONG mean? In the current concept we are using QUORUM and
 saying that it is strong. I guess that is confusing (at least for me) and
 can have different behavior for different backends.

 I believe that from the user's point of view only 4 use cases exist: write
 and read, each with consistency or not.
 For example, if we use QUORUM for a write, what is the use case for a read
 with the STRONG level? A QUORUM read is enough to get consistent data. Or
 if we use WEAK (ONE) for a write, what is the use case for reading from
 QUORUM? We would need to read from ALL.

 But we can use different kinds of backend abilities to implement
 consistent and inconsistent operations. To 

Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Jan Provaznik

On 04/28/2014 10:05 PM, Jay Dobies wrote:

We may want to consider making use of Heat outputs for this.


This was my first thought as well. stack-show returns a JSON document
that would be easy enough to parse through instead of having it in two
places.


Rather than assuming hard-coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Fn::Join:
      - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # that's a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.



The output endpoint list would be quite long; it would have to contain the 
full list of all possible services (even if a service is not included in 
an image), plus an SSL URI for each port.


It might be better to get the haproxy ports from the template params (which 
should be available as stack.params), define only the virtual IP in 
stack.outputs, and then build the endpoint URI in os-cloud-config, along the 
lines of the sketch below. I'm not sure whether we would have to change 
os-cloud-config for LBaaS or not. My first thought was that the VIP and port 
are the only bits which should vary, so the resulting URI should be the same 
in both cases.




2) do the Keystone setup from inside the Overcloud:
Extend the keystone element; the steps done in the init-keystone script would
be done in keystone's os-refresh-config script. This script would have to
be called on only one of the nodes in the cluster and only once (though we
already do a similar check for other services - mysql/rabbitmq - so I don't
think this is a problem). Then this script can easily get the list of
haproxy ports from the Heat metadata. This looks like the more attractive
option to me - it eliminates an extra post-create config step.


Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.



Related to Keystone setup is also the plan around keys/cert setup
described here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html

But I think this plan would remain the same no matter which of the options
above is used.


What do you think?

Jan







[openstack-dev] [nova] about pci device filter

2014-05-05 Thread Bohai (ricky)
Hi, stackers:

Now there is a default whitelist filter for PCI devices.
But maybe it's not enough in some scenarios.

Maybe it would be better if we provided a mechanism to specify a custom
filter.

For example: a user could write a special filter, then specify which filter
to use in the configuration files, roughly as sketched below.
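A minimal sketch of what such a pluggable filter could look like; this
interface and the config option are hypothetical, nothing like it exists in
nova today:

    class PciDeviceFilter(object):
        """Hypothetical base class for pluggable PCI device filters."""

        def device_matches(self, dev):
            # return True if the PCI device may be exposed to guests
            raise NotImplementedError()

    class VendorFilter(PciDeviceFilter):
        """Example: only pass through devices from one vendor."""

        def __init__(self, vendor_id):
            self.vendor_id = vendor_id

        def device_matches(self, dev):
            return dev['vendor_id'] == self.vendor_id

    # hypothetical config option:
    # pci_device_filter = mymodule.VendorFilter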

Any advice?

Best regards to you.
Ricky




Re: [openstack-dev] [Neutron] Run migrations for service plugins every time

2014-05-05 Thread Anna Kamyshnikova
Salvatore,

Thanks for your answer. I was on holiday, so I have only just now read your
response.

As I understand from your comments, there is no reason for me to continue
working on making migrations run unconditionally for service plugins.
Making all migrations run unconditionally based on the current state of the
database will be a rather long process, because I'll have to find out which
tables are in the database for each plugin and then create a migration that
will recreate them if they don't exist.

While I was working on this change, https://review.openstack.org/87935, I
researched different ways to check the database state without breaking
offline migrations. The only option I found is to use op.execute() and the
SQL statement 'CREATE TABLE IF NOT EXISTS' (a sketch is below). The problem
with that is that if a table uses an enum type, it can't be done for
PostgreSQL. I described this problem in an earlier mail with the topic 'Fix
migration that breaks Grenade jobs'.
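For illustration, a minimal alembic migration sketch of that guarded
approach; the table and columns are made up:

    from alembic import op

    def upgrade():
        # guarded creation works for MySQL and sqlite even offline, but
        # PostgreSQL's CREATE TYPE has no IF NOT EXISTS, so a table with
        # an enum column cannot be guarded this way
        op.execute(
            "CREATE TABLE IF NOT EXISTS routers ("
            "    id VARCHAR(36) NOT NULL PRIMARY KEY,"
            "    name VARCHAR(255)"
            ")")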

If we don't care about offline migrations, this could be done by rerunning
some migrations or recreating some tables, if running the whole migration
chain is not reliable.

Regards,
Ann


On Wed, Apr 30, 2014 at 6:03 PM, Salvatore Orlando sorla...@nicira.com wrote:

 Anna,

 It's good to see progress being made on this blueprint.
 I have some comments inline.

 Also, I would recommend keeping in mind the comments Mark had regarding
 migration generation and plugin configuration in his post on the email
 thread I started.

 Salvatore


 On 30 April 2014 14:16, Anna Kamyshnikova akamyshnik...@mirantis.com wrote:

 Hi everyone!

 I'm working on blueprint
 https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations.
 This topic was discussed earlier by Salvatore in the ML thread Migrations,
 service plugins and Grenade jobs. I'm researching how to make migrations
 for service plugins run unconditionally. In fact, it is enough to change
 the should_run method for migrations - to make it return True if the
 migration is for a service plugin:
 https://review.openstack.org/#/c/91323/1/neutron/db/migration/__init__.py


 I think running migrations unconditionally for service plugins is an
 advancement, but not yet a final solution.
 I would insist on the path you've been pursuing of running all migrations
 unconditionally. We should strive to solve the issues you have found so far.


 It is almost working, except for conflicts of the VPNaaS service plugin
 with the Brocade and Nuage plugins: http://paste.openstack.org/show/77946/.
 1) The migration for the Brocade plugin fails because
 2c4af419145b_l3_support doesn't run (it adds the necessary table
 'routers'). This can be fixed by adding the Brocade plugin to the
 migration_for_plugins list in 2c4af419145b_l3_support.

 2) The migration for the Nuage plugin fails because
 e766b19a3bb_nuage_initial runs after 52ff27f7567a_support_for_vpnaas, and
 there is no necessary 'routers' table. This can be fixed by adding the
 Nuage plugin to the migration_for_plugins list in 2c4af419145b_l3_support
 and removing the creation of these tables in e766b19a3bb_nuage_initial.

 I noticed that too in the past.
 However, there are two aspects to this fix. The one you're mentioning fixes
 migrations for new deployments; on the other hand, migrations should be
 fixed for existing deployments as well. This kind of problem seems to me to
 concern more the work on removing schema auto-generation. Indeed, the root
 problem here is probably that the l3 schemas for these two plugins are
 being created only because of schema auto-generation.

 I also researched the opportunity to make all migrations run
 unconditionally, but this could not be done, as there are going to be a lot
 of conflicts: http://paste.openstack.org/show/77902/. Mostly these
 conflicts occur in the initial migrations for core plugins, as they create
 basic tables that were already created before. For example, mlnx_initial
 creates the tables securitygroups, securitygroupportbindings,
 networkdhcpagentbindings and portbindingports that were already created in
 the 3cb5d900c5de_securitygroups, 4692d074d587_agent_scheduler and
 176a85fc7d79_add_portbindings_db migrations. Besides, I was thinking about
 'making migrations smart' - making them check whether all necessary
 migrations have been run or not - but the main problem with that is that we
 need to store information about applied migrations, so this should have
 been planned from the very beginning.


 I agree that just changing migrations to run unconditionally is not a
 viable solution.
 Rather than changing the existing migration path, I was thinking more
 about adding corrective unconditional migrations to make the database
 state the same regardless of the plugin configuration. The difficult part
 here is that these migrations would need to be smart enough to understand
 whether a previous migration was executed or skipped; this might also
 break offline migrations (but perhaps that would be tolerable).

 I look forward to any suggestions or thoughts on this topic.

 Regards, Ann


Re: [openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec

2014-05-05 Thread Jiří Stránský

On 30.4.2014 09:02, Steve Kowalik wrote:

Hi,

I'm looking at moving init-keystone from tripleo-incubator to
os-cloud-config, and I've drafted a spec at
https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config .

Feedback welcome.

Cheers,



Hi Steve,

that looks good :) Just to clarify -- should the long-term plan for 
Keystone PKI initialization still be to generate the keys and certs on the 
undercloud and push them to the overcloud via Heat? (Likewise for 
seed-undercloud.)


Thanks

Jirka



[openstack-dev] [infra] Jenkins jobs are always running for abandoned patches..

2014-05-05 Thread wu jiang
Hi all,

I found that Jenkins has some problems with abandoned patches.
The Jenkins jobs keep running even after the jobs have succeeded or
failed.

I've checked some patches [1][2][3] that exhibit the issue. It looks like if
you add a comment to an abandoned patch, Jenkins will start the check jobs
again and again.

I've filed the issue on Launchpad [4].

If you have any idea about the issue, please share some tips.
Thanks.


wingwj

---

[1] https://review.openstack.org/#/c/75802/
[2] https://review.openstack.org/#/c/74917/
[3] https://review.openstack.org/#/c/91606/

[4] https://bugs.launchpad.net/openstack-ci/+bug/1316064


[openstack-dev] [mistral] Community meeting reminder - 05/05/2014

2014-05-05 Thread Renat Akhmerov
Hi,

This is a reminder about the community meeting we’ll have today at 16.00 UTC at 
#openstack-meeting.

Agenda:
- Review action items
- Current status (quickly by team members)
- Summit preparations
- Further plans
- Open discussion

The agenda can also be found at
https://wiki.openstack.org/wiki/Meetings/MistralAgenda, along with links to
the previous meeting minutes and logs.

Thanks!

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [Ceilometer] Question of necessary queries for Event implemented on HBase

2014-05-05 Thread Igor Degtiarov
Hello Dmitriy!

Of course event_id alone could identify a row. Actually, the row structure
is now the question to discuss.
The main feature of HBase is that data is stored in sorted rows.

To filter events by time interval, we would need to scan all stored data if
the rowkey starts with event_id. Instead of adding the timestamp to the
rowkey, we could store timestamps as an additional column value or qualifier
inside the row.

Another option is to start the rowkey with timestamp + event_id.
In this case events will be stored ordered by timestamp in HBase.

Which type of rowkey is more efficient is an open question; the sketch below
shows the two layouts side by side.
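For illustration, a minimal sketch of the two rowkey layouts being compared;
the exact encoding (microseconds, 8-byte big-endian) is an assumption for
the example, not ceilometer code:

    import struct
    import time

    MAX_TS = 2 ** 63 - 1

    def reversed_ts(ts=None):
        # store (MAX - ts) so that newer events sort first in HBase's
        # lexicographic byte order
        ts = int((ts or time.time()) * 1e6)
        return struct.pack('>Q', MAX_TS - ts)

    def rowkey_by_event(event_id, ts=None):
        # fast point lookups by event id, full scan for time ranges
        return event_id.encode('utf-8') + reversed_ts(ts)

    def rowkey_by_time(event_id, ts=None):
        # fast time-range scans, full scan for lookups by event id
        return reversed_ts(ts) + event_id.encode('utf-8')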


On Wed, Apr 30, 2014 at 4:36 PM, Dmitriy Ukhlov dukh...@mirantis.com wrote:

 Hello Igor!

 Could you please clarify why we need an event_id + reversed_timestamp
 row key?
 Doesn't event_id identify the row by itself?



 On Tue, Apr 29, 2014 at 11:08 AM, Igor Degtiarov
 idegtia...@mirantis.com wrote:

 Hi, everybody.

 I’ve started to work on implementation of Event in ceilometer on HBase
 backend in the edges of blueprint
 https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-feature

 By now Events has been implemented only in SQL.

 You know, using SQL  we can build any query we need.

 With HBase it is another story. The data structure is built basing on
 queries we need, so

 to construct the structure of Event on HBase, it is very important to
 answer the question what queries should be implemented to retrieve events
 from storage.

 I registered bp
 https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-structurefor 
 discussion Events structure in HBase.

 For today, a preliminary structure of Events in HBase has been prepared:

 table: Events

 - rowkey: event_id + reversed_timestamp

   - column: event_type = string with a description of the event

   - [list of columns: trait_id + trait_desc + trait_type = trait_data]

 The proposed structure will support queries by:

 - event generation time

 - event id

 - event type

 - trait: id, description, type

 Any thoughts about additional queries that are necessary for Events?

 I’ll publish the patch with current implementation soon.

 Sincerely,
 Igor Degtiarov





 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.



[openstack-dev] q-agt error

2014-05-05 Thread abhishek jain
Hi all


I'm getting the following error in the q-agt service on a compute node while
booting a VM onto it from the controller node:

2014-05-04 08:56:53.508 13530 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap',
'/etc/neutron/rootwrap.conf', 'ip6tables-restore', '-c']
Exit code: 2
Stdout: ''
Stderr: libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could
not open moddep file
'/lib/modules/3.8.13-rt9-QorIQ-SDK-V1.4/modules.dep.bin'\nip6tables-restore
v1.4.18: ip6tables-restore: unable to initialize table 'filter'\n\nError
occurred at line: 2\nTry `ip6tables-restore -h' or 'ip6tables-restore
--help' for more information.\n execute
/opt/stack/neutron/neutron/agent/linux/utils.py:73
2014-05-04 08:56:53.510 13530 DEBUG neutron.openstack.common.lockutils [-]
Released file lock iptables at
/opt/stack/data/neutron/lock/neutron-iptables for method _apply... inner
/opt/stack/neutron/neutron/openstack/common/lockutils.py:239
2014-05-04 08:56:53.512 13530 ERROR
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error in agent
event loop
2014-05-04 08:56:53.512 13530 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent
call last):
2014-05-04 08:56:53.512 13530 TRACE
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File
/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
line 1082, in rpc_loop

Please help me regarding this.

Thanks
Abhishek Jain


Re: [openstack-dev] [Horizon] Project list with turned-on policy in Keystone

2014-05-05 Thread Yaguang Tang
I think this is a common requirement for users who want to use Keystone v3.
I filed a blueprint for it:
https://blueprints.launchpad.net/horizon/+spec/domain-based-rbac. A possible
workaround outside Horizon is sketched below.
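For reference, a minimal sketch of the usual workaround outside Horizon:
authenticate with a domain-scoped token via python-keystoneclient v3, which
is what identity:list_projects expects under the v3 policy (the credentials
and URL here are placeholders):

    from keystoneclient.v3 import client

    ks = client.Client(auth_url='http://keystone.example.com:5000/v3',
                       username='admin',
                       password='secret',
                       user_domain_name='Default',
                       # domain scope instead of project scope
                       domain_name='Default')
    for project in ks.projects.list():
        print(project.name)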


2014-04-24 23:30 GMT+08:00 Roman Bodnarchuk roman.bodnarc...@indigitus.ch:

  Hello,

  As far as I can tell, Horizon uses python-openstack-auth to authenticate
 users. At the same time, the openstack_auth.KeystoneBackend.authenticate
 method generates only project-scoped tokens.

 After enabling policy checks in Keystone, I tried to view the list of all
 projects on the Admin panel and got 'Error: Unauthorized: Unable to
 retrieve project list.' on the dashboard, and the following in the
 Keystone log:

 enforce identity:list_projects: {'project_id':
 u'80d91944f5af4c53ad5df4e386376e08', 'group_ids': [], 'user_id':
 u'ed14fd91122b47d2a6f575499ed0c4bb', 'roles': [u'admin']}
 ...
 WARNING keystone.common.wsgi [-] You are not authorized to perform the
 requested action, identity:list_projects.

 This is expected, since the user's token is scoped to a project, and no
 access to domain-wide resources should be allowed.

 How can this be worked around? Is it possible to use policy checks on the
 Keystone side while working with Horizon?

 I am using stable/icehouse and Keystone API v3.

 Thanks,
 Roman





-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-05-05 Thread Oleg Bondarev
On Wed, Apr 30, 2014 at 7:40 PM, Salvatore Orlando sorla...@nicira.com wrote:

 On 30 April 2014 17:28, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote:

 I've tried updating the interface while running an ssh session from guest
 to host, and it was dropped :(


 Please allow me to tell you I told you so! ;)


 The drop is not great, but ok if the instance is still able to be
 communicated to after the arp tables refresh and the connection is
 re-established.

 If the drop can't be avoided, there is comfort in knowing that there is
 no need for an instance reboot, suspend/resume or any manual actions.


 I agree with Jesse's point. I think it will be reasonable to say that the
 migration will trigger a connection reset for all existing TCP connections.
 However, what exactly are the changes we're making on the data plane? Are
 you testing by migrating a VIF from a linux bridge instance to an Open
 vSwitch one?


Actually I was testing a VIF move from the nova-net bridge br100 to
neutron's bridge (qbrXXX), which is kind of the final step in instance
migration as I see it.
Another problem that I faced is clearing the network filters which nova-net
configures on the VIF, but this seems to be fixed in libvirt now:
https://www.redhat.com/archives/libvirt-users/2014-May/msg2.html

Thanks,
Oleg


 Salvatore










[openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2

2014-05-05 Thread Ilkka Tengvall

Hi,

I would like to start a discussion about an ML2 driver for the VMware 
distributed virtual switch (dvSwitch) for Neutron. There is a new 
blueprint made by Sami Mäkinen (sjm) at 
https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch.


The driver is described, and the code is publicly available (hosted on 
GitHub); see the blueprint: 
https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch


We would like to get the driver through the contribution process, whatever 
that exactly means :)


The original problem this driver solves is the following:

We were running the VMware virtualization platform in our data center 
before OpenStack, and we will keep doing so due to existing services. We 
have also been running OpenStack for a while. Now we want to get the most 
out of both by combining customer networks on both platforms using provider 
networks. The problem is that the networks need two separate managers, 
Neutron and VMware. There were no OpenStack tools to attach guests on the 
VMware side to OpenStack provider networks during instance creation.


Now we are putting our VMware under the control of OpenStack. We want to 
have one master to control the networks: Neutron. We implemented the new 
ML2 driver to do just that. It is capable of joining machines created in 
vSphere to the same provider networks that OpenStack uses, via dvSwitch 
port groups.



I just wanted to open the discussion; for the technical details, please 
contact our experts on the CC list:


Sami J. Mäkinen
Jussi Sorjonen (freenode: mieleton)


BR,

Ilkka Tengvall
 Advisory Consultant, Cloud Architecture
 email:  ilkka.tengv...@cybercom.com
 mobile: +358408443462
 freenode: ikke-t
 web:http://cybercom.com - http://cybercom.fi



[openstack-dev] OpenStack access using Java SDK

2014-05-05 Thread Vikas Kokare
I am looking for a standard, seamless way to access OpenStack APIs , most
likely using the Java SDKs that are summarized at
https://wiki.openstack.org/wiki/SDKs#Software_Development_Kits

There are various distributions of OpenStack available today. Is it
possible, using these SDKs, to write an application that works well across
distributions?

If the answer to the above is yes, then how does one evaluate the pros and
cons of these SDKs?

-Vikas


[openstack-dev] [infra] elements vs. openstack-infra puppet for CI infra nodes

2014-05-05 Thread Dan Prince
I originally sent this to TripleO but perhaps [infra] would have been a better 
choice.

The short version is I'd like to run a lightweight (unofficial) mirror for 
Fedora in infra:

 https://review.openstack.org/#/c/90875/

And then in the TripleO CI racks we can run local Squid caches using something 
like this:

 https://review.openstack.org/#/c/91161/

We have to do something, because we see quite a few job failures related to 
unstable mirrors. If using local Squid caches doesn't work out, then perhaps 
we will have to run local mirrors in each TripleO CI rack, but I would like 
to avoid that if possible, as it is more resource-heavy - especially because 
we would need to do the same thing in each rack for both Fedora and Ubuntu 
(both of which run in each TripleO CI test rack).

Dan

- Forwarded Message -
From: Dan Prince dpri...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, April 29, 2014 4:10:30 PM
Subject: [openstack-dev] [TripleO] elements vs. openstack-infra puppet for  
CI infra nodes

A bit of TripleO CI background:

 At this point we've got two public CI overclouds which we can use to run 
TripleO check jobs for CI. Things are evolving nicely, and we've recently 
been putting some effort into making things run faster by adding local 
distro and PyPI mirrors, etc. This is good in that it should help us improve 
both the stability of test results and runtimes.



This brings up the question of how we are going to manage our TripleO 
overcloud CI resources for things like distro mirrors, caches, test 
environment brokers, etc.:

1) Do we use and/or create openstack-infra/config modules for everything we 
need and manage it the normal OpenStack infrastructure way, using Puppet, 
etc.?

2) Or do we take the TripleO-oriented approach and use image elements and 
Heat templates to manage things?

Which of these two options do we prefer given that we eventually want TripleO 
to be gating? And who is responsible for maintaining them (TripleO CD Admins or 
OpenStack Infra)?



If it helps to narrow the focus of this thread, I do want to stress that I'm 
only really talking about the public CI (overcloud) resources. What happens 
underneath this layer is already managed via the TripleO tooling itself.

Regardless of what we use, I'd like to be able to maintain feature parity 
with regards to setting up these CI cloud resources across providers (HP and 
Red Hat at this point). As is, I fear we've got a growing list of CI 
infrastructure that isn't easily reproducible across the racks.

Dan



Re: [openstack-dev] [Openstack-dev][qa] Running tempest tests against existing openstack installation

2014-05-05 Thread David Kranz

On 05/05/2014 02:26 AM, Swapnil Kulkarni wrote:

Hello,

I am trying to run tempest tests against an existing OpenStack 
deployment. I have configured tempest.conf with the environment 
details. But when I execute run_tempest.sh, it does not run any tests.


However, when I run 'testr run', the tests fail with *NoSuchOptError: no 
such option: IMAGE_ID*.


This must be coming from not having changed this value in the [compute] 
section of tempest.conf:


#image_ref={$IMAGE_ID}

See etc/tempest.conf.sample.
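For example, the relevant bit of tempest.conf would need to look something
like this; the UUIDs here are placeholders for images that actually exist in
your Glance:

    [compute]
    # any bootable image in your cloud, e.g. from `nova image-list`
    image_ref = 11111111-2222-3333-4444-555555555555
    image_ref_alt = 66666666-7777-8888-9999-000000000000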

 -David



The trace has been added at [1]

If anyone has tried that before, any pointers are much appreciated.

[1] http://paste.openstack.org/show/78885/

Best Regards,
Swapnil Kulkarni
irc : coolsvap
cools...@gmail.com
+91-87960 10622(c)
http://in.linkedin.com/in/coolsvap
*It's better to SHARE*




[openstack-dev] [Nova] [VMware]How can I put my proposals for VMware drivers in the VMware driver session?

2014-05-05 Thread Sheng Bo Hou
Hi VMware folks,

How can I put my proposals for VMware drivers in the VMware driver 
session? Is there someone from VMware responsible for receiving proposals 
for this session?

I submitted two topics: Resource Pool Support for VMware VCenter: 
http://summit.openstack.org/cfp/details/349, 
https://blueprints.launchpad.net/nova/+spec/vmware-resource-pool-enablement,

and Reinforcement with Native Server Snapshot: 
http://summit.openstack.org/cfp/details/351, 
https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot.

The community decided to put the first into the VMware driver session, but 
the second was refused. However, I hope both of them can be put into the 
VMware driver session. Can someone from VMware help me put them into the 
VMware driver session for discussion at the design summit?
Thank you very much.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193


Re: [openstack-dev] [Nova] [VMware]How can I put my proposals for VMware drivers in the VMware driver session?

2014-05-05 Thread Michael Still
The way to go is probably to put them into the etherpad for that
session, which is at
https://etherpad.openstack.org/p/juno-nova-vmware-driver-roadmap

Cheers,
Michael

On Mon, May 5, 2014 at 11:32 PM, Sheng Bo Hou sb...@cn.ibm.com wrote:
 Hi VMware folks,

 How can I put my proposals for VMware drivers in the VMware driver session?
 Is there someone from VMware responsible for receiving proposals for this
 session?

 I submitted two topics: Resource Pool Support for VMware VCenter:
 http://summit.openstack.org/cfp/details/349,
 https://blueprints.launchpad.net/nova/+spec/vmware-resource-pool-enablement,

 and Reinforcement with Native Server Snapshot:
 http://summit.openstack.org/cfp/details/351,
 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot.

 The community decided to put the first into the VMware driver session, but
 the second was refused. However, I hope both of them can be put into the
 VMware driver session. Can someone from VMware help me put them into the
 VMware driver session for discussion at the design summit?
 Thank you very much.

 Best wishes,
 Vincent Hou (侯胜博)

 Staff Software Engineer, Open Standards and Open Source Team, Emerging
 Technology Institute, IBM China Software Development Lab

 Tel: 86-10-82450778 Fax: 86-10-82453660
 Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com
 Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West
 Road, Haidian District, Beijing, P.R.C.100193
 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193




-- 
Rackspace Australia



Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

2014-05-05 Thread Samuel Bercovici
Hi Stephen,

For Icehouse we did not go into L7 content modification, as the general 
feeling was that it might not be exactly the same as content switching and 
we wanted to tackle content switching first.

L7 content switching and L7 content modification are different; I prefer to 
be explicit and declarative and use different objects.
This will make the API more readable.
What do you think?

I plan to look deeper into L7 content modification later this week to propose a 
list of capabilities.

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Saturday, May 03, 2014 1:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

Hi Adam and Samuel!

Thanks for the questions / comments! Reactions in-line:

On Thu, May 1, 2014 at 8:14 PM, Adam Harwell
adam.harw...@rackspace.com wrote:
Stephen, the way I understood your API proposal, I thought you could 
essentially combine L7Rules in an L7Policy and have multiple L7Policies, 
implying that the L7Rules would use AND-style combination, while the 
L7Policies themselves would use OR-style combination (I think I said that 
right; it almost seems like a tongue-twister while I'm running on pure 
caffeine). So, if I said:

Well, my goal wasn't to create a whole DSL for this (or anything much 
resembling this) because:

  1.  Real-world usage of the L7 stuff is generally pretty primitive. Most 
L7Policies will consist of 1 rule, and those that consist of more than one 
rule almost always just need a simple AND. This is based off the usage 
data collected here (which admittedly only has Blue Box's data-- because 
apparently nobody else even offers L7 right now?)  
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
  2.  I was trying to keep things as simple as possible to make it easier for 
load balancer vendors to support. (That is to say, I wouldn't expect all 
vendors to provide the same kind of functionality as HAProxy ACLs, for example.)
Having said this, I think yours and Sam's clarification that different 
L7Policies can be used to effectively OR conditions together makes sense, 
and therefore assuming all the Rules in a given policy are ANDed together 
makes sense.

If we do this, it therefore also might make sense to expose other criteria on 
which L7Rules can be made, like HTTP method used for the request and whatnot.

Also, should we introduce a flag to say whether a given Rule's condition 
should be negated? (e.g. HTTP method is GET and URL is *not* /api) This 
would get us closer to being able to use more sophisticated logic for L7 
routing; a sketch of the combined semantics follows below.

Does anyone foresee the need to offer this kind of functionality?
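For illustration, a minimal Python sketch (not proposed API code) of the
semantics discussed so far - rules within a policy AND together, policies OR
in 'order', and each rule can optionally be negated:

    # each rule is {'match': callable(request) -> bool, 'negate': bool}
    def policy_matches(policy, request):
        return all(rule['match'](request) != rule.get('negate', False)
                   for rule in policy['rules'])

    # each policy is {'order': int, 'rules': [...], 'pool': pool};
    # the first matching policy in order wins
    def route(policies, request, default_pool):
        for policy in sorted(policies, key=lambda p: p['order']):
            if policy_matches(policy, request):
                return policy['pool']
        return default_pool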

 * The policy { rules: [ rule1: match path REGEX .*index.*, rule2: match path 
REGEX hello/.* ] } directs to Pool A
 * The policy { rules: [ rule1: match hostname EQ mysite.com ] } directs to 
Pool B
then order would matter for the policies themselves. In this case, if they ran 
in the order I listed, it would match mysite.com/hello/index.htm and direct it 
to Pool A, while mysite.com/hello/nope.htm would not match BOTH rules in the 
first policy, and would be caught by the second policy, directing it to Pool 
B. If I had wanted the first policy to use OR logic, I would have just 
specified two separate policies both pointing to Pool A:

Clarification on this: There is an 'order' attribute to L7Policies. :) But 
again, if all the L7Rules in a given policy are ANDed together, then order 
doesn't matter within the rules that make up an L7Policy.

 * The policy { rules: [ rule1: match path REGEX .*index.* ] } directs to 
Pool A
 * The policy { rules: [ rule1: match path REGEX hello/.* ] } directs to 
Pool A
 * The policy { rules: [ rule1: match hostname EQ mysite.com ] } directs to 
Pool B
In that case, it would match mysite.com/hello/nope.htm on the second 
policy, still directing to Pool A.
In both cases, mysite.com/hi/ would only be caught by the last policy, 
directing to Pool B.
Maybe I was making some crazy jumps of logic, and that's not how you intended 
it? That said, even if that wasn't your intention, could it work that way? It 
seems like that allows a decent amount of options… :)

 --Adam



On Fri, May 2, 2014 at 4:59 AM, Samuel Bercovici
samu...@radware.com wrote:
Adam, you are correct to show why order matters in policies.
It is a good point to consider AND between rules.
If you really want to OR rules you can use different policies.

Stephen, the need for order contradicts using content modification with the 
same API, since for modification you would really want to evaluate the whole 
list.

Hi Sam, I was a bit confused on this point, since we don't often see users 
using both 

Re: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

2014-05-05 Thread Samuel Bercovici
Hi,

I will not freeze the document, to allow people to work on requirements 
which are not tenant-facing (e.g. operator requirements, etc.).
I think that we have enough use cases for tenant-facing capabilities to 
reflect the most common use cases.
I am in the process of creating a survey in SurveyMonkey for tenant-facing 
use cases and hope to send it to the ML ASAP.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Thursday, May 01, 2014 8:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: [openstack-dev] [Neutron][LBaaS]User Stories and sruvey

Hi Everyone!

To assist in evaluating the use cases that matter, and since we now have ~45 
use cases, I would like to propose conducting a survey using something like 
SurveyMonkey.
The idea is to have a non-anonymous survey listing the use cases and to ask 
you to identify and vote.
Then we will publish the results and can prioritize based on them.

To do so in a timely manner, I would like to freeze the document for editing 
(allowing only comments) by Monday, May 5th, 08:00 UTC and publish the survey 
link to the ML ASAP after that.

Please let me know if this is acceptable.

Regards,
-Sam.





Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-05 Thread Jay Pipes

On 05/04/2014 01:13 PM, John Dickinson wrote:

To add some color, Swift supports both single conf files and conf.d 
directory-based configs. See 
http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-configuration.


+1


The single config file pattern is quite useful for simpler configurations, 
but the directory-based ones become especially useful when looking at cluster 
configuration management tools - stuff that auto-generates and composes config 
settings (i.e. non-hand-curated configs). For example, the conf.d configs can 
support each middleware config or background daemon process in a separate 
file, or server settings in one file and common logging settings in another.


Also +1


(Also, to answer before it's asked [but I don't want to derail the current 
thread], I'd be happy to look at oslo config parsing if it supports the same 
functionality.)


And a final +1. :)

-jay



Re: [openstack-dev] SSL in Common client

2014-05-05 Thread Chmouel Boudjnah
Rob Crittenden rcrit...@redhat.com writes:

 From what I found nothing has changed either upstream or in swift.

If you are asking about the ability to disable SSL compression, it is up
to the OS to provide that, so nothing was added when we changed
swiftclient to requests.

Most modern OSes have SSL compression by default, only Debian stable was
still enabling it.

The only feature that was left behind when we ported swiftclient to
requests was support for Expect (100-Continue), which is referenced
upstream in this bug:

https://github.com/kennethreitz/requests/issues/713

and it does not seem to be trivial to add to requests.

Chmouel.



Re: [openstack-dev] [Horizon] Accessibility

2014-05-05 Thread Liz Blanchard

On Apr 24, 2014, at 11:06 AM, Douglas Fish drf...@us.ibm.com wrote:

 
 I've proposed a design session for accessibility for the Juno summit, and
 I'd like to get a discussion started on the work that needs to be done.
 (Thanks Julie P for pushing that!)
 
 I've started to add information to the wiki that Joonwon Lee created:
 https://wiki.openstack.org/wiki/Horizon/WebAccessibility
 I think that's going to be a good place to gather material.  I'd like to
 see additional information added about what tools we can use to verify
 accessibility.  I'd like to try to summarize the WCAG guidelines into some
 broad areas where Horizon needs work.  I expect to add a checklist of
 accessibility-related items to consider while reviewing code.
 
 Joonwon (or anyone else with an interest in accessibility):  It would be
 great if you could re-inspect the icehouse level code and create bugs for
 any issues that remain.  I'll do the same for issues that I am aware of.
 In each bug we should include a link to the WCAG guideline that has been
 violated.  Also, we should describe the testing technique:  How was this
 bug discovered?  How could a developer (or reviewer) determine the issue
 has actually been fixed?  We should probably put that in each bug at first,
 but as we go they should be gathered up into the wiki page.
 
 There are some broad areas of need that might justify blueprints (I'm
 thinking of WAI-ARIA tagging, and making sure external UI widgets that we
 pull in are accessible).
 
 Any suggestions on how to best share info, or organize bugs and blueprints
 are welcome!

Doug,

Thanks very much for bringing this up as a topic and pushing it forward at 
Summit. I think this will be a great step forward in maturity for Horizon as 
we continue to build better accessibility support. I wanted to share an 
article that I found helpful when it comes to testing sites for accessibility 
for colorblindness:
http://css-tricks.com/accessibility-basics-testing-your-page-for-color-blindness/

I think we can make some very small/easy changes to the color scheme to make 
sure there is enough contrast to address all types of colorblindness.

Looking forward to this session,
Liz

 
 Doug Fish
 IBM STG Cloud Solution Development
 T/L 553-6879, External Phone 507-253-6879
 
 


Re: [openstack-dev] SSL in Common client

2014-05-05 Thread Chmouel Boudjnah
Chmouel Boudjnah chmo...@enovance.com writes:

 Most modern OSes have SSL compression by default, only Debian stable was
 still enabling it.

I mean have SSL compression *disabled* by default.

Chmouel.



Re: [openstack-dev] [neutron] explanations on the current state of config file handling

2014-05-05 Thread Doug Hellmann
On Mon, May 5, 2014 at 10:18 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 05/04/2014 01:13 PM, John Dickinson wrote:

 To add some color, Swift supports both single conf files and conf.d
 directory-based configs. See
 http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-configuration.


 +1


 The single config file pattern is quite useful for simpler
 configurations, but the directory-based ones become especially useful when
 looking at cluster configuration management tools - stuff that
 auto-generates and composes config settings (i.e. non-hand-curated
 configs). For example, the conf.d configs can support each middleware
 config or background daemon process in a separate file, or server settings
 in one file and common logging settings in another.


 Also +1


 (Also, to answer before it's asked [but I don't want to derail the current
 thread], I'd be happy to look at oslo config parsing if it supports the same
 functionality.)


 And a final +1. :)

oslo.config includes a --config-dir option for this. When
--config-dir dirname is used with an app, oslo.config parses
dirname/*.conf and adds the settings to the configuration being
built.
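For example, a minimal sketch of how an application picks that up; the
option name and paths are illustrative, not Neutron's actual setup:

    from oslo.config import cfg

    cfg.CONF.register_opts([cfg.StrOpt('core_plugin')])
    # parses neutron.conf first, then every conf.d/*.conf on top of it
    cfg.CONF(['--config-file', '/etc/neutron/neutron.conf',
              '--config-dir', '/etc/neutron/conf.d'])
    print(cfg.CONF.core_plugin)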

Doug


 -jay




Re: [openstack-dev] SSL in Common client

2014-05-05 Thread Rob Crittenden

Chmouel Boudjnah wrote:

Rob Crittenden rcrit...@redhat.com writes:


 From what I found nothing has changed either upstream or in swift.


If you are asking about the ability to disable SSL compression it is up
to the OS to provide that so nothing was added when we changed
swiftclient to requests.

Most modern OSes have SSL compression by default, only Debian stable was
still enabling it.

The only feature that was left behind when we ported swiftclient to
requests was support for Expect (100-Continue), which is referenced
upstream in this bug:
https://github.com/kennethreitz/requests/issues/713

and it does not seem to be trivial to add to requests


Yes, that was my take as well. I didn't mean to come across as 
criticizing the swift client. I was just trying to outline the current 
state of things.


rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] No IRC meeting

2014-05-05 Thread Collins, Sean
We will not be meeting tomorrow, May 6th 2014, since the summit
is next week.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] plan for moving to using oslo.db

2014-05-05 Thread Matt Riedemann
Just wanted to get some thoughts down while they are in my head this 
morning.


Oslo DB is now a library [1].  I'm trying to figure out what the steps 
are to get Nova using it so we can rip out the synced common 
db code.


1. Looks like it's not in global-requirements yet [2], so that's 
probably a first step.


2. We'll want to cut a sqlalchemy-migrate release once this patch is 
merged [3]. This moves a decent chunk of unique constraint patch code 
out of oslo and into sqlalchemy-migrate where it belongs so we can run 
unit tests with sqlite to drop unique constraints.
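
For context, the operation at issue looks roughly like this with 
sqlalchemy-migrate (a sketch; 'engine', the table, and the constraint 
name are assumed, and exact signatures may differ by version):

    from migrate import UniqueConstraint
    from sqlalchemy import MetaData, Table

    meta = MetaData(bind=engine)
    instances = Table('instances', meta, autoload=True)

    # sqlite has no ALTER TABLE support for dropping constraints, which
    # is why the workaround code existed in oslo in the first place
    UniqueConstraint('name', table=instances,
                     name='uniq_instances_name').drop()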


3. Rip this [4] out of oslo.db once migrate is updated and released.

4. Replace nova.openstack.common.db with oslo.db.

5. ???

6. Profit!

Did I miss anything?
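
For what it's worth, step 4 should be largely a mechanical import swap, 
something like the following (module paths assumed for illustration):

    # before: incubator copy synced into the nova tree
    from nova.openstack.common.db.sqlalchemy import session as db_session

    # after: the released library
    from oslo.db.sqlalchemy import session as db_session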

[1] http://git.openstack.org/cgit/openstack/oslo.db/
[2] 
http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt

[3] https://review.openstack.org/#/c/87773/
[4] https://review.openstack.org/#/c/31016/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Proposal: add local hacking for oslo-incubator

2014-05-05 Thread ChangBo Guo
Hi Stackers,

While reviewing code I keep finding the same common style problems, so I
think these checks would be nice to move into local hacking checks. Local
hacking checks can ease the reviewer burden.

The idea comes from a keystone blueprint [1].
Hacking is a great start at automating checks for common style issues.
There are still lots of things that it is not checking for that it probably
should. This is the list from [1][2] that would be nice to move into an
automated check:

- use import style 'from openstack.common.* import', not 'import
openstack.common.*'
- assertIsNone should be used when using None with assertEqual
- _() should not be used in debug log statements
- do not use 'assertTrue(isinstance(a, b))'
- do not use 'assertEqual(type(A), B)'

[1]
https://blueprints.launchpad.net/keystone/+spec/more-code-style-automation
[2] https://github.com/openstack/nova/blob/master/nova/hacking/checks.py
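
For reference, a local hacking check is just a small function registered
with flake8. A minimal sketch in the style of nova's checks (the check
code and message below are made up):

    import re

    assert_true_isinstance_re = re.compile(r"assertTrue\(isinstance\(")

    def check_assert_true_isinstance(logical_line):
        """O316 - assertTrue(isinstance(a, b)) should not be used."""
        if assert_true_isinstance_re.search(logical_line):
            yield (0, "O316: use assertIsInstance(a, b) instead of "
                      "assertTrue(isinstance(a, b))")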

I just registered a blueprint for this in [3] and submit first patch in [4].

[3] https://blueprints.launchpad.net/oslo/+spec/oslo-local-hacking

[4] https://review.openstack.org/#/c/87832/
Should we add local hacking checks to oslo-incubator? If yes, what other
checks should be added?
Your comments are appreciated :-)

-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Solly Ross
Just to expand a bit on this, flavors are supposed to be the lowest level of 
granularity,
and the general idea is to round to the nearest flavor (so if you have a VM 
that requires
3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make any 
sense to create
flavors on the fly; you should have enough flavors to suit your needs, but I 
can't really
think of a situation where you'd need so much granularity that you'd need to 
create new
flavors on the fly (assuming, of course, that you planned ahead and created 
enough flavors
that you don't have VMs that are extremely over-allocated).
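
To make "round to the nearest flavor" concrete, the selection logic is 
trivial; a toy sketch with invented flavor values (ram_gb, vcpus, disk_gb):

    flavors = {'m1.my_tiny': (1, 1, 10), 'm2.my_small': (4, 2, 40),
               'm3.my_medium': (8, 4, 80)}

    def nearest_flavor(ram, vcpus, disk):
        # smallest flavor that satisfies the request on every axis
        fits = {n: f for n, f in flavors.items()
                if f[0] >= ram and f[1] >= vcpus and f[2] >= disk}
        return min(fits, key=lambda n: fits[n])

    # a VM needing 3GB of RAM rounds up to the 4GB flavor
    print(nearest_flavor(3, 1, 10))  # m2.my_small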

Best Regards,
Solly Ross

- Original Message -
From: Serg Melikyan smelik...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 2:18:21 AM
Subject: Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through 
Heat (pt.2)

 Having project-scoped flavors will rid us of the identified issues, and will 
 allow a more fine-grained way of managing physical resources. 

Flavors concept was introduced in clouds to solve issue with effective physical 
resource usage: 8Gb physical memory can be effectively splitted to two 
m2.my_small with 4Gb RAM or to eight m1.my_tiny with 1 GB. 

Let's consider example when your cloud have only 2 compute nodes with 8GB RAM: 
vm1 (m1.my_tiny) - node1 
vm2 (m1.my_tiny) - node1 
vm3 (m2.my_small) - node1 
vm4 (m2.my_small) - node2 (since we could not spawn on node1) 

This leaves ability to spawn predictable 2 VMs with m1.my_tiny flavor on node1, 
and 2 VMs m1.my_tiny or 1 VM m2.my_small on node2. If user has ability to 
create any flavor that he likes, he can create flavor like mx.my_flavor with 
3GB of RAM that could not be spawned on node1 at all, and leaves 1GB under-used 
on node2 when spawned there. If you will multiply number of nodes to 100 or 
even 1000 (like some of the OpenStack deployments) you will have very big 
memory under-usage. 

Do we really need to have ability to allocate any number of physical resources 
for VM? If yes, I suggest to make two ways to define number of physical 
resource allocation for VMs: with flavors and dynamically. Leaving to cloud 
admins/owners to decide which way they prefer they cloud resources to be 
allocated. Since small clouds may prefer flavors, and big clouds may have 
dynamic resource allocation (impact from under-usage will be not so big). As 
transition plan project-scoped flavors may do the job. 


On Fri, May 2, 2014 at 5:35 PM, Dimitri Mazmanov  
dimitri.mazma...@ericsson.com  wrote: 



This topic has already been discussed last year and a use-case was described 
(see [1]). 
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2]. 
Several issues have been brought up after posting my implementation for review 
[3], all related to how flavors are defined/implemented in nova: 


* Only admin tenants can manage flavors due to the default admin rule in 
policy.json. 
* Per-stack flavor creation will pollute the global flavor list 
* If two stacks create a flavor with the same name, collision will occur, 
which will lead to the following error: ERROR (Conflict): Flavor with name 
dupflavor already exists. (HTTP 409) 
These and the ones described by Steven Hardy in [4] are related to the flavor 
scoping in Nova. 

Is there any plan/discussion to allow project scoped flavors in nova, similar 
to the Steven’s proposal for role-based scoping (see [4])? 
Currently the only purpose of the is_public flag is to hide the flavor from 
users without the admin role, but it’s still visible in all projects. Any plan 
to change this? 

Having project-scoped flavors will rid us of the identified issues, and will 
allow a more fine-grained way of managing physical resources. 

Dimitri 

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html 
[2] https://wiki.openstack.org/wiki/Heat/Blueprints/dynamic-flavors 
[3] https://review.openstack.org/#/c/90029 
[4] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc. 
http://mirantis.com | smelik...@mirantis.com 

+7 (495) 640-4904, 0261 
+7 (903) 156-0836 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-05 Thread Hochmuth, Roland M
Alexandre, Great timing on this question and I agree with your proposal. I
work for HP and we are just about to open-source a project for Monitoring
as a Service (MaaS), called Jahmon. Jahmon is based on our
customer-facing monitoring as a service solution and internal monitoring
projects.


Jahmon is a multi-tenant, highly performant, scalable, reliable and
fault-tolerant monitoring solution that scales to service provider levels
of metrics throughput. It has a RESTful API that is used for
storing/querying metrics, creating compound alarms, querying alarm
state/history, sending notifications and more.
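
To give a sense of what that looks like from a client's point of view,
here is a purely illustrative sketch (the endpoint paths and payloads are
invented for illustration; they are not Jahmon's actual API):

    import json
    import requests

    base = 'https://monitoring.example.com/v1'
    headers = {'X-Auth-Token': 'KEYSTONE_TOKEN',
               'Content-Type': 'application/json'}

    # push a measurement
    requests.post(base + '/metrics', headers=headers,
                  data=json.dumps({'name': 'cpu.idle_perc',
                                   'dimensions': {'hostname': 'vm-1'},
                                   'value': 97.1}))

    # query current alarm state
    print(requests.get(base + '/alarms', headers=headers).json())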

I can go over the project with you as well as others that are interested.
We would like to start working with other open-source developers. I'll
also be at the Summit next week.

Regards --Roland


On 5/4/14, 1:37 PM, John Dickinson m...@not.mn wrote:

One of the advantages of the program concept within OpenStack is that
separate code projects with complementary goals can be managed under the
same program without needing to be the same codebase. The most obvious
example across every program are the server and client projects under
most programs.

This may be something that can be used here, if it doesn't make sense to
extend the ceilometer codebase itself.

--John





On May 4, 2014, at 12:30 PM, Denis Makogon dmako...@mirantis.com wrote:

 Hello to All.
 
 I also +1 this idea. As I can see, the Telemetry program (according to
Launchpad) covers the processing of infrastructure metrics (networking,
etc.) and in-compute-instance metrics/monitoring.
 So, the best option, I guess, is to propose adding such a great feature to
Ceilometer. In-compute-instance monitoring would be a great value-add
to upstream Ceilometer.
 As for me, it's a good chance to integrate well-known production ready
monitoring systems that have tons of specific plugins (like Nagios etc.)
 
 Best regards,
 Denis Makogon
 
 On Sunday, May 4, 2014, John Griffith wrote:
 
 
 
 On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote:
 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
  Hello Everyone!
 
  My name is Alexandre Viau from Savoir-Faire Linux.
 
 We have submitted a Monitoring as a Service blueprint and need
feedback.
 
  Problem to solve: Ceilometer's purpose is to track and
*measure/meter* usage information collected from OpenStack components
(originally for billing). While Ceilometer is useful for the cloud
operators and infrastructure metering, it is not a *monitoring* solution
for the tenants and their services/applications running in the cloud
because it does not allow for service/application-level monitoring and
it ignores detailed and precise guest system metrics.
 
  Proposed solution: We would like to add Monitoring as a Service to
Openstack
 
  Just like Rackspace's Cloud monitoring, the new monitoring service -
let's call it OpenStackMonitor for now - would let users/tenants keep
track of their resources on the cloud and receive instant notifications
when they require attention.
 
  This RESTful API would enable users to create multiple monitors with
predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom
checks performed by a Monitoring Agent on the instance they want to
monitor.
 
  Predefined checks such as CPU and disk usage could be polled from
Ceilometer. Other predefined checks would be performed by the new
monitoring service itself. Checks such as PING could be flagged to be
performed from multiple sites.
 
  Custom checks would be performed by an optional Monitoring Agent.
Their results would be polled by the monitoring service and stored in
Ceilometer.
 
  If you wish to collaborate, feel free to contact me at
alexandre.v...@savoirfairelinux.com
  The blueprint is available here:
https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
  Thanks!
 
 I would prefer if monitoring capabilities were added to Ceilometer rather
 than adding yet another project to deal with.
 
 What's the reason for not adding the feature to Ceilometer directly?
 
 Thomas
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I'd also be interested in the overlap between your proposal and
 Ceilometer.  It seems at first thought that it would be better to
 introduce the monitoring functionality into Ceilometer and make that
 project more diverse as opposed to yet another project.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Atlanta Summit Etherpads for Review

2014-05-05 Thread Chad Lung
The Barbican team has placed a few etherpads on our wiki for the community
to review. We plan to work on these at the Atlanta summit next week in our
sessions and throughout the week.

https://wiki.openstack.org/wiki/Barbican#Discussions_.2F_Etherpads

Thanks,

Chad Lung
http://giantflyingsaucer.com/blog/
@chadlung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

2014-05-05 Thread Tiwari, Arvind
Hi Chad,

We are working on the following topics and hoping for some time to discuss 
them at the summit. Can we accommodate them?

https://blueprints.launchpad.net/barbican/+spec/secret-isolation-at-user-level 
(We are working on POC + API change proposal)
https://blueprints.launchpad.net/barbican/+spec/api-remove-uri-tenant-id (API 
change proposal)
https://blueprints.launchpad.net/barbican/+spec/ability-to-hard-delete-barbican-entities
 (API change proposal)

Thanks,
Arvind


From: Chad Lung [mailto:chad.l...@gmail.com]
Sent: Monday, May 05, 2014 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review


The Barbican team has placed a few etherpads on our wiki for the community to 
review. We plan to work on these at the Atlanta summit next week in our 
sessions and throughout the week.

https://wiki.openstack.org/wiki/Barbican#Discussions_.2F_Etherpads
Thanks,
Chad Lung
http://giantflyingsaucer.com/blog/
@chadlung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

2014-05-05 Thread Tiwari, Arvind
Chad,

Please let me know if you want me to start etherpads for them?

Regards,
Arvind

From: Tiwari, Arvind
Sent: Monday, May 05, 2014 10:22 AM
To: openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

Hi Chad,

We are working on the following topics and hoping for some time to discuss 
them at the summit. Can we accommodate them?

https://blueprints.launchpad.net/barbican/+spec/secret-isolation-at-user-level 
(We are working on POC + API change proposal)
https://blueprints.launchpad.net/barbican/+spec/api-remove-uri-tenant-id (API 
change proposal)
https://blueprints.launchpad.net/barbican/+spec/ability-to-hard-delete-barbican-entities
 (API change proposal)

Thanks,
Arvind


From: Chad Lung [mailto:chad.l...@gmail.com]
Sent: Monday, May 05, 2014 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review


The Barbican team has placed a few etherpads on our wiki for the community to 
review. We plan to work on these at the Atlanta summit next week in our 
sessions and throughout the week.

https://wiki.openstack.org/wiki/Barbican#Discussions_.2F_Etherpads
Thanks,
Chad Lung
http://giantflyingsaucer.com/blog/
@chadlung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Clint Byrum
Excerpts from Solly Ross's message of 2014-05-05 08:23:41 -0700:
 Just to expand a bit on this, flavors are supposed to be the lowest level of 
 granularity,
 and the general idea is to round to the nearest flavor (so if you have a VM 
 that requires
 3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make any 
 sense to create
 flavors on the fly; you should have enough flavors to suit your needs, but I 
 can't really
 think of a situation where you'd need so much granularity that you'd need to 
 create new
 flavors on the fly (assuming, of course, that you planned ahead and created 
 enough flavors
 that you don't have VMs that are extremely over-allocated).

I agree with the conclusion you're arguing for, but it is a bit more
complex than that. Flavor defines at least three things, and possibly 4:
RAM, root disk, and vcpu, with an optional ephemeral disk. Because of
that, the matrix of possibilities can get extremely large.

Consider if you segment RAM as 1GB, 4GB, 8GB, 16GB, vcpu as 1, 2, 4,
8, and root disk as 10G, 50G, 100G, 1T. Your matrix is now 4³, 64
flavors. If you've never heard of the paradox of choice, consumers
are slowed down by having too many choices. So less flavors will not
only make your resource consumption easier to optimize, it will help
new users move forward with more certainty.
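
The arithmetic is easy to check; using the segments above:

    import itertools

    ram_gb = [1, 4, 8, 16]
    vcpus = [1, 2, 4, 8]
    root_disk_gb = [10, 50, 100, 1000]

    matrix = list(itertools.product(ram_gb, vcpus, root_disk_gb))
    print(len(matrix))  # 4**3 == 64 flavors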

But the reality is, nobody needs an 8 CPU, 1T, 1GB flavor. Likewise,
nobody needs a 1 CPU 16GB 10G server. Both are missing the mark with
the common workloads. And a lot are filled in by higher level services
like Trove, LBaaS, Swift, etc.

So realistically, having 20-30 flavors, with groupings around the
combinations users demand, is a known pattern that seems to work well.
If a user has a workload that is poorly served by any of these, it
probably makes the most sense for them to over-buy and demand a better
sized flavor from the provider. Dynamically allocating flavors is just
going to complicate things for everybody.

That said, if Nova supports it, Heat should too.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have a way to add a non-nova managed host to nova managed?

2014-05-05 Thread Joe Gordon
On Sun, May 4, 2014 at 8:28 PM, Chen CH Ji jiche...@cn.ibm.com wrote:

 Hi
   Not sure it's proper to ask this question here; maybe
 someone asked this question before and there was a discussion. Please
 share the info with me if so.

   If we have a bunch of hosts that used to be managed by
 themselves, with some VMs and hypervisors already running on them,
  do we have a way to bring those VMs under the management of nova?



The short answer is no. There have been previous discussions on this
mailing list about this subject, and the answer keeps coming back to no.




 Best Regards!

 Kevin (Chen) Ji 纪 晨

 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
 Beijing 100193, PRC

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes/log - 05/05/2014

2014-05-05 Thread Renat Akhmerov
Thanks for joining us today.

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-05-05-16.00.html
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-05-05-16.00.log.html

My suggestion would be to skip the next two meetings due to Summit activities 
and have the next meeting on May 26th. Thoughts?

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-05 Thread Clint Byrum
Excerpts from John Dickinson's message of 2014-05-04 12:37:55 -0700:
 One of the advantages of the program concept within OpenStack is that 
 separate code projects with complementary goals can be managed under the same 
 program without needing to be the same codebase. The most obvious example 
 across every program are the server and client projects under most 
 programs.
 
 This may be something that can be used here, if it doesn't make sense to 
 extend the ceilometer codebase itself.
 

Totally agree. Programs can also just do targeted work in projects that
are also managed by another program. For instance, in the Deployment
program, we drive features into all of the other projects that make it
possible to deploy OpenStack with itself.

Having an API to define monitoring would be a fantastically useful thing
for that effort btw.

But I digress; I think John is spot on here. The Telemetry program would
be the place to manage the service. However, I think it makes sense for
it to at least start out as separate from Ceilometer for one reason:
Ceilometer is focused on measuring the infrastructure itself. What is
needed is a thing for monitoring the workloads from the user POV.

They may share data model, and even API, but I imagine the backend
might be quite different, since the ceilometer agents are trusted as
being under the cloud and have access to RPC, but the users would be
constrained to REST API.

Just as Trove is eventually going to converge on using Heat, so
should a monitoring-as-a-service project eventually converge on riding
on Ceilometer as much as possible. That said, if there's something
working now, I say bring it into the fold and iterate as needed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sqlalchemy-migrate 0.9.1 release impending - oslo.db and Py3 support

2014-05-05 Thread David Ripton
We need a new release of sqlalchemy-migrate (0.9.1) to make commit 
93efb62fd100 available, as a step toward fixing some oslo.db issues with 
sqlite and its limited/optional support for unique constraints.


This release will also be the first one with commit a03b141a954, for 
Python 3 support.


I plan on doing the release tomorrow, unless someone objects.

Thanks.

--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposal: add local hacking for oslo-incubator

2014-05-05 Thread Joe Gordon
On Mon, May 5, 2014 at 8:02 AM, ChangBo Guo glongw...@gmail.com wrote:

 Hi Stackers,

 While reviewing code I keep finding the same common style problems, so I
 think these checks would be nice to move into local hacking checks. Local
 hacking checks can ease the reviewer burden.

 The idea comes from a keystone blueprint [1].
 Hacking is a great start at automating checks for common style issues.
 There are still lots of things that it is not checking for that it probably
 should. This is the list from [1][2] that would be nice to move into an
 automated check:


Can you go into why its worth enforcing each of these?

As a rule of thumb I prefer to add checks to hacking when relevant to all
projects instead of each project re-inventing a local hacking rule.


 - use import style 'from openstack.common.* import', not 'import
 openstack.common.*'
 - assertIsNone should be used when using None with assertEqual
 - _() should not be used in debug log statements
 - do not use 'assertTrue(isinstance(a, b))'
 - do not use 'assertEqual(type(A), B)'

[1]
 https://blueprints.launchpad.net/keystone/+spec/more-code-style-automation
 [2] https://github.com/openstack/nova/blob/master/nova/hacking/checks.py

 I just registered a blueprint for this in [3] and submit first patch in
 [4].

 [3] https://blueprints.launchpad.net/oslo/+spec/oslo-local-hacking

 [4] https://review.openstack.org/#/c/87832/
 Should we add local hacking checks to oslo-incubator? If yes, what other
 checks should be added?
 Your comments are appreciated :-)

 --
 ChangBo Guo(gcb)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about addit log in nova-compute.log

2014-05-05 Thread Jay Pipes

On 05/04/2014 11:09 PM, Chen CH Ji wrote:

Hi
My compute.log has the following entries, which looked
strange to me at first: a negative Free resource confused me, so I
took a look at the existing code.
The logic looks correct to me and the calculation doesn't
have a problem, but the output 'Free' is confusing.

Is this on purpose or might need to be enhanced?

2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-]
Free ram (MB): -1559
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): 29
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-]
Free VCPUS: -3


Hi Kevin,

I think changing free to available might make things a little more 
clear. In the above case, it may be that your compute worker has both 
CPU and RAM overcommit enabled.
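
Roughly what is going on, with numbers picked to match the log above (the 
allocation ratio is an assumed example; the tracker reports raw total 
minus allocated, while the scheduler applies the overcommit ratio):

    total_ram_mb = 8192
    allocated_ram_mb = 9751          # sum of all instances' allocations
    ram_allocation_ratio = 1.5       # overcommit enabled

    free_ram_mb = total_ram_mb - allocated_ram_mb
    print(free_ram_mb)               # -1559, the confusing "Free" value

    # what actually bounds scheduling is the overcommitted limit
    limit_mb = total_ram_mb * ram_allocation_ratio
    print(limit_mb - allocated_ram_mb)  # 2537.0 still available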


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Steve Gordon
- Original Message -
 Excerpts from Solly Ross's message of 2014-05-05 08:23:41 -0700:
  Just to expand a bit on this, flavors are supposed to be the lowest level
  of granularity,
  and the general idea is to round to the nearest flavor (so if you have a VM
  that requires
  3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make any
  sense to create
  flavors on the fly; you should have enough flavors to suit your needs, but
  I can't really
  think of a situation where you'd need so much granularity that you'd need
  to create new
  flavors on the fly (assuming, of course, that you planned ahead and created
  enough flavors
  that you don't have VMs that are extremely over-allocated).
 
 I agree with the conclusion you're arguing for, but it is a bit more
 complex than that. Flavor defines at least three things, and possibly 4:
 RAM, root disk, and vcpu, with an optional ephemeral disk. Because of
 that, the matrix of possibilities can get extremely large.

In addition, extra specifications may denote the passthrough of additional 
devices, adding another dimension. This seems likely to be the case in the use 
case outlined in the original thread [1].

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Solly Ross
If you actually have 64 flavors, though, and it's overwhelming
your users, it would be pretty trivial to implement a flavor
recommender where you input your requirements and it pops out
the nearest flavor.  That being said, I do think you're right
that you can probably mitigate flavor explosion by trimming out
the outlier flavors.  20-30 flavors is still a bit much, but with
some clever naming, the burden of choosing a flavor can be lessened.

Additionally, if this is all for the purpose of orchestration,
we have a computer dealing with choosing the correct flavor,
and if your computer has a problem dealing with a choice between 64
(or even 512) different flavors, perhaps it's time to upgrade :-P.

Best Regards,
Solly Ross

- Original Message -
From: Clint Byrum cl...@fewbar.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 12:28:58 PM
Subject: Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation   through 
Heat (pt.2)

Excerpts from Solly Ross's message of 2014-05-05 08:23:41 -0700:
 Just to expand a bit on this, flavors are supposed to be the lowest level of 
 granularity,
 and the general idea is to round to the nearest flavor (so if you have a VM 
 that requires
 3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make any 
 sense to create
 flavors on the fly; you should have enough flavors to suit your needs, but I 
 can't really
 think of a situation where you'd need so much granularity that you'd need to 
 create new
 flavors on the fly (assuming, of course, that you planned ahead and created 
 enough flavors
 that you don't have VMs that are extremely over-allocated).

I agree with the conclusion you're arguing for, but it is a bit more
complex than that. Flavor defines at least three things, and possibly 4:
RAM, root disk, and vcpu, with an optional ephemeral disk. Because of
that, the matrix of possibilities can get extremely large.

Consider if you segment RAM as 1GB, 4GB, 8GB, 16GB, vcpu as 1, 2, 4,
8, and root disk as 10G, 50G, 100G, 1T. Your matrix is now 4³, 64
flavors. If you've never heard of the paradox of choice, consumers
are slowed down by having too many choices. So less flavors will not
only make your resource consumption easier to optimize, it will help
new users move forward with more certainty.

But the reality is, nobody needs an 8 CPU, 1T, 1GB flavor. Likewise,
nobody needs a 1 CPU 16GB 10G server. Both are missing the mark with
the common workloads. And a lot are filled in by higher level services
like Trove, LBaaS, Swift, etc.

So realistically, having 20-30 flavors, with groupings around the
combinations users demand, is a known pattern that seems to work well.
If a user has a workload that is poorly served by any of these, it
probably makes the most sense for them to over-buy and demand a better
sized flavor from the provider. Dynamically allocating flavors is just
going to complicate things for everybody.

That said, if Nova supports it, Heat should too.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] elements vs. openstack-infra puppet for CI infra nodes

2014-05-05 Thread Clint Byrum
Excerpts from Dan Prince's message of 2014-05-05 05:51:52 -0700:
 I originally sent this to TripleO but perhaps [infra] would have been a 
 better choice.
 
 The short version is I'd like to run a lightweight (unofficial) mirror for 
 Fedora in infra:
 
  https://review.openstack.org/#/c/90875/
 
 And then in the TripleO CI racks we can run local Squid caches using 
 something like this:
 
  https://review.openstack.org/#/c/91161/
 
 We have to do something because we see quite a few job failures related to 
 unstable mirrors. If using local Squid caches doesn't work out then perhaps 
 we will have to run local mirrors in each TripleO CI rack but I would like to 
 avoid that if possible as it is more resource heavy. Especially because we'll 
 need to do the same things in each rack for Fedora and Ubuntu (both of which 
 run in each TripleO CI test rack).
 
 Dan
 
 - Forwarded Message -
 From: Dan Prince dpri...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, April 29, 2014 4:10:30 PM
 Subject: [openstack-dev] [TripleO] elements vs. openstack-infra puppet for
 CI infra nodes
 
 A bit of background TripleO CI background:
 
  At this point we've got two public CI overclouds which we can use to run 
 TripleO check jobs. Things are evolving nicely and we've recently been 
 putting some effort into making things run faster by adding local distro and 
 PyPI mirrors, etc. This is good in that it should help us improve both the 
 stability of test results and runtimes.
 
 
 
 This brings up the question of how we are going to manage our TripleO 
 overcloud CI resources for things like: distro mirrors, caches, test 
 environment brokers, etc.
 
 1) Do we use and or create openstack-infra/config modules for everything we 
 need and manage it via the normal OpenStack infrastructure way using Puppet 
 etc.?
 
 2) Or, do we take the TripleO oriented approach and use image elements and 
 Heat templates to manage things?
 
 Which of these two options do we prefer given that we eventually want TripleO 
 to be gating? And who is responsible for maintaining them (TripleO CD Admins 
 or OpenStack Infra)?
 

The way I see things going, infra has a job to do _today_ and they should
choose the tools they want for doing that job.

TripleO is trying to make OpenStack deploy itself, and hopefully in
so doing, also make managing workloads easier on OpenStack. If we are
operating any workload, it would make a lot of sense for us to use the
same tools we are suggesting OpenStack operators consider using.

So it really comes down to who is operating the mirrors. If we're doing
it, we should be doing it with the tools we've built for that specific
purpose. If we expect infra to do it, then they should decide what works
best for them.

Either way, I completely agree that we shouldn't be doing any more one-off
cowboy mirrors, which is what we've been doing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

2014-05-05 Thread John Wood
Hello Arvind,

These all look like topics to dig into if possible at the summit, so let's 
discuss this at today's IRC meeting for sure.

Thanks,
John


From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: Monday, May 05, 2014 11:24 AM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

Chad,

Please let me know if you want me to start etherpads for them?

Regards,
Arvind

From: Tiwari, Arvind
Sent: Monday, May 05, 2014 10:22 AM
To: openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review

Hi Chad,

We are working on the following topics and hoping for some time to discuss 
them at the summit. Can we accommodate them?

https://blueprints.launchpad.net/barbican/+spec/secret-isolation-at-user-level 
(We are working on POC + API change proposal)
https://blueprints.launchpad.net/barbican/+spec/api-remove-uri-tenant-id (API 
change proposal)
https://blueprints.launchpad.net/barbican/+spec/ability-to-hard-delete-barbican-entities
 (API change proposal)

Thanks,
Arvind


From: Chad Lung [mailto:chad.l...@gmail.com]
Sent: Monday, May 05, 2014 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Atlanta Summit Etherpads for Review


The Barbican team has placed a few etherpads on our wiki for the community to 
review. We plan to work on these at the Atlanta summit next week in our 
sessions and throughout the week.

https://wiki.openstack.org/wiki/Barbican#Discussions_.2F_Etherpads
Thanks,
Chad Lung
http://giantflyingsaucer.com/blog/
@chadlung
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Clint Byrum
Excerpts from Solly Ross's message of 2014-05-05 10:01:12 -0700:
 If you actually have 64 flavors, though, and it's overwhelming
 your users, it would be pretty trivial to implement a flavor
 recommender where you input your requirements and it pops out
 the nearest flavor.  That being said, I do think you're right
 that you can probably mitigate flavor explosion by trimming out
 the outlier flavors.  20-30 flavors is still a bit much, but with
 some clever naming, the burden of choosing a flavor can be lessened.
 
 Additionally, if this is all for the purpose of orchestration,
 we have a computer dealing with choosing the correct flavor,
 and if your computer has a problem dealing with a choice between 64
 (or even 512) different flavors, perhaps it's time to upgrade :-P.
 

When starting with nothing, users are pretty much going to have to
guess. They'll have 64 things to choose from. The computer only gets
involved when you have data.

A piece of software which takes measurements of resource constraints
of your application, and understands how to then choose a flavor is not
exactly simple. One also needs business rules, as sometimes you may want
to accept being resource constrained in exchange for reducing cost.

Though it would be a pretty cool feature if one could simply run an agent
on their vm and it would be able to identify the flavor that would be
most efficient. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Dimitri Mazmanov
I guess I need to describe the use-case first.
An example of a Telco application is IP Multimedia Subsystem (IMS) [1],
which is a fairly complex beast. Each component of IMS can have very
different requirements on the computing resources. If we try to capture
everything in terms of flavors, the list of flavors can grow very quickly
and still be specific to one single application. There's also many more
apps to deploy. Agreed, one can say, just round to the best matching
flavor! That will work, but it is not the most efficient solution (a set of 4-5
global flavors will not provide the best fitting model for every VM we
need to spawn). For such applications a flavor is not the lowest level of
granularity. RAM, CPU, Disk is. Hence the question. In OpenStack, tenants
are bound to think in terms of flavors. And if this model is the lowest level
of granularity, then dynamic creation of flavors actually supports this
model and allows non-trivial applications to use flavors (I guess this is
why this question had been raised last year by NSN). But, there are some
issues related to this :) and these issues I have written down in my first
mail.

Dimitri 

[1] http://en.wikipedia.org/wiki/IP_Multimedia_Subsystem


On 05/05/14 17:23, Solly Ross sr...@redhat.com wrote:

Just to expand a bit on this, flavors are supposed to be the lowest level
of granularity,
and the general idea is to round to the nearest flavor (so if you have a
VM that requires
3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make
any sense to create
flavors on the fly; you should have enough flavors to suit your needs,
but I can't really
think of a situation where you'd need so much granularity that you'd need
to create new
flavors on the fly (assuming, of course, that you planned ahead and
created enough flavors
that you don't have VMs that are extremely over-allocated).

Best Regards,
Solly Ross

- Original Message -
From: Serg Melikyan smelik...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 2:18:21 AM
Subject: Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation
through Heat (pt.2)

 Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way of managing physical resources.

Flavors concept was introduced in clouds to solve issue with effective
physical resource usage: 8Gb physical memory can be effectively splitted
to two m2.my_small with 4Gb RAM or to eight m1.my_tiny with 1 GB.

Let's consider example when your cloud have only 2 compute nodes with 8GB
RAM: 
vm1 (m1.my_tiny) - node1
vm2 (m1.my_tiny) - node1
vm3 (m2.my_small) - node1
vm4 (m2.my_small) - node2 (since we could not spawn on node1)

This leaves ability to spawn predictable 2 VMs with m1.my_tiny flavor on
node1, and 2 VMs m1.my_tiny or 1 VM m2.my_small on node2. If user has
ability to create any flavor that he likes, he can create flavor like
mx.my_flavor with 3GB of RAM that could not be spawned on node1 at all,
and leaves 1GB under-used on node2 when spawned there. If you will
multiply number of nodes to 100 or even 1000 (like some of the OpenStack
deployments) you will have very big memory under-usage.

Do we really need to have ability to allocate any number of physical
resources for VM? If yes, I suggest to make two ways to define number of
physical resource allocation for VMs: with flavors and dynamically.
Leaving to cloud admins/owners to decide which way they prefer they cloud
resources to be allocated. Since small clouds may prefer flavors, and big
clouds may have dynamic resource allocation (impact from under-usage will
be not so big). As transition plan project-scoped flavors may do the job.


On Fri, May 2, 2014 at 5:35 PM, Dimitri Mazmanov 
dimitri.mazma...@ericsson.com  wrote:



This topic has already been discussed last year and a use-case was
described (see [1]).
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
Several issues have been brought up after posting my implementation for
review [3], all related to how flavors are defined/implemented in nova:


* Only admin tenants can manage flavors due to the default admin rule
in policy.json. 
* Per-stack flavor creation will pollute the global flavor list
* If two stacks create a flavor with the same name, collision will
occur, which will lead to the following error: ERROR (Conflict): Flavor
with name dupflavor already exists. (HTTP 409)
These and the ones described by Steven Hardy in [4] are related to the
flavor scoping in Nova.

Is there any plan/discussion to allow project scoped flavors in nova,
similar to Steven's proposal for role-based scoping (see [4])?
Currently the only purpose of the is_public flag is to hide the flavor
from users without the admin role, but it's still visible in all
projects. Any plan to change this?

Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way 

[openstack-dev] [Heat] Special session on heat-translator project at Atlanta summit

2014-05-05 Thread Thomas Spatzier

Hi all,

I mentioned in some earlier mail that we have started to implement a TOSCA
YAML to HOT translator on stackforge as project heat-translator. We have
been lucky to get a session allocated in the context of the Open source @
OpenStack program for the Atlanta summit, so I wanted to share this with
the Heat community to hopefully attract some interested people. Here is the
session link:

http://openstacksummitmay2014atlanta.sched.org/event/c94698b4ea2287eccff8fb743a358d8c#.U2e-zl6cuVg

While there is some focus on TOSCA, the goal of discussions would also be
to find a reasonable design for sitting such a translation layer on top of
Heat, but also identify the relations and benefits for other projects, e.g.
how Murano use cases that include workflows for templates (which is part of
TOSCA) could be addressed long term. So we hope to see a lot of interested
folks there!

Regards,
Thomas

PS: Here is a more detailed description of the session that we submitted:

1) Project Name:
heat-translator

2) Describe your project, including links to relevant sites, repositories,
bug trackers and documentation:
We have recently started a stackforge project [1] with the goal to enable
the deployment of templates defined in standard formats such as OASIS TOSCA
on top of OpenStack Heat. The Heat community has been implementing a native
template format 'HOT' (Heat Orchestration Templates) during the Havana and
Icehouse cycles, but it is recognized that support for other standard
formats that are sufficiently aligned with HOT are also desirable to be
supported.
Therefore, the goal of the heat-translator project is to enable such
support by translating such formats into Heat's native format and thereby
enable a deployment on Heat. Current focus is on OASIS TOSCA. In fact, the
OASIS TOSCA TC is currently working on a TOSCA Simple Profile in YAML [2]
which has been greatly inspired by discussions with the Heat team, to help
getting TOSCA adoption in the community. The TOSCA TC and the Heat team
have also be in close discussion to keep HOT and TOSCA YAML aligned. Thus,
the first goal of heat-translator will be to enable deployment of TOSCA
YAML templates through Heat.
Development had started in a separate public github repository [3]
earlier this year, but we are currently in the process of moving all code
to the stackforge project.

[1] https://github.com/stackforge/heat-translator
[2]
https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
[3] https://github.com/spzala/heat-translator

3) Please describe how your project relates to OpenStack:
Heat has been working on a native template format HOT to replace the
original CloudFormation format as the primary template of the core Heat
engine. CloudFormation shall continue to be supported as one possible
format (to protect existing content), but it is desired to move such
support out of the core engine into a translation layer. This is one
architectural move that can be supported by the heat-translator project.
Furthermore, there is a desire to enable standardized formats such as OASIS
TOSCA to run on Heat, which will also be possible through heat-translator.

In addition, recent discussions [4] in the large OpenStack orchestration
community have shown that several groups (e.g. Murano) are looking at
extending orchestration capabilities beyond Heat functionality, and in the
course of doing this also extend current template formats. It has been
suggested in mailing list posts that TOSCA could be one potential format to
center such discussions around instead of several groups developing their
own orchestration DSLs. The next version of TOSCA with its simple profile
in YAML is very open for input from the community, so there is a great
opportunity to shape the standard in a way to address use cases brought up
by the community. Willingness to join discussions with the TOSCA TC have
already been indicated by several companies contributing to OpenStack.
Therefore we think the heat-translator project can help to focus such
discussions.

[4]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028957.html

4) How do you plan to use the time and space?
Give attendees an overview of current developments of the TOSCA Simple
Profile in YAML and how we are aligning this with HOT.
Give a quick summary of current code.
Discuss next steps and long term direction of the heat-translator project:
alignment with Heat, parts that could move into Heat, parts that would stay
outside of Heat etc.
Collect use cases from other interested groups (e.g. Murano), and discuss
that as potential input for the project and also ongoing TOSCA standards
work.
Discuss if and how this project could help to address requirements of
different groups.
Discuss and agree on a design to (1) meet important requirements based on
those discussions, and (2) to best enable collaborative development with
the community.


___
OpenStack-dev mailing list

Re: [openstack-dev] [sahara] Design Summit Sessions

2014-05-05 Thread Matthew Farrellee
bummer. it seems to me like having the api discussion on the same day as 
the other outwardly facing topics would be a good idea.


best,


matt

On 04/28/2014 10:29 AM, Sergey Lukjanov wrote:

Matt, I'd like to keep the v2 api discussion at the end of our design
sessions track to have enough input on the other areas. IMO we should
discuss first what we need to have and then how it'll look.

On Fri, Apr 25, 2014 at 9:29 PM, Matthew Farrellee m...@redhat.com wrote:

On 04/24/2014 10:51 AM, Sergey Lukjanov wrote:


Hey folks,

I've pushed the draft schedule for Sahara sessions at the ATL design
summit. The description isn't fully complete; I'm working on it. I'll
finish it by the end of the week and add an etherpad to each session.

Sahara folks, please, take a look on a schedule and share your
thoughts / comments.

Thanks.

http://junodesignsummit.sched.org/overview/type/sahara+%28ex-savanna%29



will you swap v2-api and scalable slots? part of it will flow into ux re
image-registry.

maybe add some error handling / state machine to the ux improvements

best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] This week's team meeting: summit prep

2014-05-05 Thread John Dickinson
This week's Swift team meeting will spend time looking at the Swift-related 
conference talks and summit design sessions.

https://wiki.openstack.org/wiki/Meetings/Swift

If you are leading a session topic or giving a conference talk, please attend 
this week's meeting. I want to make sure you have what you need and are 
adequately prepared for the conference.

--John






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring as a Service

2014-05-05 Thread Alexandre Viau
Thanks to everyone for the feedback. I agree that this falls under the
Telemetry Program and I have moved the blueprint.

You can find it here:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-as-a-service
Wiki page: https://wiki.openstack.org/wiki/MaaS
Etherpad: https://etherpad.openstack.org/p/MaaS

 I can go over the project with you as well as others that are interested.
 We would like to start working with other open-source developers. I'll
 also be at the Summit next week.
Roland,

I currently have no plans to be at the Summit next week. However, I
would be interested in exploring what you have already done and learn
from it.  Maybe we can schedule a meeting? You can always contact me on
IRC (aviau) or by e-mail at alexandre.v...@savoirfairelinux.com

For now, I think we should focus on the use cases. I invite all of you
to help us list them on the Etherpad.

Alexandre



On 14-05-05 12:00 PM, Hochmuth, Roland M wrote:
 Alexandre, Great timing on this question and I agree with your proposal. I
 work for HP and we are just about to open-source a project for Monitoring
 as a Service (MaaS), called Jahmon. Jahmon is based on our
 customer-facing monitoring as a service solution and internal monitoring
 projects.


 Jahmon is a multi-tenant, highly performant, scalable, reliable and
 fault-tolerant monitoring solution that scales to service provider levels
 of metrics throughput. It has a RESTful API that is used for
 storing/querying metrics, creating compound alarms, querying alarm
 state/history, sending notifications and more.

 I can go over the project with you as well as others that are interested.
 We would like to start working with other open-source developers. I'll
 also be at the Summit next week.

 Regards --Roland


 On 5/4/14, 1:37 PM, John Dickinson m...@not.mn wrote:

 One of the advantages of the program concept within OpenStack is that
 separate code projects with complementary goals can be managed under the
 same program without needing to be the same codebase. The most obvious
 example across every program are the server and client projects under
 most programs.

 This may be something that can be used here, if it doesn't make sense to
 extend the ceilometer codebase itself.

 --John





 On May 4, 2014, at 12:30 PM, Denis Makogon dmako...@mirantis.com wrote:

 Hello to All.

 I also +1 this idea. As I can see, the Telemetry program (according to
 Launchpad) covers the processing of infrastructure metrics (networking,
 etc.) and in-compute-instance metrics/monitoring.
 So, the best option, I guess, is to propose adding such a great feature to
 Ceilometer. In-compute-instance monitoring would be a great value-add
 to upstream Ceilometer.
 As for me, it's a good chance to integrate well-known production ready
 monitoring systems that have tons of specific plugins (like Nagios etc.)

 Best regards,
 Denis Makogon

 On Sunday, May 4, 2014, John Griffith wrote:



 On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote:
 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
 Hello Everyone!

 My name is Alexandre Viau from Savoir-Faire Linux.

 We have submitted a Monitoring as a Service blueprint and need
 feedback.
 Problem to solve: Ceilometer's purpose is to track and
 *measure/meter* usage information collected from OpenStack components
 (originally for billing). While Ceilometer is useful for the cloud
 operators and infrastructure metering, it is not a *monitoring* solution
 for the tenants and their services/applications running in the cloud
 because it does not allow for service/application-level monitoring and
 it ignores detailed and precise guest system metrics.
 Proposed solution: We would like to add Monitoring as a Service to
 Openstack
 Just like Rackspace's Cloud monitoring, the new monitoring service -
 let's call it OpenStackMonitor for now - would let users/tenants keep
 track of their resources on the cloud and receive instant notifications
 when they require attention.
 This RESTful API would enable users to create multiple monitors with
 predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom
 checks performed by a Monitoring Agent on the instance they want to
 monitor.
 Predefined checks such as CPU and disk usage could be polled from
 Ceilometer. Other predefined checks would be performed by the new
 monitoring service itself. Checks such as PING could be flagged to be
 performed from multiple sites.
 Custom checks would be performed by an optional Monitoring Agent.
 Their results would be polled by the monitoring service and stored in
 Ceilometer.
 If you wish to collaborate, feel free to contact me at
 alexandre.v...@savoirfairelinux.com
 The blueprint is available here:
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 Thanks!
 I would prefer if monitoring capabilities were added to Ceilometer rather
 than adding yet another project to deal with.

 What's the reason for not adding the 

Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Solly Ross
One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of your building blocks
(e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
exponential flavor explosion by separating out the axes.
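
A toy sketch of that idea (all names are hypothetical; nothing like this
exists in Nova today):

    from collections import namedtuple

    CpuFlavor = namedtuple('CpuFlavor', 'name vcpus')
    RamFlavor = namedtuple('RamFlavor', 'name mb')
    DiskFlavor = namedtuple('DiskFlavor', 'name gb')

    def compose(cpu, ram, disk):
        # build an instance spec from per-axis flavors: with 4 choices
        # per axis you define 12 component flavors instead of the full
        # 4**3 == 64 combined matrix
        return {'vcpus': cpu.vcpus, 'memory_mb': ram.mb, 'root_gb': disk.gb}

    print(compose(CpuFlavor('c2', 2), RamFlavor('r4', 4096),
                  DiskFlavor('d50', 50)))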

Best Regards,
Solly Ross

P.S. For people who use flavor names to convey information about the
workload, that's probably a job better done by the VM tagging proposal
(https://review.openstack.org/#/c/91444/).

- Original Message -
From: Dimitri Mazmanov dimitri.mazma...@ericsson.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 1:20:09 PM
Subject: Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through 
Heat (pt.2)

I guess I need to describe the use-case first.
An example of Telco application is IP Multimedia Subsystem (IMS) [1],
which is a fairly complex beast. Each component of IMS can have very
different requirements on the computing resources. If we try to capture
everything in terms of flavors, the list of flavors can grow very quickly
and still be specific to one single application. There are also many more
apps to deploy. Agreed, one can say: just round to the best matching
flavor! That will work, but it is not the most efficient solution (a set of 4-5
global flavors will not provide the best fitting model for every VM we
need to spawn). For such applications a flavor is not the lowest level of
granularity. RAM, CPU, Disk is. Hence the question. In OpenStack, tenants
are bound to think in terms of flavors. And if this model is the lowest level
of granularity, then dynamic creation of flavors actually supports this
model and allows non-trivial applications to use flavors (I guess this is
why this question was raised last year by NSN). But there are some
issues related to this :) and I have written these issues down in my first
mail.

Dimitri 

[1] http://en.wikipedia.org/wiki/IP_Multimedia_Subsystem


On 05/05/14 17:23, Solly Ross sr...@redhat.com wrote:

Just to expand a bit on this, flavors are supposed to be the lowest level
of granularity,
and the general idea is to round to the nearest flavor (so if you have a
VM that requires
3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make
any sense to create
flavors on the fly; you should have enough flavors to suit your needs,
but I can't really
think of a situation where you'd need so much granularity that you'd need
to create new
flavors on the fly (assuming, of course, that you planned ahead and
created enough flavors
that you don't have VMs that are extremely over-allocated).

Best Regards,
Solly Ross

- Original Message -
From: Serg Melikyan smelik...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 2:18:21 AM
Subject: Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation
through Heat (pt.2)

 Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way of managing physical resources.

Flavors concept was introduced in clouds to solve issue with effective
physical resource usage: 8Gb physical memory can be effectively splitted
to two m2.my_small with 4Gb RAM or to eight m1.my_tiny with 1 GB.

Let's consider example when your cloud have only 2 compute nodes with 8GB
RAM: 
vm1 (m1.my_tiny) - node1
vm2 (m1.my_tiny) - node1
vm3 (m2.my_small) - node1
vm4 (m2.my_small) - node2 (since we could not spawn on node1)

This leaves ability to spawn predictable 2 VMs with m1.my_tiny flavor on
node1, and 2 VMs m1.my_tiny or 1 VM m2.my_small on node2. If user has
ability to create any flavor that he likes, he can create flavor like
mx.my_flavor with 3GB of RAM that could not be spawned on node1 at all,
and leaves 1GB under-used on node2 when spawned there. If you will
multiply number of nodes to 100 or even 1000 (like some of the OpenStack
deployments) you will have very big memory under-usage.

Do we really need to have ability to allocate any number of physical
resources for VM? If yes, I suggest to make two ways to define number of
physical resource allocation for VMs: with flavors and dynamically.
Leaving to cloud admins/owners to decide which way they prefer they cloud
resources to be allocated. Since small clouds may prefer flavors, and big
clouds may have dynamic resource allocation (impact from under-usage will
be not so big). As transition plan project-scoped flavors may do the job.


On Fri, May 2, 2014 at 5:35 PM, Dimitri Mazmanov 
dimitri.mazma...@ericsson.com  wrote:



This topic has already been discussed last year and a use-case was
described (see [1]).
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
Several issues have been brought up after posting my implementation for
review [3], 

Re: [openstack-dev] [nova] about pci device filter

2014-05-05 Thread Jiang, Yunhong
Hi, Bohai, are you talking about the scheduler filter for PCI, right?

I think the scheduler filters can already be changed through nova options, so I 
don't think we need another mechanism -- can't you just create another filter to replace 
the default PCI filter?
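
For illustration, a minimal sketch of what such a replacement filter could
look like (assuming the standard scheduler filter interface of the time; the
module and class names are made up):

    # myfilters.py -- hypothetical site-specific host filter
    from nova.scheduler import filters


    class MyPciPassthroughFilter(filters.BaseHostFilter):
        """Pass hosts according to site-specific PCI device rules."""

        def host_passes(self, host_state, filter_properties):
            # Real logic would inspect host_state (e.g. its PCI stats);
            # this sketch accepts every host.
            return True

It would then be enabled through the existing scheduler options
(scheduler_available_filters / scheduler_default_filters) in nova.conf.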

--jyh

 -Original Message-
 From: Bohai (ricky) [mailto:bo...@huawei.com]
 Sent: Monday, May 05, 2014 1:32 AM
 To: OpenStack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] about pci device filter
 
 Hi, stackers:
 
 Now there is a default whitelist filter for PCI devices.
 But maybe it's not enough in some scenarios.
 
 Maybe it's better if we provide a mechanism to specify a customized filter.
 
 For example:
 So a user could make a special filter, then specify which filter to use in
 the configuration files.
 
 Any advice?
 
 Best regards to you.
 Ricky
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Chris Friesen

On 05/05/2014 10:51 AM, Steve Gordon wrote:


In addition extra specifications may denote the passthrough of additional 
devices, adding another dimension. This seems likely to be the case in the use 
case outlined in the original thread [1].

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html


Agreed.

The ability to set arbitrary metadata on the flavor means that you could 
realistically have many different flavors all with identical virtual 
hardware but different metadata.


As one example, the flavor metadata can be used to match against host 
aggregate metadata.
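
For illustration (treat the exact commands as a sketch), pairing flavor extra 
specs with host aggregate metadata via the AggregateInstanceExtraSpecsFilter 
looks roughly like:

    # Tag an aggregate and a flavor with matching metadata
    nova aggregate-create fast-storage nova
    nova aggregate-set-metadata 1 ssd=true
    nova flavor-key m1.ssd set aggregate_instance_extra_specs:ssd=true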


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Clint Byrum
Excerpts from Jan Provaznik's message of 2014-05-05 01:10:56 -0700:
 On 04/28/2014 10:05 PM, Jay Dobies wrote:
  We may want to consider making use of Heat outputs for this.
 
  This was my first thought as well. stack-show returns a JSON document
  that would be easy enough to parse through instead of having it in two
  places.
 
  Rather than assuming hard coding, create an output on the overcloud
  template that is something like 'keystone_endpoint'. It would look
  something like this:
 
   Outputs:
     keystone_endpoint:
       Fn::Join:
       - ''
       - - "http://"
         - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
         - ":"
         - {Ref: KeystoneEndpointPort} # that's a parameter
         - "/v2.0"
 
 
  These are then made available via heatclient as stack.outputs in
  'stack-show'.
 
  That way as we evolve new stacks that have different ways of controlling
  the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
  for each one.
 
 
 The output endpoint list would be quite long, it would have to contain 
 full list of all possible services (even if a service is not included in 
 an image) + SSL URI for each port.
 
 It might be better to get haproxy ports from template params (which 
 should be available as stack.params) and define only virtual IP in 
 stack.outputs, then build the endpoint URI in os-cloud-config. I'm not sure 
 if we would have to change os-cloud-config for LBaaS or not. My first 
 thought was that VIP and port are only bits which should vary, so 
 resulting URI should be same in both cases.
 

+1
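
For illustration, a minimal sketch (assuming python-heatclient, where
stack.outputs is a list of {'output_key': ..., 'output_value': ...} dicts) of
how os-cloud-config might consume such an output:

    # Hypothetical helper: read one output value from a deployed stack
    from heatclient.client import Client

    def get_output(heat, stack_name, key):
        stack = heat.stacks.get(stack_name)
        for output in stack.outputs or []:
            if output['output_key'] == key:
                return output['output_value']
        raise KeyError(key)

    # 'heat_url' and 'token' are assumed to come from the caller's config
    heat = Client('1', endpoint=heat_url, token=token)
    keystone_endpoint = get_output(heat, 'overcloud', 'keystone_endpoint')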

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Chris Friesen

On 05/05/2014 11:40 AM, Solly Ross wrote:

One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of your building blocks
(e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
exponential flavor explosion by separating out the axes.


I like this idea because it allows for greater flexibility, but I think 
we'd need to think carefully about how to expose it via horizon--maybe 
separate tabs within the overall flavors page?


As a simplifying view you could keep the existing flavors which group 
all of them, while still allowing instances to specify each one 
separately if desired.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Chris Friesen

On 05/05/2014 12:18 PM, Chris Friesen wrote:


As a simplifying view you could keep the existing flavors which group
all of them, while still allowing instances to specify each one
separately if desired.


Also, if we're allowing the cpu/memory/disk to be specified 
independently at instance boot time, we might want to allow for 
arbitrary metadata to be specified as well (that would be matched as per 
the existing flavor extra_spec).


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-05-05 Thread Robert Collins
On 6 May 2014 06:13, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Jan Provaznik's message of 2014-05-05 01:10:56 -0700:
 On 04/28/2014 10:05 PM, Jay Dobies wrote:
  We may want to consider making use of Heat outputs for this.
..
 The output endpoint list would be quite long, it would have to contain
 full list of all possible services (even if a service is not included in
 an image) + SSL URI for each port.

 It might be better to get haproxy ports from template params (which
 should be available as stack.params) and define only virtual IP in
 stack.ouputs, then build endpoint URI in os-cloud-config. I'm not sure
 if we would have to change os-cloud-config for LBaaS or not. My first
 thought was that VIP and port are only bits which should vary, so
 resulting URI should be same in both cases.


 +1

I think outputs are good here, but indeed we should not be exposing
the control plane innards: that's what the virtual IP is for - let's
export that out of Heat, along with the port #, and that's it.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec

2014-05-05 Thread Clint Byrum
Excerpts from Jiří Stránský's message of 2014-05-05 01:54:11 -0700:
 On 30.4.2014 09:02, Steve Kowalik wrote:
  Hi,
 
  I'm looking at moving init-keystone from tripleo-incubator to
  os-cloud-config, and I've drafted a spec at
  https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config .
 
  Feedback welcome.
 
  Cheers,
 
 
 Hi Steve,
 
 that looks good :) Just to clarify -- should the long-term plan for 
 Keystone PKI initialization still be to generate the key+certs on 
 undercloud and push it to overcloud via Heat? (Likewise for 
 seed-undercloud.)

Long term I'd like to see us generate keys locally and have Barbican
store the keys. It is still not quite far enough on the incubation path
to be something we rely on directly, but we should consider that a very
temporary situation.

Short term we'll have to push things around via Heat. That behooves us
to ensure SSL is working for metadata fetching btw. I've not checked on
that in a very long time, and I'm not sure any of our CI enables it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Dimitri Mazmanov
This is good! Is there a blueprint describing this idea? Or any plans
describing it in a blueprint?
Would happily share the work.

Should we mix it with flavors in horizon though? I'm thinking of having a
separate "Resources" page,
wherein the user can "define" resources. I'm not a UX expert though.

But let me come back to the project-scoped flavor creation issues.
Why do you think it's such a bad idea to let tenants create flavors for
their project-specific needs?

I'll refer again to Steve Hardy's proposal:
- Normal user : Can create a private flavor in a tenant where they
  have the Member role (invisible to any other users)
- Tenant Admin user : Can create public flavors in the tenants where they
  have the admin role (visible to all users in the tenant)
- Domain admin user : Can create public flavors in the domains where they
  have the admin role (visible to all users in all tenants in that domain)


 If you actually have 64 flavors, though, and it's overwhelming
 your users, ...

The users won't see all 64 flavors, only those they have defined and the public ones.

-

Dimitri

On 05/05/14 20:18, Chris Friesen chris.frie...@windriver.com wrote:

On 05/05/2014 11:40 AM, Solly Ross wrote:
 One thing that I was discussing with @jaypipes and @dansmith over
 on IRC was the possibility of breaking flavors down into separate
 components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
 This way, you still get the control of the size of your building blocks
 (e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
 exponential flavor explosion by separating out the axes.

I like this idea because it allows for greater flexibility, but I think
we'd need to think carefully about how to expose it via horizon--maybe
separate tabs within the overall flavors page?

As a simplifying view you could keep the existing flavors which group
all of them, while still allowing instances to specify each one
separately if desired.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about addit log in nova-compute.log

2014-05-05 Thread Jiang, Yunhong


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Monday, May 05, 2014 9:50 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Question about addit log in
 nova-compute.log
 
 On 05/04/2014 11:09 PM, Chen CH Ji wrote:
  Hi
  I saw the following logs in my compute.log, which looked
  strange to me at first; the negative Free resource confused me, so I
  took a look at the existing code.
  The logic looks correct to me and the calculation doesn't
  have a problem, but the output 'Free' is confusing.
 
  Is this on purpose or might need to be enhanced?
 
  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free ram (MB): -1559
  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 29
  2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -3
 
 Hi Kevin,
 
 I think changing free to available might make things a little more
 clear. In the above case, it may be that your compute worker has both
 CPU and RAM overcommit enabled.
 
 Best,
 -jay

HI, Jay,
I don't think changing 'free' to 'available' will make it clearer. 
IMHO, the calculation of 'free' is bogus. When reporting the status in 
the periodic task, the resource tracker has no idea of the over-commit ratio 
at all, so it simply subtracts the total RAM assigned to instances from 
the RAM number provided by the hypervisor, without considering the over-commitment at 
all. So this number is really meaningless.
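
To illustrate the point with made-up numbers that match the log above:

    # Why 'free' goes negative: the periodic audit ignores the ratio
    total_ram_mb = 8192                   # reported by the hypervisor
    ram_used_mb = 9751                    # sum of RAM assigned to instances
    free = total_ram_mb - ram_used_mb     # -1559, the number in the log

    # What the scheduler actually works with (the ratio lives there):
    ram_allocation_ratio = 1.5
    available = total_ram_mb * ram_allocation_ratio - ram_used_mb  # 2537.0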

--jyh

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Request for transparent Gerrit approval process

2014-05-05 Thread Lowery, Mathew
As a non-core, I would like to understand the process by which core prioritizes 
Gerrit changes. I'd also like to know any specific criteria used for approval. 
If such a process were transparent and followed consistently, wouldn't that 
eliminate the need for "Hey core, can you review change?" in IRC? 
Specifically:

  *   Is a prioritized queue used such as the one found at 
http://status.openstack.org/reviews/? (This output comes from a project called 
ReviewDay (https://github.com/openstack-infra/reviewday) and it is prioritized 
based on the Launchpad ticket and age: http://git.io/h3QULg.) If not, how do 
you keep a Gerrit change from starving?
  *   Is there a baking period? In other words, is there a minimum amount of 
time that has to elapse before even being considered for approval?
  *   What effect do -1 code reviews have on the chances of approval (or even 
looking at the change)? Sometimes, -1 code reviews seem to be given in a 
cavalier manner. In my mind, -1 code reviewers have a duty to respond to 
follow-up questions by the author. Changes not tainted with a -1 simply have a 
greater chance of getting looked at.
  *   To harp on this again: Isn't "Hey core, can you review change?" 
inherently unfair assuming that there is a process by which a Gerrit change 
would normally be prioritized and reviewed in an orderly fashion?
  *   Are there specific actions that non-cores can take to assist in the 
orderly and timely approval of Gerrit changes? (e.g. don't give a -1 code 
review on a multiple +1'ed Gerrit change when it's a nice-to-have and don't 
leave a -1 and then vanish)?

Any clarification of the process would be greatly appreciated.

Thanks,
Mat
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing Project Graffiti

2014-05-05 Thread Tripp, Travis S
Hello Everybody,

A challenge we've experienced with using OpenStack is discovering, sharing, and 
correlating metadata across services and different types of resources. We 
believe this affects both end users and administrators.

For end users, the UI can be too technical and require too much pre-existing 
knowledge of OpenStack concepts. For example, when you launch instances, you 
should be able to just specify categories like "Big Data" or an "OS Family" and 
let the system find the boot source for you, whether that is an image, 
snapshot, or volume.  It should also allow finer grained filtering such as 
choosing specific versions of software that you want.

For administrators, we'd like there to be an easier way to meaningfully 
collaborate on properties across host aggregates, flavors, images, volumes, or 
other cloud resources. Today, this often involves searching wikis and opening 
the source code.

We, HP and Intel, believe that both of the above problems come back to needing 
a better way for users to collaborate on metadata across services and resource 
types.  We started project Graffiti to explore ideas and concepts for how to 
make this easier and more approachable for end users. We're asking for your 
input and participation to help us move forward!

To help explain the ideas of the project, we have created a quick screencast 
demonstrating the concepts running under POC code. Please take a look!


* Graffiti Concepts Overview: http://youtu.be/f0SZtPgcxk4

Please join with us to help refine the concepts and identify where we can best 
fit in to the ecosystem. We have a few blueprints, but they need additional 
support outside of the Horizon UI. We believe our best path is one where we can 
contribute the Graffiti service as either a new project in an existing program 
or as a series of enhancements to existing projects.  Your insight and feedback 
are important to us, and we look forward to growing this initiative with you!

We have a design session at the summit where we'd love to have open discussion 
with all who can attend:

Juno Summit Design Session
http://sched.co/1m7wghx

For more info, please visit our wiki:

Project Wiki
https://wiki.openstack.org/wiki/Graffiti

IRC
#graffiti on Freenode (http://freenode.net/)

Related Blueprints
https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering
https://blueprints.launchpad.net/horizon/+spec/faceted-search
https://blueprints.launchpad.net/horizon/+spec/tagging
https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

Thank you,
Travis Tripp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ML2] L2 population mechanism driver

2014-05-05 Thread Sławek Kapłoński
Hello,

Thanks for the answer. I have now made my own mech_driver which inherits from the l2_pop 
driver and it is working OK. But if I set it as 
mechanism_drivers=openvswitch,l2population, then ports will be bound by the 
ovs driver -- so will the population mechanism still work in such a network?
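
For reference, a rough sketch of that inheritance approach (module paths from
memory as of Icehouse -- treat this as untested; at minimum both parents'
initialize() needs to run, and the combined class must be registered as an
entry point in the neutron.ml2.mechanism_drivers namespace):

    # Hypothetical combined driver: OVS port binding plus l2pop fan-out
    from neutron.plugins.ml2.drivers.l2pop import mech_driver as l2pop
    from neutron.plugins.ml2.drivers import mech_openvswitch as ovs


    class OvsL2PopMechanismDriver(ovs.OpenvswitchMechanismDriver,
                                  l2pop.L2populationMechanismDriver):
        def initialize(self):
            ovs.OpenvswitchMechanismDriver.initialize(self)
            l2pop.L2populationMechanismDriver.initialize(self)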

Best regards
Slawek Kaplonski
sla...@kaplonski.pl

On Monday, May 5, 2014 at 00:29:56, Narasimhan, Vivekanandan wrote:
 Hi Slawek,
 
 I think L2 pop driver needs to be used in conjunction with other mechanism
 drivers.
 
 It only deals with pro-actively informing agents on which MAC Addresses
 became available/unavailable on cloud nodes and is not meant for
 binding/unbinding ports on segments.
 
 If you configure mechanism_drivers=openvswitch,l2population in your
 ml2_conf.ini and restart your neutron-server, you'll notice that bind_port
 is handled by OVS mechanism driver (via AgentMechanismDriverBase inside
 ml2/drivers/mech_agent.py).
 
 --
 Thanks,
 
 Vivek
 
 
 -Original Message-
 From: Sławek Kapłoński [mailto:sla...@kaplonski.pl]
 Sent: Sunday, May 04, 2014 12:32 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [ML2] L2 population mechanism driver
 
 Hello,
 
 Recently I wanted to try using the L2pop mechanism driver in ML2 (with openvswitch
 agents on compute nodes). But every time I try to spawn an instance I
 get a binding failed error. After some searching in the code I found that
 the l2pop driver has not implemented the method bind_port, and as it inherits
 directly from MechanismDriver this method is in fact not implemented.
 Is this OK -- should this mechanism driver be used in some other way, or maybe
 there is a bug in this driver and it is missing this method?
 
 Best regards
 Slawek Kaplonski
 sla...@kaplonski.pl
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]The status of the hypervisor node and the service

2014-05-05 Thread Jiang, Yunhong
Hi, all
Currently I'm working on a spec at https://review.openstack.org/#/c/90172/4 
which returns the status of the hypervisor node. Per the comments, including 
comments from operators, this is a welcome feature. As in 
https://review.openstack.org/#/c/90172/4/specs/juno/return-status-for-hypervisor-node.rst#77
 , I try to return the status as up, down, or disabled, which is in fact a 
mix of the corresponding service's status and state.

However, there is some disagreement on how to return the status. For 
example, should we return both the 'status' and 'state', and should we return 
the 'disabled reason' if the corresponding service is disabled?

I have several questions that want to get feedback from the community:
a) Why do we distinguish the service 'status' and 'state'? What's the exact 
difference between 'state' and 'status' in English? IMHO, a service is 'up' when 
enabled, and is 'down' when it is either disabled or has temporarily lost the 
heartbeat. 
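(For reference, my reading of the current os-services semantics: 'status' is 
the administrator-set enabled/disabled flag, while 'state' is the up/down 
liveness derived from the service heartbeat -- e.g., in nova service-list 
output:

    Binary        Host   Status    State
    nova-compute  node1  enabled   up
    nova-compute  node2  disabled  down
)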

b) The difference between the hypervisor node status and the service status. I know the 
relationship of 'node' and 'host' is still under discussion 
(http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#.U2fy3PldWrQ
 ); do you think the node status and the service status are in scope of that 
discussion? Or should I simply copy the status/state/disabled_reason into the 
hypervisor_node status? Would it be possible that in some virt driver, one 
hypervisor node could have its own status value different from the service value?

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposal: add local hacking for oslo-incubator

2014-05-05 Thread Doug Hellmann
On Mon, May 5, 2014 at 1:56 PM, Ben Nemec openst...@nemebean.com wrote:
 On 05/05/2014 10:02 AM, ChangBo Guo wrote:

 Hi Stackers,

 I find some common code style issues that should be avoided while I'm reviewing
 code, so I think these checks would
 be nice to move into local hacking. Local hacking checks can ease the reviewer
 burden.

 The idea comes from a keystone blueprint [1].
 Hacking is a great start at automating checks for common style issues.
 There are still lots of things that it is not checking for that it
 probably should. Local hacking checks ease the reviewer burden. This is the
 list from [1][2] that would be nice to move into automated checks:

 - use import style 'from openstack.common.* import' not use 'import
 openstack.common.*'


 This is the only one that I think belongs in Oslo.  The others are all
 generally applicable, but the other projects aren't going to want to enforce
 the import style since it's only to make Oslo syncs work right.

+1, and only for the incubator repository
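
For illustration, a minimal sketch of what that check could look like as a
local hacking check (following the style of nova/hacking/checks.py; the
'O301' code is made up):

    import re

    oslo_namespace_import = re.compile(r"^\s*import\s+openstack\.common")


    def check_oslo_import_style(logical_line):
        """O301 - use 'from openstack.common import foo', not
        'import openstack.common.foo'.
        """
        if oslo_namespace_import.match(logical_line):
            yield (0, "O301: use 'from openstack.common import ...', "
                      "not 'import openstack.common...'")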

 - assertIsNone should be used when using None with assertEqual
 - _() should not be used in debug log statements
 - do not use 'assertTrue(isinstance(a, b))' sentences
 - do not use 'assertEqual(type(A), B)' sentences


 The _() one in particular I think we'll want as we make the logging changes.
 Some additional checks to make sure the correct _ function is used with
 the correct logging function would be good too (for example,
 LOG.warning(_LE('foo')) should fail pep8).

 But again, that belongs in hacking proper, not an Oslo module.

+1

 The assert ones do seem to fit the best practices as I understand them, but
 I suspect there's going to be quite a bit of work to get projects compliant.

I've seen some work being done on that already, but I don't know how
strongly we care about those specific rules as an overall project.

Doug



 [1]
 https://blueprints.launchpad.net/keystone/+spec/more-code-style-automation
 [2] https://github.com/openstack/nova/blob/master/nova/hacking/checks.py

 I just registered a blueprint for this in [3] and submitted the first patch in
 [4].

 [3] https://blueprints.launchpad.net/oslo/+spec/oslo-local-hacking

 [4] https://review.openstack.org/#/c/87832/


 Should we add local hacking checks for oslo-incubator? If yes, what
 other checks should be added?
 Your comments are appreciated :-)

 --
 ChangBo Guo(gcb)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] ATL Summit Pads - Please Review

2014-05-05 Thread Kurt Griffiths
Hi everyone, I’ve seeded some pads corresponding to the design sessions we have 
scheduled; please review these and help me flesh them out this week, leading up 
to the summit:

  *   Tues 14:50 - Queue Flavors (https://etherpad.openstack.org/p/juno-marconi-queue-flavors) — Flavio Percoco
  *   Tues 15:40 - Notifications on Marconi (https://etherpad.openstack.org/p/juno-marconi-notifications-on-marconi) — Balaji Iyer
  *   Tues 16:40 - Marconi Dev/Ops Session (https://etherpad.openstack.org/p/ATL-marconi-ops) — TBD (need a volunteer to moderate this; let me know if you are interested)
  *   Tues 17:30 - Scaling an Individual Queue (https://etherpad.openstack.org/p/juno-marconi-scale-single-queue) — Kurt Griffiths

Also, I’ve set up some pads for the unconference sessions (@ the Marconi table) 
I know about:

  *   Signed messages (https://etherpad.openstack.org/p/juno-marconi-signed-messages)
  *   Benchmarking (https://etherpad.openstack.org/p/juno-marconi-benchmarking)
  *   Performance Tuning (https://etherpad.openstack.org/p/juno-marconi-perf-tuning)

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][olso] How is the trusted message going on?

2014-05-05 Thread Jiang, Yunhong
Hi, all
The trusted messaging 
(https://blueprints.launchpad.net/oslo.messaging/+spec/trusted-messaging) has 
been removed from icehouse; does anyone know its current status? I noticed a 
summit session may cover it ( 
http://junodesignsummit.sched.org/event/9a6b59be11cdeaacfea70fef34328931#.U2gMo_ldWrQ
 ), but I would really appreciate it if anyone could provide some information.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-05 Thread Stephen Balukoff
Hi Sam,

So, If I understand you correctly, you don't think that specifying routing
rules (eg. static routing configuration) should be beyond the scope of
LBaaS?

I agree that it might be possible to reach a given member over different
routes. The example that comes to mind for me is a member with a public IP
on the internet somewhere that's either accessible from the VIP address via
the VIP's subnet's default gateway, or via a VPN service available on the
same layer 2 network. But if we're going to support choosing routes to a
given member, shouldn't this information be located with the member?

I don't know why putting this information as properties of the VIP in the
object model would make scheduling and placing the configuration any
easier--  specifically, if you've got enough information / completed
objects to deploy a load balancing service, wouldn't the service's pools
and pool member information also be implicitly available as part of the
overall configuration for the service?

Thanks,
Stephen


On Sun, May 4, 2014 at 12:36 AM, Samuel Bercovici samu...@radware.comwrote:

  Hi,



 I prefer a different approach (AKA, I oppose :))

 I think that this information should be properties of the VIP and not the
 pool.

 So VIP should have:

 1.   VIP subnet (this is where the IP will be allocated)

 2.   List of members subnets (it could be optional. This means that
 members have L2 proximity on the VIP subnet)

 3.   List of static routes (to be able to specify how to reach
 members which are not in L2 proximity) – if not present, this could be
 calculated by the “driver” backend, but sometimes, where multiple
 different paths could be used, user intervention might be required.
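
 For illustration, such a VIP might then serialize to something like (purely
 hypothetical field names):

     {
         "vip": {
             "subnet_id": "<vip-subnet-uuid>",
             "member_subnets": ["<subnet-uuid-1>", "<subnet-uuid-2>"],
             "static_routes": [
                 {"destination": "203.0.113.0/24", "nexthop": "10.0.0.1"}
             ]
         }
     }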



 I prefer this approach for the following:

 1.   Concentrating the L3 information in a single place (VIP) – this
 also makes scheduling and placement of the configuration easier.

 2.   When using multiple pools (L7 content switching) that have
 members on the same subnet, no need to repeat the subnet information



 Regards,

 -Sam.







 *From:* Adam Harwell [mailto:adam.harw...@rackspace.com]
 *Sent:* Saturday, May 03, 2014 10:17 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs
 Distinction



 Sounds about right to me. I guess I agree with your agreement. :)

 Does anyone actually *oppose* this arrangement?



 --Adam



 *From: *Stephen Balukoff sbaluk...@bluebox.net
 *Reply-To: *OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 *Date: *Friday, May 2, 2014 7:53 PM
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Subject: *Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs
 Distinction



 Hi guys,



 Yep, so what I'm hearing is that we should be able to assume that either
 all members in a single pool are adjacent (ie. layer-2 connected), or are
 routable from that subnet.



 Adam-- I could see it going either way with regard to how to communicate
 with members:  If the particular device that the provider uses lives
 outside tenant private networks, the driver for said devices would need to
 make sure that VIFs (or some logical equivalent) are added such that the
 devices can talk to the members. This is also the case for virtual load
 balancers (or other devices) which are assigned to the tenant but live on
 an external network. (In this topology, VIP subnet and pool subnet could
 differ, and the driver needs to make sure that the load balancer has a
 virtual interface/neutron port + IP address on the pool subnet.)



 There's also the option that if the device being used for load balancing
 exists as a virtual appliance that can be deployed on an internal network,
 one can make it publicly accessible by adding a neutron floating IP (ie.
 static NAT rule) that forwards any traffic destined for a public external
 IP to the load balancer's internal IP address.  (In this topology, VIP
 subnet and pool subnet would be the same thing.) The nifty thing about this
 topology is that load balancers that don't have this static NAT rule added
 are implicitly private to the tenant internal subnet.



 Having seen what our customers do with their topologies, my gut reaction
 is to say that the 99.9% use case is that all the members of a pool will be
 in the same subnet, or routable from the pool subnet. And I agree that if
 someone has a really strange topology in use that doesn't work with this
 assumption, it's not the job of LBaaS to try and solve this for them.



 Anyway, I'm hearing general agreement that subnet_id should be an
 attribute of the pool.



 On Fri, May 2, 2014 at 5:24 AM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Agree with Sam here,

 Moreover, i think it makes sense to leave subnet an attribute of the pool.

 Which would mean that members reside in that subnet or are 

[openstack-dev] [Ceilometer][AMQP]

2014-05-05 Thread Hachem Chraiti
Hi,
how can I fix this error, please, with OpenStack Havana on Ubuntu 12.04 LTS:

Error communicating with http://controller:8777 [Errno 111] Connection
refused

*ceilometer-agent-central.log :
2014-05-05 23:28:50.633 14745 ERROR ceilometer.openstack.common.rpc.common [-] AMQP server on controller:5672 is unreachable: Socket closed. Trying again in 30 seconds.
2014-05-05 23:31:26.454 16607 ERROR ceilometer.openstack.common.rpc.common [-] Failed to consume message from queue: Socket closed
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common Traceback (most recent call last):
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/ceilometer/openstack/common/rpc/impl_kombu.py", line 577, in ensure
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     return method(*args, **kwargs)
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/ceilometer/openstack/common/rpc/impl_kombu.py", line 657, in _consume
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     return self.connection.drain_events(timeout=timeout)
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 281, in drain_events
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     return self.transport.drain_events(self.connection, **kwargs)
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/kombu/transport/pyamqp.py", line 91, in drain_events
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     return connection.drain_events(**kwargs)
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 266, in drain_events
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     chanmap, None, timeout=timeout,
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 328, in _wait_multiple
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     channel, method_sig, args, content = read_timeout(timeout)
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 292, in read_timeout
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     return self.method_reader.read_method()
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common   File "/usr/lib/python2.7/dist-packages/amqp/method_framing.py", line 187, in read_method
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common     raise m
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common IOError: Socket closed
2014-05-05 23:31:26.454 16607 TRACE ceilometer.openstack.common.rpc.common
2014-05-05 23:31:26.474 16607 ERROR ceilometer.openstack.common.rpc.common [-] AMQP server on controller:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.
2014-05-05 23:31:27.486 16607 ERROR ceilometer.openstack.common.rpc.common [-] AMQP server on controller:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 3 seconds.




***ceilometer-api.log:


2014-05-05 23:53:15.684 17722 INFO keystoneclient.middleware.auth_token [-] Starting keystone auth_token middleware
2014-05-05 23:53:15.685 17722 INFO keystoneclient.middleware.auth_token [-] Using /tmp/keystone-signing-RrLKeq as cache directory for signing certificate
2014-05-05 23:53:15.721 17722 CRITICAL ceilometer [-] command SON([('authenticate', 1), ('user', u'ceilometer'), ('nonce', u'f4ca9befd11425b4'), ('key', u'8c5de2059a054c0ae7afb15c4c946c43')]) failed: auth fails



Need help pleeease
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack access using Java SDK

2014-05-05 Thread Stefano Maffulli
On Mon 05 May 2014 05:47:02 AM PDT, Vikas Kokare wrote:
 I am looking for a standard, seamless way to access OpenStack APIs ,
 most likely using the Java SDKs that are summarized at
 https://wiki.openstack.org/wiki/SDKs#Software_Development_Kits
[...]

This is the wrong list for usage questions, this list is to discuss the 
future development of OpenStack projects. Please ask on the General 
list openst...@lists.openstack.org.

Thanks
Stef

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs

2014-05-05 Thread Stephen Balukoff
Hi Sam,

In working off the document in the wiki on L7 functionality for LBaaS (
https://wiki.openstack.org/wiki/Neutron/LBaaS/l7 ), I notice that
MODIFY_CONTENT is one of the actions listed for a
L7VipPolicyAssociation. That's
the primary reason I included this in the API design I created.

To be honest, it frustrates me more than a little to hear after the fact
that the only locate-able documentation like this online is inaccurate on
many meaningful details like this.

I could actually go either way on this issue:  I included content
modification as one possible action of L7Policies, but it is somewhat
wedged in there:  It works, but in L7Policies that do content modification
or blocking of the request, the order field as I've proposed it could be
confusing for users, and these L7Policies wouldn't be associated with a
back-end pool anyway.

I'm interested in hearing others' opinions on this as well.

Stephen



On Mon, May 5, 2014 at 6:47 AM, Samuel Bercovici samu...@radware.comwrote:

  Hi Stephen,



 For Icehouse we did not go into L7 content modification as the general
 feeling was that it might not be exactly the same as content switching and
 we wanted to tackle content switching first.



 L7 content switching and L7 content modification are different, I prefer
 to be explicit and declarative and use different objects.

 This will make the API more readable.

 What do you think?



 I plan to look deeper into L7 content modification later this week to
 propose a list of capabilities.



 -Sam.





 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 *Sent:* Saturday, May 03, 2014 1:33 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS]L7 conent switching APIs



 Hi Adam and Samuel!



 Thanks for the questions / comments! Reactions in-line:



 On Thu, May 1, 2014 at 8:14 PM, Adam Harwell adam.harw...@rackspace.com
 wrote:

 Stephen, the way I understood your API proposal, I thought you could
 essentially combine L7Rules in an L7Policy, and have multiple L7Policies,
 implying that the L7Rules would use AND style combination, while the
 L7Policies themselves would use OR combination (I think I said that right,
 almost seems like a tongue-twister while I'm running on pure caffeine). So,
 if I said:



 Well, my goal wasn't to create a whole DSL for this (or anything much
 resembling this) because:

1. Real-world usage of the L7 stuff is generally pretty primitive.
Most L7Policies will consist of 1 rule. Those that consist of more than one
rule are almost always the sort that need a simple sort. This is based off
the usage data collected here (which admittedly only has Blue Box's data--
because apparently nobody else even offers L7 right now?)

 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing
2. I was trying to keep things as simple as possible to make it easier
for load balancer vendors to support. (That is to say, I wouldn't expect
all vendors to provide the same kind of functionality as HAProxy ACLs, for
example.)

  Having said this, I think yours and Sam's clarification that different
 L7Policies can be used to effectively OR conditions together makes sense,
 and therefore assuming all the Rules in a given policy are ANDed together
 makes sense.



 If we do this, it therefore also might make sense to expose other criteria
 on which L7Rules can be made, like HTTP method used for the request and
 whatnot.



 Also, should we introduce a flag to say whether a given Rule's condition
 should be negated?  (eg. HTTP method is GET and URL is *not* /api) This
 would get us closer to being able to use more sophisticated logic for L7
 routing.
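
For example, with a hypothetical negate flag added to the L7Rule fields from
the wiki model, "URL is *not* /api" could be expressed as something like:

    { "type": "PATH", "compare_type": "STARTS_WITH", "value": "/api", "negate": true }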



 Does anyone foresee the need to offer this kind of functionality?



   * The policy { rules: [ rule1: match path REGEX .*index.*, rule2:
 match path REGEX hello/.* ] } directs to Pool A

  * The policy { rules: [ rule1: match hostname EQ mysite.com ] }
 directs to Pool B

 then order would matter for the policies themselves. In this case, if they
 ran in the order I listed, it would match mysite.com/hello/index.htm
 and direct it to Pool A, while mysite.com/hello/nope.htm would not
 match BOTH rules in the first policy, and would be caught by the second
 policy, directing it to Pool B. If I had wanted the first policy to use OR
 logic, I would have just specified two separate policies both pointing to
 Pool A:



 Clarification on this: There is an 'order' attribute to L7Policies. :) But
 again, if all the L7Rules in a given policy are ANDed together, then order
 doesn't matter within the rules that make up an L7Policy.



* The policy { rules: [ rule1: match path REGEX .*index.* ] }
 directs to Pool A

  * The policy { rules: [ rule1: match path REGEX hello/.* ] } directs to
 Pool A

  * The policy { rules: [ rule1: match hostname EQ mysite.com ] }
 directs to Pool B

 In that case, it would 

[openstack-dev] [all] process problem with release tagging

2014-05-05 Thread John Dickinson
tl;dr: (1) the current tag names used don't work and we need
something else. (2) Swift (at least) needs to burn a
release number with a new tag

The current process of release is:

1) branch milestone-proposed (hereafter, m-p) from master
2) tag m-p with an RC tag (eg 1.13.1.rc1)
* note that since there are no commits on m-p,
  this tag is an ancestor of master (effectively on master itself)
3) continue development on master
3.1) backport any changes necessary to m-p
4) after QA, tag m-p with the final version
5) merge m-p into master, thus making the final version tag
   an ancestor of master[0]
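
(In git terms, the above is roughly:

    git checkout -b milestone-proposed origin/master      # step 1
    git tag -s 1.13.1.rc1                                 # step 2
    # QA happens; backports land on milestone-proposed (step 3.1)
    git tag -s 1.13.1                                     # step 4
    git checkout master && git merge milestone-proposed   # step 5
)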


This process has 2 flaws:

First (and easiest to fix), the rc tag name sorts after the final
release name (`dpkg --compare-versions 1.13.1.rc1.25 lt 1.13.1`
fails). The practical result is that if someone grabbed a version of
the repo after m-p was created but before the merge and then packaged
and deployed it, their currently-installed version actually sorts
newer than the current version on master[1]. The short-term fix is to
burn a version number to get a newer version on master. The long-term
fix is to use a different template for creating the RC tags on m-p.
For example, `dpkg --compare-versions 1.13.1~rc1.25 lt 1.13.1` works.

Second, the process creates a time window where the version number on
master is incorrect. There are a few ways I'd propose to fix this. One
way is to stop using post-release versioning. Set the version number
in a string in the code when development starts so that the first
commit after a release (or creation of m-p) is the version number for
the next release. I'm not a particular fan of this, but it is the way
we used to do things and it does work.

Another option would be to not tag a release until the m-p branch
actually is merged to master. This would eliminate any windows of
wrong versions and keep master always deployable (all tags, except
security backports, would be on master). Another option would be to do
away with the m-p branch altogether and only create it if there is a
patch needed after the RC period starts.

The general idea of keeping release tags on the master branch would
help enable deployers (ie ops) who are tracking master and not just
releasing the distro-packaged versions. We know that some of the
largest and loudest OpenStack deployers are proud that they follow
master.

What other options are there?


[0] This is the process for Swift, but in both Keystone and Ceilometer
I don't see any merge commits from m-p back to master. This
actually means that for Keystone and Ceilometer, any deployer
packaging master will get bitten by the same issue we've seen in
the Swift community.
[1] In Icehouse, this window of opportunity was exacerbated by the
long time (2 weeks?) it took to get m-p merged back into master.



--John





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs Distinction

2014-05-05 Thread Stephen Balukoff
German--

I agree. This sounds very much like an edge case that we don't need to
worry about supporting until someone comes up with a specific use case to
illustrate the problem.

Stephen


On Mon, May 5, 2014 at 5:03 PM, Eichberger, German german.eichber...@hp.com
 wrote:

  Hi Stephen,



 I think this is too strange an edge case to be covered by LBaaS. In
 any case, if there is a valid use case, I am wondering if we can add it to
 the user stories.



 German



 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 *Sent:* Monday, May 05, 2014 4:05 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs
 Distinction



 Hi Sam,



 So, If I understand you correctly, you don't think that specifying routing
 rules (eg. static routing configuration) should be beyond the scope of
 LBaaS?



 I agree that it might be possible to reach a given member over different
 routes. The example that comes to mind for me is a member with a public IP
 on the internet somewhere that's either accessible from the VIP address via
 the VIP's subnet's default gateway, or via a VPN service available on the
 same layer 2 network. But if we're going to support choosing routes to a
 given member, shouldn't this information be located with the member?



 I don't know why putting this information as properties of the VIP in the
 object model would make scheduling and placing the configuration any
 easier--  specifically, if you've got enough information / completed
 objects to deploy a load balancing service, wouldn't the service's pools
 and pool member information also be implicitly available as part of the
 overall configuration for the service?



 Thanks,

 Stephen



 On Sun, May 4, 2014 at 12:36 AM, Samuel Bercovici samu...@radware.com
 wrote:

 Hi,



  I prefer a different approach (AKA, I oppose :))

 I think that this information should be properties of the VIP and not the
 pool.

 So VIP should have:

 1.   VIP subnet (this is where the IP will be allocated)

 2.   List of members subnets (it could be optional. This means that
 members have L2 proximity on the VIP subnet)

  3.   List of static routes (to be able to specify how to reach
  members which are not in L2 proximity) – if not present, this could be
  calculated by the “driver” backend, but sometimes, where multiple
  different paths could be used, user intervention might be required.



 I prefer this approach for the following:

 1.   Concentrating the L3 information in a single place (VIP) – this
 also makes scheduling and placement of the configuration easier.

 2.   When using multiple pools (L7 content switching) that have
 members on the same subnet, no need to repeat the subnet information



 Regards,

 -Sam.







 *From:* Adam Harwell [mailto:adam.harw...@rackspace.com]
 *Sent:* Saturday, May 03, 2014 10:17 AM


 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs
 Distinction



 Sounds about right to me. I guess I agree with your agreement. :)

 Does anyone actually *oppose* this arrangement?



 --Adam



 *From: *Stephen Balukoff sbaluk...@bluebox.net
 *Reply-To: *OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 *Date: *Friday, May 2, 2014 7:53 PM
 *To: *OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 *Subject: *Re: [openstack-dev] [Neutron][LBaaS] Use-Cases with VPNs
 Distinction



 Hi guys,



 Yep, so what I'm hearing is that we should be able to assume that either
 all members in a single pool are adjacent (ie. layer-2 connected), or are
 routable from that subnet.



 Adam-- I could see it going either way with regard to how to communicate
 with members:  If the particular device that the provider uses lives
 outside tenant private networks, the driver for said devices would need to
 make sure that VIFs (or some logical equivalent) are added such that the
 devices can talk to the members. This is also the case for virtual load
 balancers (or other devices) which are assigned to the tenant but live on
 an external network. (In this topology, VIP subnet and pool subnet could
 differ, and the driver needs to make sure that the load balancer has a
 virtual interface/neutron port + IP address on the pool subnet.)



 There's also the option that if the device being used for load balancing
 exists as a virtual appliance that can be deployed on an internal network,
 one can make it publicly accessible by adding a neutron floating IP (ie.
 static NAT rule) that forwards any traffic destined for a public external
 IP to the load balancer's internal IP address.  (In this topology, VIP
 subnet and pool subnet would be the same thing.) The nifty thing about this
 topology is that load balancers that don't have this static NAT rule added
 are implicitly 

Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-05-05 Thread James Polley
To revive this thread... I'd really like to see us trying out alternate
meeting times for a while.

 Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

Looking at the iCal feed it looks like it's actually taken by Solum. But in
any case:

On Thu, Mar 20, 2014 at 11:17 AM, Robert Collins robe...@robertcollins.net
 wrote:

 I think we need a timezone to cover the current impossible regions:

  - Australia
  - China / Japan
  - India


1400UTC is midnight Sydney time, 11pm Tokyo, 7:30pm New Delhi. Personally
that's worse for me than 1700UTC.

I'd prefer to move it significantly later than 1900UTC, especially if we
want a time that's civilized (or at least, achievable) in India. 0700UTC
seems ideal to me - 5pm Sydney, 4pm Tokyo, 12:30pm New Delhi. It's not
 friendly to the US (midnight SF, 3am NYC), but it should work well for
Europe - 8am London, and it gets later in the day as you go east.

What would we need to do to try this time out next week?


On Thu, Mar 20, 2014 at 8:03 PM, j...@ioctl.org wrote:

 On Wed, 19 Mar 2014, Sullivan, Jon Paul wrote:

   From: James Slagle [mailto:james.sla...@gmail.com]
   Sent: 18 March 2014 19:58
   Subject: [openstack-dev] [TripleO] Alternating meeting time for more TZ
   friendliness
  
   Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
   ok for most folks in and around North America.
  
   It was proposed during today's meeting to see if there is interest is
 an
   alternating meeting time every other week so that we can be a bit more
   friendly to those folks that currently can't attend.
   If that interests you, speak up :).
 
  Speaking up! :D

 Also interested.

  Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

 Relatively speaking, that's actually sociable.

 --
 Update your address books: j...@ioctl.org  http://ioctl.org/jan/
 perl -e 's?ck?t??print:perl==pants if $_=Just Another Perl Hacker\n'

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] process problem with release tagging

2014-05-05 Thread Clark Boylan
On Mon, May 5, 2014 at 5:14 PM, John Dickinson m...@not.mn wrote:
 tl;dr: (1) the current tag names used don't work and we need
 something else. (2) Swift (at least) needs to burn a
 release number with a new tag

 The current process of release is:

 1) branch milestone-proposed (hereafter, m-p) from master
 2) tag m-p with an RC tag (eg 1.13.1.rc1)
 * note that since there are no commits on m-p,
   this tag is an ancestor of master (effectively on master itself)
 3) continue development on master
 3.1) backport any changes necessary to m-p
 4) after QA, tag m-p with the final version
 5) merge m-p into master, thus making the final version tag
an ancestor of master[0]


 This process has 2 flaws:

 First (and easiest to fix), the rc tag name sorts after the final
 release name (`dpkg --compare-versions 1.13.1.rc1.25 lt 1.13.1`
 fails). The practical result is that if someone grabbed a version of
 the repo after m-p was created but before the merge and then packaged
 and deployed it, their currently-installed version actually sorts
 newer than the current version on master[1]. The short-term fix is to
 burn a version number to get a newer version on master. The long-term
 fix is to use a different template for creating the RC tags on m-p.
 For example, `dpkg --compare-versions 1.13.1~rc1.25 lt 1.13.1` works.

Going to defer to mordred on this, but isn't this a known problem with
known fixes?

 Second, the process creates a time window where the version number on
 master is incorrect. There are a few ways I'd propose to fix this. One
 way is to stop using post-release versioning. Set the version number
 in a string in the code when development starts so that the first
 commit after a release (or creation of m-p) is the version number for
 the next release. I'm not a particular fan of this, but it is the way
 we used to do things and it does work.

 Another option would be to not tag a release until the m-p branch
 actually is merged to master. This would eliminate any windows of
 wrong versions and keep master always deployable (all tags, except
 security backports, would be on master). Another option would be to do
 away with the m-p branch altogether and only create it if there is a
 patch needed after the RC period starts.

I don't have all the answers, but I am pretty sure that the m-p
branches are just a source of pain and don't really help us in our
release process. I'm hoping to talk to ttx about this a bit more at the
summit, but the existence of the m-p branch introduces some weird
interactions in the test infrastructure, allowing for periods of time
where we don't correctly test everything; then we release, cut the
stable branch, start testing properly, and hope that nothing broke
during that window. So I am a fan of removing these branches.

For projects other than Swift, I think that means we cut the stable/foo
branch instead of an m-p branch. For Swift, maybe we cut a
stable/releaseversion branch or forgo it completely as you suggest. But
going straight to stable makes a clear delineation between "this is our
release plus any fixes backported from master" and what master is. It
will also open up master to development sooner in the dev cycle, which
may help things move along more quickly.

 The general idea of keeping release tags on the master branch would
 help enable deployers (ie ops) who are tracking master and not just
 releasing the distro-packaged versions. We know that some of the
 largest and loudest OpenStack deployers are proud that they follow
 master.

 What other options are there?


 [0] This is the process for Swift, but in both Keystone and Ceilometer
 I don't see any merge commits from m-p back to master. This
 actually means that for Keystone and Ceilometer, any deployer
 packaging master will get bitten by the same issue we've seen in
 the Swift community.
 [1] In Icehouse, this window of opportunity was exacerbated by the
 long time (2 weeks?) it took to get m-p merged back into master.



 --John


Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-05-05 Thread James Polley
Actually, it's probably best that we don't do the first meeting at a
US-unfriendly time during the week that the summit is on in Atlanta.

Maybe the week after summit, or even the second week after summit?


On Tue, May 6, 2014 at 10:49 AM, James Polley j...@jamezpolley.com wrote:

 To revive this thread... I'd really like to see us trying out alternate
 meeting times for a while.


  Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

 Looking at the iCal feed it looks like it's actually taken by Solum. But
 in any case:

  On Thu, Mar 20, 2014 at 11:17 AM, Robert Collins 
 robe...@robertcollins.net wrote:

 I think we need a timezone to cover the current impossible regions:

  - Australia
  - China / Japan
  - India


 1400UTC is midnight Sydney time, 11pm Tokyo, 7:30pm New Delhi. Personally
 that's worse for me than 1700UTC.

 I'd prefer to move it significantly later than 1900UTC, especially if we
 want a time that's civilized (or at least, achievable) in India. 0700UTC
 seems ideal to me - 5pm Sydney, 4pm Tokyo, 12:30pm New Delhi. It's not
  friendly to the US (midnight SF, 3am NYC), but it should work well for
 Europe - 8am London, and it gets later in the day as you go east.

 What would we need to do to try this time out next week?


 On Thu, Mar 20, 2014 at 8:03 PM, j...@ioctl.org wrote:

 On Wed, 19 Mar 2014, Sullivan, Jon Paul wrote:

   From: James Slagle [mailto:james.sla...@gmail.com]
   Sent: 18 March 2014 19:58
   Subject: [openstack-dev] [TripleO] Alternating meeting time for more
 TZ
   friendliness
  
   Our current meeting time is Tuesdays at 19:00 UTC.  I think this works
   ok for most folks in and around North America.
  
   It was proposed during today's meeting to see if there is interest in
 an
   alternating meeting time every other week so that we can be a bit more
   friendly to those folks that currently can't attend.
   If that interests you, speak up :).
 
  Speaking up! :D

 Also interested.

  Tuesdays at 14:00 UTC on #openstack-meeting-alt is available.

 Relatively speaking, that's actually sociable.

 --
 Update your address books: j...@ioctl.org  http://ioctl.org/jan/
 perl -e 's?ck?t??print:perl==pants if $_=Just Another Perl Hacker\n'




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposal: add local hacking for oslo-incubator

2014-05-05 Thread David Stanek
On Mon, May 5, 2014 at 5:28 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:


  The assert ones do seem to fit the best practices as I understand them,
 but
  I suspect there's going to be quite a bit of work to get projects
 compliant.

 I've seen some work being done on that already, but I don't know how
 strongly we care about those specific rules as an overall project.


I created the Keystone blueprint[1] to automate the things we already check
for in reviews. My motivation was to make it faster for contributors to
contribute because they would get feedback before getting a bunch of -1s in
Gerrit. I also wanted to free up core dev resources so that we can focus on
more important parts of reviews.

I'd be happy to start putting some of these in hacking, but I don't know
which rules would be acceptable to all projects. Maybe there is a way to
make optional checks that can be enabled in tox.ini.

1.
https://blueprints.launchpad.net/keystone/+spec/more-code-style-automation
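
For reference, local checks of that era are just flake8-style functions
that take the logical line and yield (offset, message) pairs. A minimal
sketch of one such check follows - the K001 rule name here is made up for
illustration:

    import re

    RE_ASSERT_TRUE_ISINSTANCE = re.compile(r"assertTrue\(isinstance\(")

    def check_assert_true_isinstance(logical_line):
        """K001: use assertIsInstance, not assertTrue(isinstance(...))."""
        if RE_ASSERT_TRUE_ISINSTANCE.search(logical_line):
            yield (0, "K001: use assertIsInstance(observed, expected_type)")

A check like this then gets wired into the project's flake8/hacking
configuration so it runs as part of the normal pep8 tox job.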

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about addit log in nova-compute.log

2014-05-05 Thread Jay Pipes

On 05/05/2014 04:19 PM, Jiang, Yunhong wrote:

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, May 05, 2014 9:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Question about addit log in
nova-compute.log

On 05/04/2014 11:09 PM, Chen CH Ji wrote:

Hi
 I saw the following entries in my compute.log, which looked
strange to me at first: the negative free-resource numbers confused me, so
I took a look at the existing code.
 The logic looks correct to me and the calculation doesn't have a
problem, but the 'Free' output is confusing.

 Is this on purpose, or might it need to be enhanced?

2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free ram (MB): -1559
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 29
2014-05-05 10:51:33.732 4992 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -3


Hi Kevin,

I think changing free to available might make things a little more
clear. In the above case, it may be that your compute worker has both
CPU and RAM overcommit enabled.

Best,
-jay


Hi Jay,
I don't think changing 'free' to 'available' will make it clearer.
IMHO, the calculation of 'free' is bogus. When reporting the status in
the periodic task, the resource tracker has no idea of the over-commit ratio
at all, so it simply subtracts the total RAM assigned to instances from
the RAM reported by the hypervisor, without considering over-commitment at
all. So this number is really meaningless.


Agreed that in its current state, it's meaningless. But... that said, 
the numbers *could* be used to show oversubscription percentage, and you 
don't need to know the max overcommit ratio in order to calculate that 
with the numbers already known.
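
A toy sketch of that idea, with made-up numbers chosen to match the log
above:

    def oversubscription_pct(total_mb, assigned_mb):
        # >100 means more RAM is promised to instances than physically exists.
        return 100.0 * assigned_mb / total_mb

    total_mb = 8192                # hypothetical 8 GB compute node
    assigned_mb = total_mb + 1559  # 'Free ram (MB): -1559' => 1559 MB over
    print("%.0f%% of physical RAM committed"
          % oversubscription_pct(total_mb, assigned_mb))  # ~119%

No overcommit ratio needed - just the two numbers the tracker already has.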


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Accessibility

2014-05-05 Thread Douglas Fish
Great article Liz!   Verifying contrast ratios is another aspect of
ensuring accessibility.  I've added the link you shared to the
accessibility wiki.
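
For anyone who wants to automate that check, here is a small sketch of
the WCAG 2.0 contrast-ratio math (relative luminance per the sRGB
formula; 4.5:1 is the AA threshold for normal text):

    def _linear(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb):
        r, g, b = (_linear(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
        return (hi + 0.05) / (lo + 0.05)

    print(contrast_ratio((255, 255, 255), (0, 0, 0)))        # 21.0, the maximum
    print(contrast_ratio((118, 118, 118), (255, 255, 255)))  # ~4.54: #767676 on white just passes AA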

Doug Fish




From:   Liz Blanchard lsure...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   05/05/2014 09:21 AM
Subject: Re: [openstack-dev] [Horizon] Accessibility




On Apr 24, 2014, at 11:06 AM, Douglas Fish drf...@us.ibm.com wrote:


  I've proposed a design session for accessibility for the Juno summit,
  and
  I'd like to get a discussion started on the work that needs to be
  done.
  (Thanks Julie P for pushing that!)

  I've started to add information to the wiki that Joonwon Lee created:
  https://wiki.openstack.org/wiki/Horizon/WebAccessibility
  I think that's going to be a good place to gather material.  I'd like
  to
  see additional information added about what tools we can use to
  verify
  accessibility.  I'd like to try to summarize the WCAG guidelines into
  some
  broad areas where Horzion needs work.  I expect to add a checklist of
  accessibility-related items to consider while reviewing code.

  Joonwon (or anyone else with an interest in accessibility):  It would
  be
  great if you could re-inspect the icehouse level code and create bugs
  for
  any issues that remain.  I'll do the same for issues that I am aware
  of.
  In each bug we should include a link to the WCAG guideline that has
  been
  violated.  Also, we should describe the testing technique:  How was
  this
  bug discovered?  How could a developer (or reviewer) determine the
  issue
  has actually been fixed?  We should probably put that in each bug at
  first,
  but as we go they should be gathered up into the wiki page.

  There are some broad areas of need that might justify blueprints (I'm
  thinking of WAI-ARIA tagging, and making sure external UI widgets
  that we
  pull in are accessible).

  Any suggestions on how to best share info, or organize bugs and
  blueprints
  are welcome!

Doug,

Thanks very much for bringing this up as a topic and pushing it forward at
Summit. I think this will be a great step forward for Horizon in maturity
as we continue to have better accessibility support. I wanted to throw out
any article that I found helpful when it comes to testing sites for
accessibility for colorblindness:
http://css-tricks.com/accessibility-basics-testing-your-page-for-color-blindness/

I think we can make some very small/easy changes with the color scheme to
be sure there is enough contrast to target all types of colorblindness.

Looking forward to this session,
Liz


  Doug Fish
  IBM STG Cloud Solution Development
  T/L 553-6879, External Phone 507-253-6879





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing Project Graffiti

2014-05-05 Thread Angus Salkeld

On 05/05/14 20:26 +, Tripp, Travis S wrote:

Hello Everybody,

A challenge we've experienced with using OpenStack is discovering, sharing, and 
correlating metadata across services and different types of resources. We 
believe this affects both end users and administrators.


Hi

This seems neat, but also seems to have some overlap with glance's new catalog 
and some of the things Murano
are doing. Have you had a look at those efforts?

-Angus



For end users, the UI can be too technical and require too much pre-existing knowledge of OpenStack 
concepts. For example, when you launch instances, you should be able to just specify categories 
like Big Data or an OS Family and let the system find the boot source for 
you, whether that is an image, snapshot, or volume.  It should also allow finer grained filtering 
such as choosing specific versions of software that you want.

For administrators, we'd like there to be an easier way to meaningfully 
collaborate on properties across host aggregates, flavors, images, volumes, or 
other cloud resources. Today, this often involves searching wikis and opening 
the source code.

We, HP and Intel, believe that both of the above problems come back to needing 
a better way for users to collaborate on metadata across services and resource 
types.  We started project Graffiti to explore ideas and concepts for how to 
make this easier and more approachable for end users. We're asking for your 
input and participation to help us move forward!

To help explain the ideas of the project, we have created a quick screencast 
demonstrating the concepts running under POC code. Please take a look!


* Graffiti Concepts Overview:

o   http://youtu.be/f0SZtPgcxk4

Please join with us to help refine the concepts and identify where we can best 
fit in to the ecosystem. We have a few blueprints, but they need additional 
support outside of the Horizon UI. We believe our best path is one where we can 
contribute the Graffiti service as either a new project in an existing program 
or as a series of enhancements to existing projects.  Your insight and feedback 
is important to us and we look forward to growing this initiative with you!

We have a design session at the summit where we'd love to have open discussion 
with all who can attend:

Juno Summit Design Session
http://sched.co/1m7wghx

For more info, please visit our wiki:

Project Wiki
https://wiki.openstack.org/wiki/Graffiti

IRC
#graffiti on Freenode (http://freenode.net/)

Related Blueprints
https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering
https://blueprints.launchpad.net/horizon/+spec/faceted-search
https://blueprints.launchpad.net/horizon/+spec/tagging
https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata
https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

Thank you,
Travis Tripp








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Request for transparent Gerrit approval process

2014-05-05 Thread Nikhil Manchanda

Hi Mat:

Some answers, and my perspective, are inline:

Lowery, Mathew writes:

 As a non-core, I would like to understand the process by which core
 prioritizes Gerrit changes.

I'm not aware of any standard process used by all core reviewers to
prioritize reviewing changes in Gerrit. My process, specifically,
involves looking through trove changes that are in-flight in gerrit, and
picking ones based on the priority of the bug/blueprint fixed, whether or
not the patch is still work-in-progress, and its age.


 I'd also like to know any specific
 criteria used for approval. If such a process was transparent and
 followed consistently, wouldn't that eliminate the need for "Hey core,
 can you review change?" in IRC?

I'm not aware of any specific criteria that we use for approval other
than the more general cross-OpenStack criteria (hacking, PEP8,
etc). Frankly, any rules that constitute common criteria should really
be enforced in the tox runs (e.g. through hacking). Reviewers should
review patches to primarily ensure that changes make sense in context,
and are sound design-wise -- something that automated tools can't do (at
least not yet).


 Specifically:

   *   Is a prioritized queue used such as the one found at
   http://status.openstack.org/reviews/? (This output comes from a
   project called
   ReviewDayhttps://github.com/openstack-infra/reviewday and it is
   prioritized based on the Launchpad ticket and age:
   http://git.io/h3QULg.) If not, how do you keep a Gerrit change from
   starving?

How reviewers prioritize what they want to review is something that is
currently left up to the reviewers. Personally, when I review changes,
I quickly look through all patches on review.o.o and do a quick triage
to decide on the review order. I do look at review-age as part of the
triage, and hopefully this prevents 'starvation' to some extent.

That said, this area definitely seems like one where having a
streamlined, team-wide process can help with getting more bang for our
review-buck. Using something like reviewday could definitely help here.
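
To make that concrete, here is a toy sketch - weights and fields are
entirely made up - of reviewday-style scoring, where priority sets a base
score and age is added on top so nothing starves:

    from datetime import date

    PRIORITY_WEIGHT = {"critical": 100, "high": 50, "medium": 20, "low": 5}
    TODAY = date(2014, 5, 6)  # fixed so the example is deterministic

    def review_score(priority, submitted):
        return PRIORITY_WEIGHT.get(priority, 0) + (TODAY - submitted).days

    reviews = [
        ("gate-blocker-fix", "critical", date(2014, 5, 4)),
        ("docstring-cleanup", "low", date(2014, 3, 1)),
    ]
    for name, prio, sub in sorted(
            reviews, key=lambda r: review_score(r[1], r[2]), reverse=True):
        print(name, review_score(prio, sub))

A low-priority patch that sits long enough eventually outscores newer
high-priority work, which is exactly the anti-starvation property.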

I'm also curious how some of the other OpenStack teams solve this issue,
and what we can do to align in this regard.


   *   Is there a baking period? In other words, is there a minimum
   amount of time that has to elapse before even being considered for
   approval?

There is no such baking period.


   *   What effect do -1 code reviews have on the chances of approval
   (or even looking at the change)? Sometimes, -1 code reviews seem to
   be given in a cavalier manner. In my mind, -1 code reviewers have a
   duty to respond to follow-up questions by the author. Changes not
   tainted with a -1 simply have a greater chance of getting looked at.

I always look at the comment that the -1 review has along with it. If
the concern is valid, and the patch author has not addressed it with a
reply, I will leave a comment (with a -1 in some cases) myself. If the
original -1 comment is not applicable / incorrect, I will +1 / +2 as
applicable, and usually, also leave a comment.


   *   To harp on this again: Isn't "Hey core, can you review
   change?" inherently unfair, assuming that there is a process by
   which a Gerrit change would normally be prioritized and reviewed in
   an orderly fashion?

I don't necessarily see this as unfair. When someone asks me to review a
change, I don't usually do it immediately (unless it's a fix for a
blocking / breaking change). I tell them that I'll get to it soon, and
ensure that it's in the list of reviews that I triage and review as
usual.


   *   Are there specific actions that non-cores can take to assist in
   the orderly and timely approval of Gerrit changes? (e.g. don't give
   a -1 code review on a multiple +1'ed Gerrit change when it's a
   nice-to-have and don't leave a -1 and then vanish)?

IMHO purely 'nice-to-have' nitpicks should be +0, regardless of how
many +1s the review already has. And if someone leaves a -1 on a review
and subsequently refuses to discuss it, that is just bad form on
the part of the reviewer. Thankfully, this hasn't been a big problem
that we've had to contend with.



 Any clarification of the process would be greatly appreciated.

 Thanks,
 Mat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Design Summit Sessions

2014-05-05 Thread Tina TSOU
Dear Kyle,

Thanks for leading this.

We filed a BP per new process
https://blueprints.launchpad.net/neutron/+spec/scaling-network-performance

Hope we can have a talk in the Pod area.


Thank you,
Tina

On Apr 25, 2014, at 9:19 PM, Kyle Mestery 
mest...@noironetworks.com wrote:

Hi everyone:

I've pushed out the Neutron Design Summit Schedule to 
sched.org [1].
Like the other projects, it was tough to fit everything in. If your
proposal didn't make it, there will still be opportunities to talk
about it at the Summit in the project Pod area. Also, I encourage
you to still file a BP using the new Neutron BP process [2].

I expect some slight juggling of the schedule may occur as the entire
Summit schedule is set, but this should be approximately where things
land.

Thanks!
Kyle

[1] http://junodesignsummit.sched.org/overview/type/neutron
[2] https://wiki.openstack.org/wiki/Blueprints#Neutron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler sub-group meeting agenda 5/6

2014-05-05 Thread Dugger, Donald D
Unfortunately, I'll be on a plane this week (traveling to my daughter's college 
graduation) so, again, if people want to hang out on the IRC channel 
(#openstack-meeting at 1500 UTC) the list of things I wanted to go over was:

1) Go over action items from last week
2) Status on forklift efforts
3) Juno summit design sessions (last chance to discuss)
4) Opens


Topic vault (so we don't forget)

1- No-db scheduler

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack access with Java SDKs

2014-05-05 Thread Vikas Kokare
I am looking for a standard, seamless way to access OpenStack APIs, most
likely using the Java SDKs that are summarized at
https://wiki.openstack.org/wiki/SDKs#Software_Development_Kits

There are various distributions of OpenStack available today. Is it
possible, using these SDKs, to write an application that works seamlessly
across distributions?

If the answer to the above is yes, then how does one evaluate the pros/cons
of these SDKs?

-Vikas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][olso] How is the trusted message going on?

2014-05-05 Thread Nathan Kinder


On 05/05/2014 03:29 PM, Jiang, Yunhong wrote:
 Hi, all
   The trusted messaging 
 (https://blueprints.launchpad.net/oslo.messaging/+spec/trusted-messaging) has 
 been removed from Icehouse; does anyone know what the current status is? I noticed 
 a summit session may cover it ( 
 http://junodesignsummit.sched.org/event/9a6b59be11cdeaacfea70fef34328931#.U2gMo_ldWrQ
  ), but would really appreciate if anyone can provide some information.
 

There is also an oslo.messaging session that has the trusted messaging
blueprint on the agenda:


http://junodesignsummit.sched.org/event/f3abf1f9d8ce7558a020fcd9ba07d166#.U2hS7jn5Gho

The key distribution service mentioned in the blueprint referenced
above has become the Kite project, which is currently working it's way
through implementation under the Barbican program.  I recently wrote up
an overview of how it works here:

  https://blog-nkinder.rhcloud.com/?p=62

I'm hoping to be able to discuss the changes that would be needed to
oslo.messaging to take advantage of Kite (or a different message
security approach like the PKI one mentioned in Adam's design session
proposal) at the Summit.  If we can use a plug-in approach in
oslo.messaging, then different message security mechanisms could be
swapped out relatively easily (in theory).
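
To illustrate the seam, here is a toy sketch; the interface is entirely
hypothetical and is not the actual oslo.messaging or Kite API:

    import hashlib
    import hmac

    class MessageSecurity(object):
        """Pluggable signer/verifier; the transport stays mechanism-agnostic."""
        def sign(self, payload):
            raise NotImplementedError
        def verify(self, payload, signature):
            raise NotImplementedError

    class SharedKeySecurity(MessageSecurity):
        """Kite-style symmetric signing with a shared per-pair key."""
        def __init__(self, key):
            self._key = key
        def sign(self, payload):
            return hmac.new(self._key, payload, hashlib.sha256).digest()
        def verify(self, payload, signature):
            return hmac.compare_digest(self.sign(payload), signature)

A PKI-based implementation would plug in behind the same two methods.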

Thanks,
-NGK

 Thanks
 --jyh
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][qa] Running tempest tests against existing openstack installation

2014-05-05 Thread Swapnil Kulkarni
David, thanks for the reply. The configurations were misplaced in the
automation. Once corrected, I was able to run the tests.



On Mon, May 5, 2014 at 6:56 PM, David Kranz dkr...@redhat.com wrote:

  On 05/05/2014 02:26 AM, Swapnil Kulkarni wrote:

 Hello,

  I am trying to run tempest tests against an existing openstack
 deployment. I have configured tempest.conf for the environment details. But
 when I execute run_tempest.sh, it does not run any tests.

  Although when I run testr run, the tests fail with *NoSuchOptError: no
 such option: IMAGE_ID*


 This must be coming from not having changed this value in the [compute]
 section of tempest.conf:

 #image_ref={$IMAGE_ID}

 See etc/tempest.conf.sample.
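
 For example, something along these lines in tempest.conf, with
 placeholder UUIDs taken from your own image list:

    [compute]
    image_ref = 11111111-2222-3333-4444-555555555555
    image_ref_alt = 66666666-7777-8888-9999-000000000000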

  -David



  The trace has been added at [1]

  If anyone has tried that before, any pointers are much appreciated.

  [1] http://paste.openstack.org/show/78885/

  Best Regards,
 Swapnil Kulkarni
 irc : coolsvap
 cools...@gmail.com
 +91-87960 10622(c)
 http://in.linkedin.com/in/coolsvap
  *It's better to SHARE*




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Algorithm for Virtual machine assignment.

2014-05-05 Thread Hammad Haleem
Hello Folks,

This is Hammad, a Computer Science student from India. I am working on a
project that aims to make VM assignment more efficient.

By VM assignment, I mean the following: as soon as a request for new
virtual machine creation comes in to the OpenStack platform, how does the
platform handle it? How does the framework decide on which physical
machine the new virtual machine will be spun up?

I thought the OpenStack mailing list would be the best place to ask about
the feasibility of any such implementation. Also, it would be really
helpful if someone could point me to any existing code that deals with VM
assignment within the OpenStack framework.


-- 
Regards
Hammad Haleem
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Algorithm for Virtual machine assignment.

2014-05-05 Thread Joshua Harlow
U probably want to start by looking @ 
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html

The code is @ https://github.com/openstack/nova/tree/master/nova/scheduler

Just fyi as long as u are talking about VM's the above will be correct, if u 
are talking about scheduling with other resources then this will be a bigger 
list of links ;-)
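
For a feel of the extension point, here is a minimal sketch of a custom
filter against the Icehouse-era filter scheduler API, loosely mirroring
what the built-in RamFilter does (minus the overcommit ratio handling):

    from nova.scheduler import filters

    class EnoughRamFilter(filters.BaseHostFilter):
        """Pass only hosts with enough free RAM for the requested flavor."""

        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type') or {}
            requested_mb = instance_type.get('memory_mb', 0)
            return host_state.free_ram_mb >= requested_mb

Filters like this are enabled through the scheduler_default_filters
option covered in the config reference above.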

From: Hammad Haleem hammadhal...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Monday, May 5, 2014 at 8:58 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Algorithm for Virtual machine assignment.

Hello Folks,

This is Hammad, a Computer Science student from India. I am working on a 
project that aims to make VM assignment more efficient.

By VM assignment, I mean the following: as soon as a request for new virtual 
machine creation comes in to the OpenStack platform, how does the platform 
handle it? How does the framework decide on which physical machine the new 
virtual machine will be spun up?

I thought the OpenStack mailing list would be the best place to ask about the 
feasibility of any such implementation. Also, it would be really helpful if 
someone could point me to any existing code that deals with VM assignment 
within the OpenStack framework.


--
Regards
Hammad Haleem

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Algorithm for Virtual machine assignment.

2014-05-05 Thread Hammad Haleem
Thanks for the quick reply.
On 6 May 2014 09:47, Joshua Harlow harlo...@yahoo-inc.com wrote:

  U probably want to start by looking @
 http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html

  The code is @
 https://github.com/openstack/nova/tree/master/nova/scheduler

  Just fyi as long as u are talking about VM's the above will be correct,
 if u are talking about scheduling with other resources then this will be a
 bigger list of links ;-)

   From: Hammad Haleem hammadhal...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, May 5, 2014 at 8:58 PM
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] Algorithm for Virtual machine assignment.

   Hello Folks,

  This is Hammad, a Computer Science student from India. I am working on a
 project that aims to make VM assignment more efficient.

  By VM assignment, I mean the following: as soon as a request for new
 virtual machine creation comes in to the OpenStack platform, how does the
 platform handle it? How does the framework decide on which physical
 machine the new virtual machine will be spun up?

  I thought the OpenStack mailing list would be the best place to ask about
 the feasibility of any such implementation. Also, it would be really
 helpful if someone could point me to any existing code that deals with VM
 assignment within the OpenStack framework.


  --
   Regards
 Hammad Haleem


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting agenda 5/6

2014-05-05 Thread Sylvain Bauza
Congrats to your daughter, Don !

I'll chair this week meeting, no worries.

-Sylvain
On 6 May 2014 04:46, Dugger, Donald D donald.d.dug...@intel.com wrote:

 Unfortunately, I'll be on a plane this week (traveling to my daughter's
  college graduation) so, again, if people want to hang out on the IRC
 channel (#openstack-meeting at 1500 UTC) the list of things I wanted to go
 over was:

 1) Go over action items from last week
 2) Status on forklift efforts
 3) Juno summit design sessions (last chance to discuss)
 4) Opens


 Topic vault (so we don't forget)

 1- No-db scheduler

 --
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing Project Graffiti

2014-05-05 Thread Tripp, Travis S
Hi Angus,

 This seems neat, but also seems to have some overlap with glance's new catalog
 and some of the things Murano are doing. Have you had a look at those efforts?

Thanks! We have been keeping an eye on the Glance work and the Murano work, and 
your email reminded me to catch back up on both of them. I think in both cases 
that the Graffiti concepts are complementary. However, strictly speaking from a 
bird's eye view on application categorization, there is some reconciliation to 
work out on that aspect.
 
Regarding the Glance artifact repository, this looks like a nice revamp of its 
concepts. Most of it seems to be very related to handling artifacts, 
dependencies, versions, and relationships associated with packaging in a 
generic way so that external services can use it in a variety of ways. There is 
one feature that was discussed in the last Glance meeting logs that I think we 
might be able to leverage for some of the Graffiti concepts. That is a dynamic 
schemas API. Perhaps, we can build the Graffiti dictionary concepts on top of 
it.  We're definitely interested in anything that creates less moving parts for 
us and is already part of the standard OpenStack ecosystem.

For Murano, the pure application catalog UI is an interesting concept that 
today still seems to be intertwined with the Murano workflow engine. It isn't 
clear to me if the intent is for it to eventually become the UI for everything 
application related, including Solum and pure Heat templates? From the mailing 
list discussions, it seems that this is still a rather unresolved question. 

For us we'd like to be able to provide end user help with even the existing 
launch instance UI. Also, one of the goals of the Graffiti concepts is to be 
able to directly tag resources with metadata that comes from multiple 
sources, whether that is system provided (e.g. various compute capabilities) or 
user provided (e.g. categories or software tags) and then be able to search on 
them. This can be used for boot sources, flavors, host aggregates, etc, and 
perhaps even networks in the future.  It seems possible that Murano may just be 
a consumer of some of the same data?
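
To sketch the search idea with a toy data model (names and fields here
are hypothetical):

    resources = [
        {"type": "image", "name": "hadoop-base",
         "metadata": {"category": "Big Data", "os_family": "Linux"}},
        {"type": "flavor", "name": "m1.crunch",
         "metadata": {"category": "Big Data"}},
        {"type": "volume", "name": "win-gold",
         "metadata": {"os_family": "Windows"}},
    ]

    def search(**wanted):
        return [r for r in resources
                if all(r["metadata"].get(k) == v for k, v in wanted.items())]

    print([r["name"] for r in search(category="Big Data")])
    # -> ['hadoop-base', 'm1.crunch']

The point is that one query spans images, flavors, and volumes alike.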

-Travis

 -Original Message-
 From: Angus Salkeld [mailto:angus.salk...@rackspace.com]
 Sent: Monday, May 05, 2014 8:16 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Introducing Project Graffiti
 Importance: High
 
 On 05/05/14 20:26 +, Tripp, Travis S wrote:
 Hello Everybody,
 
 A challenge we've experienced with using OpenStack is discovering, sharing,
 and correlating metadata across services and different types of resources. We
 believe this affects both end users and administrators.
 
 Hi
 
 This seems neat, but also seems to have some overlap with glance's new catalog
 and some of the things Murano are doing. Have you had a look at those efforts?
 
 -Angus
 
 
 For end users, the UI can be too technical and require too much pre-existing
 knowledge of OpenStack concepts. For example, when you launch instances,
 you should be able to just specify categories like Big Data or an OS 
 Family
 and let the system find the boot source for you, whether that is an image,
 snapshot, or volume.  It should also allow finer grained filtering such as 
 choosing
 specific versions of software that you want.
 
 For administrators, we'd like there to be an easier way to meaningfully
 collaborate on properties across host aggregates, flavors, images, volumes, or
 other cloud resources. Today, this often involves searching wikis and opening
 the source code.
 
 We, HP and Intel, believe that both of the above problems come back to
 needing a better way for users to collaborate on metadata across services and
 resource types.  We started project Graffiti to explore ideas and concepts for
 how to make this easier and more approachable for end users. We're asking for
 your input and participation to help us move forward!
 
 To help explain the ideas of the project, we have created a quick screencast
 demonstrating the concepts running under POC code. Please take a look!
 
 
 * Graffiti Concepts Overview:
 
 o   http://youtu.be/f0SZtPgcxk4
 
 Please join with us to help refine the concepts and identify where we can 
 best
 fit in to the ecosystem. We have a few blueprints, but they need additional
 support outside of the Horizon UI. We believe our best path is one where we 
 can
 contribute the Graffiti service as either a new project in an existing 
 program or
 as a series of enhancements to existing projects.  Your insight and feedback 
 is
 important to us and we look forward to growing this initiative with you!
 
 We have a design session at the summit where we'd love to have open
 discussion with all who can attend:
 
 Juno Summit Design Session
 http://sched.co/1m7wghx
 
 For more info, please visit our wiki:
 
 Project Wiki
 https://wiki.openstack.org/wiki/Graffiti
 
 IRC
 #graffiti on Freenode (http://freenode.net/)
 
 Related 

[openstack-dev] Fuel

2014-05-05 Thread Tizy Ninan
Hi

We are trying to integrate the openstack setup with the Microsoft Active
Directory (LDAP server).
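
For context, the LDAP integration itself is driven by keystone.conf; a
rough Havana-era example, with placeholder Active Directory values, looks
like this:

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ad.example.com
    user = CN=keystone,OU=ServiceAccounts,DC=example,DC=com
    password = secret
    suffix = DC=example,DC=com
    user_tree_dn = OU=Users,DC=example,DC=com
    user_objectclass = person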

As per openstack documentation,
http://docs.openstack.org/admin-guide-cloud/content/configuring-keystone-for-ldap-backend.html
 in
order to integrate with an LDAP server, an SELinux Boolean variable
‘authlogin_nsswitch_use_ldap’ needs to be set. We tried setting the
variable using the following command.
$ setsebool -P authlogin_nsswitch_use_ldap 1
It returned a message stating SELinux is disabled. We changed the status of
SELinux to permissive mode and tried setting the boolean variable, but it
returned a message stating ‘record not found in the database’.

We also tried retrieving all the boolean variables by using the following
command
$ getsebool -a
It listed out all the boolean variables, but there was no variable named
‘authlogin_nsswitch_use_ldap’ in the list.
In order to add the variable we needed semanage. When executing the
‘semanage’ command it returned ‘command not found’. To install semanage we
tried installing policycoreutils-python. It showed no package
policycoreutils-python available.

We are using Mirantis Fuel v4.0. We have an OpenStack Havana deployment on
CentOS 6.4 with the nova-network network service.
Can you please help us understand why the SELinux boolean variable
(authlogin_nsswitch_use_ldap) is not available? Is it because the CentOS
image provided by the Fuel master node does not provide the SELinux
settings? Is there an alternative way to set this boolean variable?

Kindly help us to resolve this issue.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] q-agt error

2014-05-05 Thread sonia verma
Hi all

I want to boot a VM from the OpenStack dashboard onto a compute node using
devstack. When I boot a VM from the dashboard onto the compute node, the
VM status is stuck at 'SPAWNING' and never changes to 'ACTIVE'. I am also
able to see the tap interface of the VM on the compute node. I'm getting
the following error in the q-agt service...

Stdout:
'{data:[[qvo4b71e52b-d1,[map,[[attached-mac,fa:16:3e:83:6d:7c],[iface-id,4b71e52b-d14d-48cc-bbce-0eb07719e830],[iface-status,active],[vm-uuid,037cbf7a-e3b7-401d-b941-9e0b1c5aaf99,[br-tun,[map,[]]],[br-int,[map,[]]],[qvo2ce34d70-ab,[map,[[attached-mac,fa:16:3e:31:97:c4],[iface-id,2ce34d70-ab92-456a-aa12-8823df2c007f],[iface-status,active],[vm-uuid,370a2bec-7a56-4aa4-87d0-a22470e169fe],headings:[name,external_ids]}\n'

Stderr: '' execute /opt/stack/neutron/neutron/agent/linux/utils.py:73
2014-05-05 14:00:50.082 14236 DEBUG neutron.agent.linux.utils [-] Running
command: ['sudo', '/usr/local/bin/neutron-rootwrap',
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports',
'br-int'] create_process /opt/stack/neutron/neutron/agent/linux/utils.py:47
2014-05-05 14:00:50.446 14236 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap',
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports',
'br-int']
Exit code: 0
Stdout: 'qvo2ce34d70-ab\nqvo4b71


Please help regarding this.

Thanks
Sonia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev