Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Roman Podolyaka
I can't agree more with Robert.

Even if it were possible to downgrade all migrations without data loss, it
would still be necessary to make backups before a DB schema
upgrade/downgrade.

E.g. MySQL doesn't support transactional DDL. So if a migration script
can't be executed successfully for whatever reason (say we haven't tested
it well enough on real data and it turns out to have a few bugs), you will
end up in a situation where the migration is only partially applied... And
migrations can fail before the backup tables are created, during that
process, or after it.
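
To make the failure mode concrete, here is a minimal sketch (SQLAlchemy
assumed, table and column names made up; this is not a real Nova migration)
of why a partially applied migration can happen on MySQL:

    # Sketch only: on MySQL each DDL statement commits implicitly, so there
    # is no single transaction to roll back if a later statement in the same
    # migration script fails.
    from sqlalchemy import create_engine

    engine = create_engine("mysql://user:secret@localhost/nova")  # hypothetical DSN

    with engine.begin() as conn:
        # MySQL commits this ALTER as soon as it runs...
        conn.execute("ALTER TABLE instances ADD COLUMN new_flag INTEGER")
        # ...so if a later statement in the same script raises (bad data, an
        # untested edge case, a plain bug), the ALTER stays applied and the
        # schema is left half-migrated despite the surrounding "transaction".
        conn.execute("UPDATE instances SET new_flag = 0")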

Thanks,
Roman


On Thu, Sep 12, 2013 at 8:30 AM, Robert Collins
robe...@robertcollins.net wrote:

 I think having backup tables adds substantial systematic complexity,
 for a small use case.

 Perhaps a better answer is to document in 'take a backup here' as part
 of the upgrade documentation and let sysadmins make a risk assessment.
 We can note that downgrades are not possible.

 Even in a public cloud doing trunk deploys, taking a backup shouldn't
 be a big deal: *those* situations are where you expect backups to be
 well understood; and small clouds don't have data scale issues to
 worry about.

 -Rob


 On 12 September 2013 17:09, Joshua Hesketh joshua.hesk...@rackspace.com
 wrote:
  On 9/4/13 6:47 AM, Michael Still wrote:
 
  On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
 
  +1 I think we should be reconstructing data where we can, but keeping
  track of
  deleted data in a backup table so that we can restore it on a downgrade
  seems
  like overkill.
 
  I guess it comes down to use case... Do we honestly expect admins to
  regret an upgrade and downgrade instead of just restoring from
  backup? If so, then we need to have backup tables for the cases where
  we can't reconstruct the data (i.e. it was provided by users and
  therefore not something we can calculate).
 
 
  So assuming we don't keep the data in some kind of backup state, is there a
  way we should be documenting which migrations are backwards incompatible?
  Perhaps there should be different classifications for data-backwards
  incompatible and schema incompatibilities.
 
  Having given it some more thought, I think I would like to see migrations
  keep backups of obsolete data. I don't think it is unforeseeable that an
  administrator would upgrade a test instance (or less likely, a production)
  by accident or not realising their backups are corrupted, outdated or
  invalid. Being able to roll back from this point could be quite useful. I
  think potentially more useful than that though is that if somebody ever
  needs to go back and look at some data that would otherwise be lost it is
  still in the backup table.
 
  As such I think it might be good to see all migrations be downgradable
  through the use of backup tables where necessary. To couple this I think it
  would be good to have a standard for backup table naming and maybe schema
  (similar to shadow tables) as well as an official list of backup tables in
  the documentation stating which migration they were introduced in and how
  to expire them.
 
  In regards to the backup schema, it could be exactly the same as the table
  being backed up (my preference) or the backup schema could contain just the
  lost columns/changes.
 
  In regards to the name, I quite like backup_table-name_migration_214. The
  backup table name could also contain a description of what is backed up
  (for example, 'uuid_column').
 
  In terms of expiry they could be dropped after a certain release/version or
  left to the administrator to clear out, similar to shadow tables.
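
A minimal sketch of what such a migration might look like (illustrative only,
not actual Nova code; the table, column, and migration number are
hypothetical, and it assumes the sqlalchemy-migrate conventions Nova uses,
i.e. the changeset column create/drop extensions are active):

    # Hypothetical migration 214: drop instances.uuid_column on upgrade, but
    # keep the user-provided values in a backup table named per the
    # convention suggested above so that downgrade() can restore them.
    from sqlalchemy import Column, MetaData, String, Table

    BACKUP = 'backup_instances_migration_214'


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)

        # Preserve the data we are about to lose.
        backup = Table(BACKUP, meta,
                       Column('instance_id', instances.c.id.type),
                       Column('uuid_column', String(36)))
        backup.create()
        for row in migrate_engine.execute(instances.select()):
            migrate_engine.execute(backup.insert().values(
                instance_id=row.id, uuid_column=row.uuid_column))

        # Now the destructive part of the migration.
        instances.c.uuid_column.drop()


    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        backup = Table(BACKUP, meta, autoload=True)

        # Re-create the column and restore the saved values.
        Column('uuid_column', String(36)).create(instances)
        for row in migrate_engine.execute(backup.select()):
            migrate_engine.execute(instances.update().where(
                instances.c.id == row.instance_id).values(
                uuid_column=row.uuid_column))
        backup.drop()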
 
  Thoughts?
 
  Cheers,
  Josh
 
  --
  Rackspace Australia
 
 
  Michael
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] - Evacuate a agent node

2013-09-12 Thread Endre Karlson
Is it possible to get an evacuation command for a node running agents?
Say, moving all the agent's owned resources from network node a to b?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Steven Hardy
On Thu, Sep 12, 2013 at 04:15:39AM +, Joshua Harlow wrote:
 Ah, thx keith, that seems to make a little more sense with that context.
 
 Maybe that different instance will be doing other stuff also?
 
 Is that the general heat 'topology' that should/is recommended for trove?
 
 For say autoscaling trove, will trove emit a set of metrics via ceilometer
 that heat (or a separate autoscaling thing) will use to analyze if
 autoscaling should occur? I suppose nova would also emit its own set and
 it will be up to the autoscaler to merge those together (as trove
 instances are nova instances). It's a very interesting set of problems to
 make an autoscaling entity that works well without making that autoscaling
 entity too aware of the internals of the various projects. Make it too
 aware and the whole system is fragile; don't make it aware enough and it
 will not do its job very well.

No, this is not how things work now that we're integrated with Ceilometer
(*alarms*, not raw metrics).

Previously Heat did do basic metric evaluation internally, but now we rely
on Ceilometer to do all of that for us, so we just pass a web-hook URL to
Ceilometer, which gets hit when an alarm happens (in Ceilometer).

So Trove, Nova, or whatever, just need to get metrics into Ceilometer, then
you can set up a Ceilometer alarm via Heat, associated with the Autoscaling
resource.

This has a number of advantages; in particular, it removes any coupling
between Heat and specific metrics or internals, and it provides a very
flexible interface if people want to drive Heat AutoScaling via something
other than Ceilometer (e.g. the autoscale API under discussion here).
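
As a minimal illustration of that decoupling (the URL below is made up), the
only contract between the alarm source and Heat autoscaling is an
authenticated POST to the pre-signed webhook that Heat generates for the
scaling policy:

    # Sketch only: whoever evaluates the metrics (Ceilometer, or anything
    # else) just hits the pre-signed webhook URL Heat handed out when the
    # scaling policy was created. The URL here is hypothetical.
    import requests

    scale_up_webhook = "https://heat.example.com:8000/v1/signal/..."  # made up

    # When the alarm fires (or any external system decides to scale),
    # a single POST triggers the autoscaling action.
    resp = requests.post(scale_up_webhook)
    resp.raise_for_status()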

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Evacuate a agent node

2013-09-12 Thread Yongsheng Gong
Setting the agent's admin status to False will empty the agent's resources,
but it will not move the owned resources.


On Thu, Sep 12, 2013 at 3:36 PM, Endre Karlson endre.karl...@gmail.com wrote:

 Is it possible to get an evacuation command for a node running agents?
 Say, moving all the agent's owned resources from network node a to b?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Evacuate a agent node

2013-09-12 Thread Yongsheng Gong
Implementing the 'evacuation' is also possible.


On Thu, Sep 12, 2013 at 3:43 PM, Yongsheng Gong gong...@unitedstack.com wrote:

 Setting the agent's admin status to False will empty the agent's resources,
 but it will not move the owned resources.


 On Thu, Sep 12, 2013 at 3:36 PM, Endre Karlson endre.karl...@gmail.com wrote:

 Is it possible to get an evacuation command for a node running agents?
 Say, moving all the agent's owned resources from network node a to b?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Morgan Fainberg
Another issue to consider with regard to backup tables is the length of
time that can elapse between an upgrade and a downgrade.  What if
you upgrade, then see an issue and downgrade an hour later?  Is the backup
table data still relevant?  Would you end up putting stale/broken data back
in place because other things changed?  At a certain point, restoring from
backup is really the only sane option.  That threshold isn't exactly a long
period of time.

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Wed, Sep 11, 2013 at 10:30 PM, Robert Collins
robe...@robertcollins.net wrote:

 I think having backup tables adds substantial systematic complexity,
 for a small use case.

 Perhaps a better answer is to document in 'take a backup here' as part
 of the upgrade documentation and let sysadmins make a risk assessment.
 We can note that downgrades are not possible.

 Even in a public cloud doing trunk deploys, taking a backup shouldn't
 be a big deal: *those* situations are where you expect backups to be
 well understood; and small clouds don't have data scale issues to
 worry about.

 -Rob


 On 12 September 2013 17:09, Joshua Hesketh joshua.hesk...@rackspace.com
 wrote:
  On 9/4/13 6:47 AM, Michael Still wrote:
 
  On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
 
  +1 I think we should be reconstructing data where we can, but keeping
  track of
  deleted data in a backup table so that we can restore it on a downgrade
  seems
  like overkill.
 
  I guess it comes down to use case... Do we honestly expect admins to
  regret an upgrade and downgrade instead of just restoring from
  backup? If so, then we need to have backup tables for the cases where
  we can't reconstruct the data (i.e. it was provided by users and
  therefore not something we can calculate).
 
 
  So assuming we don't keep the data in some kind of backup state, is there a
  way we should be documenting which migrations are backwards incompatible?
  Perhaps there should be different classifications for data-backwards
  incompatible and schema incompatibilities.
 
  Having given it some more thought, I think I would like to see migrations
  keep backups of obsolete data. I don't think it is unforeseeable that an
  administrator would upgrade a test instance (or less likely, a production)
  by accident or not realising their backups are corrupted, outdated or
  invalid. Being able to roll back from this point could be quite useful. I
  think potentially more useful than that though is that if somebody ever
  needs to go back and look at some data that would otherwise be lost it is
  still in the backup table.
 
  As such I think it might be good to see all migrations be downgradable
  through the use of backup tables where necessary. To couple this I think it
  would be good to have a standard for backup table naming and maybe schema
  (similar to shadow tables) as well as an official list of backup tables in
  the documentation stating which migration they were introduced in and how
  to expire them.
 
  In regards to the backup schema, it could be exactly the same as the table
  being backed up (my preference) or the backup schema could contain just the
  lost columns/changes.
 
  In regards to the name, I quite like backup_table-name_migration_214. The
  backup table name could also contain a description of what is backed up
  (for example, 'uuid_column').
 
  In terms of expiry they could be dropped after a certain release/version or
  left to the administrator to clear out, similar to shadow tables.
 
  Thoughts?
 
  Cheers,
  Josh
 
  --
  Rackspace Australia
 
 
  Michael
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Enforcing cert validation in auth_token middleware

2013-09-12 Thread David Chadwick



On 12/09/2013 04:46, Dolph Mathews wrote:


On Wed, Sep 11, 2013 at 10:25 PM, Jamie Lennox jlen...@redhat.com wrote:

With the aim of replacing httplib and cert validation with requests[1]
I've put forward the following review to use the requests library for
auth_token middleware.

https://review.openstack.org/#/c/34161/

This adds 2 new config options.
- The ability to provide CAs to validate https connections against.
- The ability to set insecure to ignore https validation.

By default, requests will validate connections against the system CAs. So
given that we currently don't verify SSL connections, do we need to default
insecure to true?
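
For reference, a small sketch of how those two options map onto the requests
library (the URL and file path are illustrative, and these are not
necessarily the exact auth_token option names):

    # Sketch of how the two proposed options map onto requests' TLS handling.
    import requests

    keystone_url = "https://keystone.example.com:35357/v2.0"  # hypothetical

    # Default: validate the server certificate against the system CA bundle.
    requests.get(keystone_url)

    # With a CA file configured: validate against operator-supplied CAs.
    requests.get(keystone_url, verify="/etc/keystone/ssl/ca-bundle.pem")

    # With insecure set: skip certificate validation entirely (old behaviour).
    requests.get(keystone_url, verify=False)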


I vote no; and yes to secure by default.


So do I

David




Maintaining compatibility should win here as I imagine there are a great
number of auth_token deployments using SSL with invalid/self-signed
certificates that would be broken, but defaulting to insecure just seems
wrong.

Given that keystone isn't the only project moving away from httplib, how
are other projects handling this?


The last time keystoneclient made this same change (thanks Dean!), we
provided no warning:

https://review.openstack.org/#/c/17624/

Which added the --insecure flag to opt back into the old behavior.

How do we end up with reasonable
defaults? Is there any amount of warning that we could give to change a
default like this - or is this another one of those version 1.0 issues?


Jamie



[1] https://bugs.launchpad.net/keystone/+bug/1188189


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Joshua Harlow
Cool, thanks for the explanation and clarification :)

Sent from my really tiny device...

On Sep 12, 2013, at 12:41 AM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Sep 12, 2013 at 04:15:39AM +, Joshua Harlow wrote:
 Ah, thx keith, that seems to make a little more sense with that context.
 
 Maybe that different instance will be doing other stuff also?
 
 Is that the general heat 'topology' that should/is recommended for trove?
 
 For say autoscaling trove, will trove emit a set of metrics via ceilometer
 that heat (or a separate autoscaling thing) will use to analyze if
 autoscaling should occur? I suppose nova would also emit its own set and
 it will be up to the autoscaler to merge those together (as trove
 instances are nova instances). It's a very interesting set of problems to
 make an autoscaling entity that works well without making that autoscaling
 entity too aware of the internals of the various projects. Make it too
 aware and the whole system is fragile; don't make it aware enough and it
 will not do its job very well.
 
 No, this is not how things work now that we're integrated with Ceilometer
 (*alarms*, not raw metrics).
 
 Previously Heat did do basic metric evaluation internally, but now we rely
 on Ceilometer to do all of that for us, so we just pass a web-hook URL to
 Ceilometer, which gets hit when an alarm happens (in Ceilometer).
 
 So Trove, Nova, or whatever, just need to get metrics into Ceilometer, then
 you can set up a Ceilometer alarm via Heat, associated with the Autoscaling
 resource.
 
 This has a number of advantages; in particular, it removes any coupling
 between Heat and specific metrics or internals, and it provides a very
 flexible interface if people want to drive Heat AutoScaling via something
 other than Ceilometer (e.g. the autoscale API under discussion here).
 
 Steve
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] reg. Multihost dhcp feature in Havana ?

2013-09-12 Thread Gopi Krishna B
On Tue, Sep 10, 2013 at 1:46 PM, Gopi Krishna B gopi97...@gmail.com wrote:

 Hi
 Was looking at the below link and checking if the feature to support
 Multihost networking is part of Havana.

 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost

 I could not find the feature in the Havana blueprints. Could you let me
 know details regarding the feature?

 https://blueprints.launchpad.net/neutron/havana/+specs?show=all

 --
 Regards
 Gopi Krishna


 Hi
Does anyone have info related to this feature being part of Havana?

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] reg. Multihost dhcp feature in Havana ?

2013-09-12 Thread Yongsheng Gong
No, we didn't get agreement on the implementation.
You can get some ideas from the comments on
https://review.openstack.org/#/c/37919/

Yong Sheng Gong
Unitedstack Inc.


On Thu, Sep 12, 2013 at 4:57 PM, Gopi Krishna B gopi97...@gmail.com wrote:




 On Tue, Sep 10, 2013 at 1:46 PM, Gopi Krishna B gopi97...@gmail.com wrote:

 Hi
 Was looking at the below link and checking if the feature to support
 Multihost networking is part of Havana.

 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost

 I could not find the feature in the Havana blueprints. Could you let
 me know details regarding the feature?

 https://blueprints.launchpad.net/neutron/havana/+specs?show=all

 --
 Regards
 Gopi Krishna


 Hi
 Does anyone have info related to this feature being part of Havana?

 Thanks



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Evacuate a agent node

2013-09-12 Thread Emilien Macchi
At eNovance, we have implemented this feature as a service:
https://github.com/enovance/neutron-l3-healthcheck

It checks L3 agent status and reschedules routers if an agent is down.
It's working for Grizzly and Havana (two different branches).


Regards,

Emilien Macchi

# OpenStack Engineer
// eNovance Inc.  http://enovance.com
// emil...@enovance.com / +33 (0)1 49 70 99 80
// 10 rue de la Victoire 75009 Paris

On 09/12/2013 09:45 AM, Yongsheng Gong wrote:
 Implementing the 'evacuation' is also possible.


 On Thu, Sep 12, 2013 at 3:43 PM, Yongsheng Gong
 gong...@unitedstack.com wrote:

 Setting the agent's admin status to False will empty the agent's
 resources, but it will not move the owned resources.


 On Thu, Sep 12, 2013 at 3:36 PM, Endre Karlson
 endre.karl...@gmail.com wrote:

 Is it possible to get an evacuation command for a node
 running agents? Say, moving all the agent's owned resources
 from network node a to b?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-12 Thread Thierry Carrez
Sergey Lukjanov wrote:

 [...]
 As you can see, resource provisioning is just one of the features and the
 implementation details are not critical for the overall architecture. It
 performs only the first step of the cluster setup. We’ve been considering
 Heat for a while, but ended up with direct API calls in favor of speed and
 simplicity. Going forward, Heat integration will be done by implementing the
 extension mechanism [3] and [4] as part of the Icehouse release.
 
 The next part, Hadoop cluster configuration, is already extensible and we
 have several plugins - Vanilla, Hortonworks Data Platform, and a Cloudera
 plugin has been started too. This allows unifying management of different
 Hadoop distributions under a single control plane. The plugins are
 responsible for correct Hadoop ecosystem configuration on already
 provisioned resources and use different Hadoop management tools like Ambari
 to set up and configure all cluster services, so there are no actual
 provisioning configs on the Savanna side in this case. Savanna and its
 plugins encapsulate the knowledge of Hadoop internals and the default
 configuration for Hadoop services.

My main gripe with Savanna is that it combines (in its upcoming release)
what sound to me like two very different services: a Hadoop cluster
provisioning service (like what Trove does for databases) and a
MapReduce+ data API service (like what Marconi does for queues).

Making them part of the same project (rather than two separate projects,
potentially sharing the same program) makes discussions about shifting
some of its clustering ability to another library/project more complex
than they should be (see below).

Could you explain the benefit of having them within the same service,
rather than two services with one consuming the other ?

 The next topic is “Cluster API”.
 
 The concern that was raised is how to extract general clustering
 functionality into a common library. The cluster provisioning and management
 topic is currently relevant for a number of projects within the OpenStack
 ecosystem: Savanna, Trove, TripleO, Heat, TaskFlow.
 
 Still, each of the projects has its own understanding of what cluster
 provisioning is. The idea of extracting common functionality sounds
 reasonable, but details still need to be worked out.
 
 I’ll try to highlight the Savanna team's current perspective on this
 question. The notion of “cluster management” in my perspective has several
 levels:
 1. Resource provisioning and configuration (instances, networks, storage).
 Heat is the main tool here, possibly with additional support from underlying
 services. For example, the instance grouping API extension [5] in Nova would
 be very useful.
 2. Distributed communication/task execution. There is a project in the
 OpenStack ecosystem with the mission to provide a framework for distributed
 task execution - TaskFlow [6]. It was started quite recently. In Savanna we
 are really looking forward to using more and more of its functionality in
 the I and J cycles as TaskFlow itself gets more mature.
 3. Higher-level clustering - management of the actual services working on
 top of the infrastructure. For example, in Savanna configuring HDFS data
 nodes, or in Trove setting up a MySQL cluster with Percona or Galera. These
 operations are typically very specific to the project domain. As for Savanna
 specifically, we benefit a lot from knowledge of Hadoop internals to deploy
 and configure it properly.
 
 The overall conclusion seems to be that it makes sense to enhance Heat
 capabilities and invest in TaskFlow development, leaving domain-specific
 operations to the individual projects.

The thing we'd need to clarify (and the incubation period would be used
to achieve that) is how to reuse as much as possible between the various
cluster provisioning projects (Trove, the cluster side of Savanna, and
possibly future projects). Solutions could be to create a library used by
Trove and Savanna, to extend Heat, or to make Trove the clustering thing
beyond just databases...

One way of making sure smart and non-partisan decisions are taken in
that area would be to make Trove and Savanna part of the same program,
or make the clustering part of Savanna part of the same program as
Trove, while the data API part of Savanna could live separately (hence
my question about two different projects vs. one project above).

 I would also like to emphasize that in Savanna, Hadoop cluster management is
 already implemented, including scaling support.
 
 With all this I do believe Savanna fills an important gap in OpenStack by
 providing Data Processing capabilities in a cloud environment in general,
 with integration with the Hadoop ecosystem as the first particular step.

For incubation we bless the goal of the project and the promise that it
will integrate well with the other existing projects. A
perfectly-working project can stay in incubation until it achieves
proper integration and avoids duplication of functionality with other
integrated projects. A 

Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

2013-09-12 Thread Gary Kotton
Hi,
For some reason I am unable to access your proposed talk. I am not 100% sure but
I think that the voting may be closed. We have weekly scheduling meetings 
(https://wiki.openstack.org/wiki/Meetings#Scheduler_Sub-group_meeting). It 
would be nice if you could attend and it will give you a platform to raise and 
share ideas with the rest of the guys in the community.
At the moment the scheduling subgroup is working  on our ideas for the design 
summit sessions. Please see 
https://etherpad.openstack.org/IceHouse-Nova-Scheduler-Sessions
Thanks
Gary

From: Mike Spreitzer mspre...@us.ibm.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Wednesday, September 11, 2013 9:59 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat] Comments/questions on the
instance-group-api-extension blueprint

Yes, I've seen that material.  In my group we have worked through larger and more
complex examples.  I have a proposed breakout session at the Hong Kong summit
to talk about one, you might want to vote for it.  The URL is 
http://www.openstack.org/summit/openstack-summit-hong-kong-2013/become-a-speaker/TalkDetails/109
 and the title is Continuous Delivery of Lotus Connections on OpenStack.  We 
used our own technology to do the scheduling (make placement decisions) and 
orchestration, calling Nova and Quantum to carry out the decisions our software 
made.  Above the OpenStack infrastructure we used two layers of our own 
software, one focused on infrastructure and one adding concerns for the 
software running on that infrastructure.  Each used its own language for a 
whole topology AKA pattern AKA application AKA cluster.  For example, our 
pattern has 16 VMs running the WebSphere application server, organized into 
four homogenous groups (members are interchangeable) of four each.  For each 
group, we asked that it both (a) be spread across at least two racks, with no 
more than half the VMs on any one rack and (b) have no two VMs on the same 
hypervisor.  You can imagine how this would involve multiple levels of grouping 
and relationships between groups (and you will probably be surprised by the 
particulars).  We also included information on licensed products, so that the 
placement decision can optimize license cost (for the IBM sub-capacity 
licenses, placement of VMs can make a cost difference).  Thus, multiple 
policies per thing.  We are now extending that example to include storage, and 
we are also working examples with Hadoop.
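
To make the shape of that input concrete, here is a purely hypothetical
sketch (none of these names come from the blueprint or our products) of one
group with multiple parameterised policies and a relationship to another
group:

    # Purely illustrative data structure -- not the blueprint's API --
    # showing multiple parameterised policies on one group plus an
    # inter-group relationship, as described above.
    websphere_group_1 = {
        "name": "was-cluster-1",
        "members": ["was-vm-%02d" % i for i in range(1, 5)],  # four VMs
        "policies": [
            # no two members on the same hypervisor
            {"type": "anti-affinity", "level": "hypervisor"},
            # spread across at least two racks, at most half per rack
            {"type": "spread", "level": "rack",
             "min_groups": 2, "max_fraction_per_group": 0.5},
            # let the placement engine optimize sub-capacity license cost
            {"type": "license-cost", "product": "WebSphere"},
        ],
        "relationships": [
            {"type": "network-proximity", "target": "db2-group", "max_hops": 2},
        ],
    }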

Regards,
Mike



From: Gary Kotton gkot...@vmware.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Date: 09/11/2013 06:06 AM
Subject: Re: [openstack-dev] [heat] Comments/questions on the
instance-group-api-extension blueprint






From: Mike Spreitzer mspre...@us.ibm.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Tuesday, September 10, 2013 11:58 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Comments/questions on the
instance-group-api-extension blueprint

First, I'm a newbie here, wondering: is this the right place for 
comments/questions on blueprints?  Supposing it is...

[Gary Kotton] Yeah, as Russel said this is the correct place

I am referring to 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

In my own research group we have experience with a few systems that do 
something like that, and more (as, indeed, that blueprint explicitly states 
that it is only the start of a longer roadmap).  I would like to highlight a 
couple of differences that alarm me.  One is the general overlap between 
groups.  I am not saying this is wrong, but as a matter of natural conservatism 
we have shied away from unnecessary complexities.  The only overlap we have 
done so far is hierarchical nesting.  As the instance-group-api-extension 
explicitly contemplates groups of groups as a later development, this would 
cover the overlap that we have needed.  On the other hand, we already have 
multiple policies attached to a single group.  We have policies for a variety 
of concerns, so some can combine completely or somewhat independently.  We also 
have relationships (of various sorts) between groups (as well as between 
individuals, and between individuals and groups).  The policies and 
relationships, in general, are not simply names but also have parameters.

[Gary Kotton] The instance groups extension was meant to be the first step towards what 
we had presented in Portland. Please look at the presentation that we gave an 

Re: [openstack-dev] run_tests in debug mode fails

2013-09-12 Thread Rosa, Andrea (HP Cloud Services)
Hi Clark,

From: Clark Boylan [mailto:clark.boy...@gmail.com]
Sent: 11 September 2013 04:44
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] run_tests in debug mode fails


I did manage to confirm that the attached patch mostly fixes the problem. 

with your patch I am able to run 
`python -m testtools.run 
nova.tests.integrated.test_servers.ServersTestV3.test_create_and_rebuild_server`

Thanks for that and for your time!
--
Andrea Rosa

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [novaclient] Core review request

2013-09-12 Thread Vitaliy Kolosov
Hi, guys.

Please, review my changes: https://review.openstack.org/#/c/45414/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about create_port

2013-09-12 Thread Salvatore Orlando
Hi,

where are you observing such calls?
The VM boot process does indeed make several REST calls to Neutron, but about
90% of them are GETs; there should be only 1 POST for each port, and a few
PUTs.
I think you should not see DELETE in the boot process, so perhaps these
calls are coming from somewhere else (a DHCP agent acting a bit crazy
perhaps?)

Regards,
Salvatore


On 4 September 2013 15:03, Chandan Dutta Chowdhury chand...@juniper.net wrote:

 Hello All,

 I am trying to make my Neutron plugin configure a physical switch (using
 VLANs). While create_port is configuring the physical switch, I see a lot
 of create_port and delete_port calls appearing in server.log.
 I am assuming that this may be because the time required to
 configure/commit on the physical switch is higher, and Nova may be trying
 multiple times to create the port (and deleting the port when a response
 does not arrive within a timeout period).

 Is there a timeout value in neutron or nova that can be altered so that
 the client can wait for the create_port to finish instead of sending
 multiple create/delete port?

 Thanks
 Chandan


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] nominations for tempest-core

2013-09-12 Thread Sean Dague
Ok, with that all existing tempest-core members have voted, and all are 
+1. So I'm happy to welcome Mark and Giulio to Tempest core!


 -Sean

On 09/12/2013 07:19 AM, Attila Fazekas wrote:

+1 for both of them

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, September 11, 2013 10:32:11 PM
Subject: [openstack-dev] [qa] nominations for tempest-core

We're in Feature Freeze for the OpenStack projects, which actually
means we're starting the busy cycle for Tempest, with people landing
additional tests to verify features that hadn't gone in until
recently. As such, I think now is a good time to consider some new core
members. There are two people who I think have been doing an exceptional job
and whom we should include in the core group.

Mark Koderer has been spear heading the stress testing in Tempest,
completing the new stress testing for the H3 milestone, and has gotten
very active in reviews over the last three months.

You can see his contributions here:
https://review.openstack.org/#/q/project:openstack/tempest+owner:m.koderer%2540telekom.de,n,z

And his code reviews here -
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:m.koderer%2540telekom.de,n,z


Giulio Fidente did a lot of great work bringing our volumes testing up
to par early in the cycle, and has been very active in reviews since the
Havana cycle opened up.

You can see his contributions here:
https://review.openstack.org/#/q/project:openstack/tempest+owner:gfidente%2540redhat.com,n,z

And his code reviews here -
https://review.openstack.org/#/q/project:openstack/tempest+reviewer:gfidente%2540redhat.com,n,z


Both have been active in blueprints and the openstack-qa meetings all
summer long, and I think would make excellent additions to the Tempest
core team.

Current QA core members, please vote +1 or -1 to these nominations when
you get a chance. We'll keep the polls open for 5 days or until everyone
has voiced their votes.

For reference here are the 90 day review stats for Tempest as of today:

Reviews for the last 90 days in tempest
** -- tempest-core team member
+--+---+
|   Reviewer   | Reviews (-2|-1|+1|+2) (+/- ratio) |
+--+---+
| afazekas **  | 275 (1|29|18|227) (89.1%) |
|  sdague **   |  198 (4|60|0|134) (67.7%) |
|   gfidente   |  130 (0|55|75|0) (57.7%)  |
|david-kranz **|  112 (1|24|0|87) (77.7%)  |
| treinish **  |  109 (5|32|0|72) (66.1%)  |
|  cyeoh-0 **  |   87 (0|19|4|64) (78.2%)  |
|   mkoderer   |   69 (0|20|49|0) (71.0%)  |
| jaypipes **  |   65 (0|22|0|43) (66.2%)  |
|igawa |   49 (0|10|39|0) (79.6%)  |
|   oomichi|   30 (0|9|21|0) (70.0%)   |
| jogo |   26 (0|12|14|0) (53.8%)  |
|   adalbas|   22 (0|4|18|0) (81.8%)   |
| ravikumar-venkatesan |   22 (0|2|20|0) (90.9%)   |
|   ivan-zhu   |   21 (0|10|11|0) (52.4%)  |
|   mriedem|13 (0|4|9|0) (69.2%)   |
|   andrea-frittoli|12 (0|4|8|0) (66.7%)   |
|   mkollaro   |10 (0|5|5|0) (50.0%)   |
|  zhikunliu   |10 (0|4|6|0) (60.0%)   |
|Anju5 |9 (0|0|9|0) (100.0%)   |
|   anteaya|7 (0|3|4|0) (57.1%)|
| Anju |7 (0|0|7|0) (100.0%)   |
|   steve-stevebaker   |6 (0|3|3|0) (50.0%)|
|   prekarat   |5 (0|3|2|0) (40.0%)|
|rahmu |5 (0|2|3|0) (60.0%)|
|   psedlak|4 (0|3|1|0) (25.0%)|
|minsel|4 (0|3|1|0) (25.0%)|
|zhiteng-huang |3 (0|2|1|0) (33.3%)|
| maru |3 (0|1|2|0) (66.7%)|
|   iwienand   |3 (0|1|2|0) (66.7%)|
|FujiokaYuuichi|3 (0|1|2|0) (66.7%)|
|dolph |3 (0|0|3|0) (100.0%)   |
| cthiel-suse  |3 (0|0|3|0) (100.0%)   |
|walter-boring | 2 (0|2|0|0) (0.0%)|
|bnemec| 2 (0|2|0|0) (0.0%)|
|   lifeless   |2 (0|1|1|0) (50.0%)|
|fabien-boucher|2 (0|1|1|0) (50.0%)|
| alex_gaynor  |2 (0|1|1|0) (50.0%)|
|alaski|2 (0|1|1|0) (50.0%)|
|   krtaylor   |2 (0|0|2|0) (100.0%)   |
|   cbehrens   |2 (0|0|2|0) (100.0%)   |
|   Sumanth|2 (0|0|2|0) (100.0%)   |
| ttx  | 1 (0|1|0|0) (0.0%)|
|   rvaknin|   

[openstack-dev] [qa] Proposed Agenda for Today's QA meeting - 17:00 UTC (13:00 EDT)

2013-09-12 Thread Sean Dague

Reminder for today's QA meeting at 17:00 UTC:

Proposed Agenda for September 12 2013
* Leader for next week's meeting - sdague on a plane back from LinuxCon 
during this time

* New core members (sdague)
* Summit planning (sdague)
* Blueprints (sdague)
* neutron testing status (mlavalle)
* whitebox test removal (sdague)
* Critical Reviews (sdague)
* OpenStack qa repo for test plans that can be pointed at from 
blueprints (mkollaro)


Full agenda here - 
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting, please feel free 
to edit and add proposed items to it if you think there is something we 
are missing. If you do so, please put your IRC nick on your item so that 
we know who will be leading the discussion.


Thanks all, and see you at the meeting.

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova FFE request for get scheduler hints API

2013-09-12 Thread Gary Kotton
+1

From: Day, Phil philip@hp.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Thursday, September 12, 2013 3:40 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
openstack-dev@lists.openstack.org
Subject: [openstack-dev] Nova FFE request for get scheduler hints API

Hi Folks,

I’d like the following change to be considered for an FFE please:  
https://review.openstack.org/#/c/34291/

This change adds a capability that was agreed as needed at the Havana design 
summit, is functionally complete (including the corresponding python-novaclient 
support (https://review.openstack.org/#/c/38847/), and has been through many 
review cycles since it was first posted in June.   The change includes both V2 
and V3 api changes.

The only remaining review comment at the point when it got hit by FF was 
whether it was appropriate for this change to introduce a scheduler/api.py 
layer rather than following the existing (and exceptional) process of calling 
the scheduler/rpcapi.py methods directly.   Since this is the first query to 
call (rather than cast) into the scheduler it seems to me a sensible 
abstraction to add, and makes the scheduler consistent with all other services 
in Nova.

The change is low risk in that it only adds a new query path to the scheduler, 
and does not alter any existing code paths.

Thanks for the consideration,
Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] ssbench - recreating connection messages on both master and remote clients

2013-09-12 Thread Snider, Tim
I'm running ssbench with remote clients. The following message appears on all 
remote nodes and the master. Is this an indication of networking problems? If 
so what should I look at?

INFO:root:ConnectionPool: re-creating connection...
INFO:root:{'block_size': None, 'container': 'ssbench_000194', 'name': 
'500mb.1_000211', 'size_str': '500mb.1', 'network_timeout': 20.0, 
'auth_kwargs': {'insecure': '', 'storage_urls': None, 'token': None, 
'auth_version': '1.0', 'os_options': {'region_name': None, 'tenant_id': None, 
'auth_token': None, 'endpoint_type': None, 'tenant_name': None, 'service_type': 
None, 'object_storage_url': None}, 'user': 'test:tester', 'key': 'testing', 
'cacert': None, 'auth_url': 'http://192.168.10.68:8080/auth/v1.0'}, 
'head_first': False, 'type': 'upload_object', 'connect_timeout': 10.0, 'size': 
524288000} succeeded after 8 tries

Thanks,
Tim

Timothy Snider
Strategic Planning  Architecture - Advanced Development
NetApp
316-636-8736 Direct Phone
316-213-0223 Mobile Phone
tim.sni...@netapp.com
netapp.com http://www.netapp.com/?ref_source=eSig
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The road to RC1, and Skipping Release meeting next Tuesday

2013-09-12 Thread Thierry Carrez
Hi everyone,

I won't be able to run the Project/Release meeting next Tuesday, so I
propose we skip it and I'll follow up with the PTLs individually on
their projects status.

As a reminder, the focus at this point is to come up with comprehensive
release-critical bug lists, and to burn them fast enough to be able to
publish a first release candidate in the next weeks.

I quickly set up a graph to track how well we were doing in that area:
http://old-wiki.openstack.org/rc/

Note that each project can publish its RC1 at a different time, whenever
it fixes all its RC bugs. The Icehouse development cycle opens for
that project at that time: the faster you fix bugs, the earlier you unfreeze :)

The bug lists can be found at:
https://launchpad.net/keystone/+milestone/havana-rc1
https://launchpad.net/glance/+milestone/havana-rc1
https://launchpad.net/nova/+milestone/havana-rc1
https://launchpad.net/horizon/+milestone/havana-rc1
https://launchpad.net/neutron/+milestone/havana-rc1
https://launchpad.net/cinder/+milestone/havana-rc1
https://launchpad.net/ceilometer/+milestone/havana-rc1
https://launchpad.net/heat/havana/+milestone/havana-rc1
https://launchpad.net/swift/+milestone/1.9.3

Ideally, we should fix all the bugs in those lists (and publish RC1)
before the end of the month.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova FFE request for get scheduler hints API

2013-09-12 Thread Thierry Carrez
Day, Phil wrote:
 [...]
 The change is low risk in that it only adds a new query path to the
 scheduler, and does not alter any existing code paths.
 [...]

At this point the main issue with this is the distraction it would
generate (it still needs a bit of review to clear the last comments) as
well as add to the docs and test work. I fail to see how including this
is essential to the success of the Havana release, so the trade-off
doesn't look worth it to me.

I'm happy to be overruled by Russell if he thinks that extra feature is
worth the limited disruption it would cause.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE Request: image-multiple-location support

2013-09-12 Thread Thierry Carrez
lzy@gmail.com wrote:
 BP: https://blueprints.launchpad.net/nova/+spec/image-multiple-location
 
 Since a dependent patch (https://review.openstack.org/#/c/44316/) was
 delayed in getting merged, the main patch
 https://review.openstack.org/#/c/33409/ has been held by FF. It's very
 close to being merged and has waited about 3 months; could you please
 take a look and let it go into H?

So, this is a significant feature... which paradoxically is a good
reason to accept it *and* to deny it. On one hand it would be nice to
complete this (with Glance support for it being landed), but on the
other it's not really a self-contained feature and I could see it have
bugs (or worse, create regressions).

My answer would probably have been different if this request had been
posted a week ago, but at this point, I would lean towards -1.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need some clarity on security group protocol numbers vs names

2013-09-12 Thread Mark McClain
Names are not necessarily portable across implementations and this would be a 
major change to make this late in the cycle.  At this point in the cycle, we 
need to focus on ensuring fixes minimize disruption.

mark

On Sep 11, 2013, at 6:03 PM, Arvind Somya (asomya) aso...@cisco.com wrote:

 Ok, those were some good points. I personally like the approach of letting
 each implementation specify its own set of supported protocols.
 
 I'll change my patch to simply convert all protocols to names (more
 readable).
 
 
 Thanks
 Arvind
 
 On 9/11/13 3:06 PM, Justin Hammond justin.hamm...@rackspace.com wrote:
 
 I agree with you. Plugin was a mere example and it does make sense to
 allow the provider to define custom protocols.
 
 +1
 
 On 9/11/13 12:46 PM, Akihiro Motoki amot...@gmail.com wrote:
 
 Hi Justin,
 
 My point is what
 
 On Thu, Sep 12, 2013 at 12:46 AM, Justin Hammond
 justin.hamm...@rackspace.com wrote:
 As it seems the review is no longer the place for this discussion, I
 will
 copy/paste my inline comments here:
 
 I dislike the idea of passing magical numbers around to define
 protocols
 (defined or otherwise). I believe there should be a common set of
 protocols with their numbers mapped (such as this constants business)
 and
 a well defined way to validate/list said common constants.
 
 I agree that value should be validated appropriately in general.
 A configurable list of allowed protocols looks good to me.
 
 wishes to add support for a protocol outside of the common case, it
 should
 be added to the list in a pluggable manner.
 Ex: common defines the constants 1, 6, 17 to be valid but
 my_cool_plugin
 wants to support 42. It should be my plugin's responsibility to add 42
 to
 the list of valid protocols by appending to the list given a pluggable
 interface to do so. I do not believe plugins should continue to update
 the
 common.constants file with new protocols, but I do believe explicitly
 stating which protocols are valid is better than allowing users to
 possibly submit protocols erroneously.
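
A minimal sketch of what that kind of pluggable validation could look like
(illustrative only, not Neutron code):

    # Illustrative sketch: a common base list that a plugin -- or a deployer
    # via configuration -- can extend, accepting either well-known names or
    # raw protocol numbers. Whether unknown numbers should also be accepted
    # is exactly the policy question discussed in this thread.
    BASE_PROTOCOLS = {'tcp': 6, 'udp': 17, 'icmp': 1}


    class ProtocolRegistry(object):
        def __init__(self, extra=None):
            # 'extra' lets a plugin or provider add protocols, e.g. {'gre': 47}.
            self._by_name = dict(BASE_PROTOCOLS)
            self._by_name.update(extra or {})
            self._numbers = set(self._by_name.values())

        def validate(self, protocol):
            """Return the protocol number for a known name or number."""
            if isinstance(protocol, int):
                if protocol in self._numbers:
                    return protocol
            elif str(protocol).lower() in self._by_name:
                return self._by_name[str(protocol).lower()]
            raise ValueError("Unsupported IP protocol: %s" % protocol)


    # Example: a hypothetical plugin that also wants to allow GRE (47).
    registry = ProtocolRegistry(extra={'gre': 47})
    assert registry.validate('TCP') == 6
    assert registry.validate(47) == 47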
 
 I think this is just a case where a backend plugin defines allowed protocols.
 
 I also see a different case: a cloud provider defines allowed protocols.
 For example, the VLAN network type of the OVS plugin can convey any type of
 packets, including GRE, SCTP and so on, if a provider wants to do so.
 We need to allow a provider to configure the list.
 
 Considering the above, what we need to do looks:
 (a) to validate values properly,
 (b) to allow a plugin to define what protocols should be allowed
   (I think we need two types of lists: possible protocols and
 default allowed protocols)
  (c) to allow a cloud provider (deployer) to customize allowed protocols.
    (Of course (c) is a subset of the possible protocols in (b))
 
 Does it make sense?
  The above is just a starting point for the discussion and some lists can be
  omitted.
 
 # Whether (c) is needed or not depends on the default list of (b).
  # If it is wide enough, (c) is not needed. The current list for (b) is
  # [tcp, udp, icmp], and it looks like too small a set to me, so it is
  # better to have (c) too.
 
 If the plugins use a system such as this, it is possible that new,
 common,
 protocols can be found to be core. See NETWORK_TYPE constants.
 
  I think the situation is a bit different. What network types are allowed is
  tightly coupled with a plugin implementation, and a cloud provider chooses
  a plugin based on their needs. Thus the mechanism of NETWORK_TYPE constants
  makes sense to me too.
 
 tl;dr: magic constants are no good, but values should be validated in a
 pluggable and explicit manner.
 
 As I said above, I agree it is important to validate values properly in
 general.
 
 Thanks,
 Akihiro
 
 
 
 
 On 9/11/13 10:40 AM, Akihiro Motoki amot...@gmail.com wrote:
 
 Hi all,
 
 Arvind, thank you for initiate the discussion about the ip protocol in
 security group rules.
 I think the discussion point can be broken down into:
 
 (a) how to specify ip protocol : by name, number, or both
 (b) what ip protocols can be specified: known protocols only, all
 protocols (or some subset of protocols including unknown protocols)
where known protocols is defined as a list in Neutron (a list
 of constants or a configurable list)
 
 --
 (b) is the main topic Arvind and I discussed in the review.
 If only known protocols are allowed, we cannot allow protocols which
 are not listed in the known protocol list.
 For instance, if tcp, udp and icmp are registered as known
 protocols (this is the current neutron implementation),
  a tenant cannot allow sctp or gre.
 
  The pro of known protocols only is that the infrastructure provider can
  control which protocols are allowed.
  The con is that users cannot use IP protocols not listed in the known list
  and a provider needs to maintain a known protocol list.
  The pros and cons of all protocols allowed are vice versa.
 
 If a list of known protocols is configurable, we can cover both cases,
 e.g., an empty list or a list [ANY] means all 

Re: [openstack-dev] Nova FFE request for get scheduler hints API

2013-09-12 Thread Day, Phil
I don't think it needs any more reviewing - just a couple of +2's (but I could 
be biased) :-)

Phil

 -Original Message-
 From: Thierry Carrez [mailto:thie...@openstack.org]
 Sent: 12 September 2013 14:39
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Nova FFE request for get scheduler hints API
 
 Day, Phil wrote:
  [...]
  The change is low risk in that it only adds a new query path to the
  scheduler, and does not alter any existing code paths.
  [...]
 
 At this point the main issue with this is the distraction it would generate 
 (it still
 needs a bit of review to clear the last comments) as well as add to the docs 
 and
 test work. I fail to see how including this is essential to the success of the
 Havana release, so the trade-off doesn't look worth it to me.
 
 I'm happy to be overruled by Russell if he thinks that extra feature is worth 
 the
 limited disruption it would cause.
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-12 Thread Michael Basnight
On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:

 Sergey Lukjanov wrote:
 
 [...]
 As you can see, resources provisioning is just one of the features and the 
 implementation details are not critical for overall architecture. It 
 performs only the first step of the cluster setup. We’ve been considering 
 Heat for a while, but ended up direct API calls in favor of speed and 
 simplicity. Going forward Heat integration will be done by implementing 
 extension mechanism [3] and [4] as part of Icehouse release.
 
 The next part, Hadoop cluster configuration, already extensible and we have 
 several plugins - Vanilla, Hortonworks Data Platform and Cloudera plugin 
 started too. This allow to unify management of different Hadoop 
 distributions under single control plane. The plugins are responsible for 
 correct Hadoop ecosystem configuration at already provisioned resources and 
 use different Hadoop management tools like Ambari to setup and configure all 
 cluster  services, so, there are no actual provisioning configs on Savanna 
 side in this case. Savanna and its plugins encapsulate the knowledge of 
 Hadoop internals and default configuration for Hadoop services.
 
 My main gripe with Savanna is that it combines (in its upcoming release)
 what sounds like to me two very different services: Hadoop cluster
 provisioning service (like what Trove does for databases) and a
 MapReduce+ data API service (like what Marconi does for queues).
 
 Making it part of the same project (rather than two separate projects,
 potentially sharing the same program) make discussions about shifting
 some of its clustering ability to another library/project more complex
 than they should be (see below).
 
 Could you explain the benefit of having them within the same service,
 rather than two services with one consuming the other ?

And for the record, I don't think that Trove is the perfect fit for it today. We
are still working on a clustering API. But when we create it, I would love the
Savanna team's input, so we can try to make a pluggable API that's usable for
people who want MySQL or Cassandra or even Hadoop. I'm less a fan of a
clustering library, because in the end, we will both have API calls like POST
/clusters, GET /clusters, and there will be API duplication between the
projects.

 
 The next topic is “Cluster API”.
 
 The concern that was raised is how to extract general clustering 
 functionality to the common library. Cluster provisioning and management 
 topic currently relevant for a number of projects within OpenStack 
 ecosystem: Savanna, Trove, TripleO, Heat, Taskflow.
 
 Still each of the projects has their own understanding of what the cluster 
 provisioning is. The idea of extracting common functionality sounds 
 reasonable, but details still need to be worked out. 
 
 I’ll try to highlight Savanna team current perspective on this question. 
 Notion of “Cluster management” in my perspective has several levels:
 1. Resources provisioning and configuration (like instances, networks, 
 storages). Heat is the main tool with possibly additional support from 
 underlying services. For example, instance grouping API extension [5] in 
 Nova would be very useful. 
 2. Distributed communication/task execution. There is a project in OpenStack 
 ecosystem with the mission to provide a framework for distributed task 
 execution - TaskFlow [6]. It’s been started quite recently. In Savanna we 
 are really looking forward to use more and more of its functionality in I 
 and J cycles as TaskFlow itself getting more mature.
 3. Higher level clustering - management of the actual services working on 
 top of the infrastructure. For example, in Savanna configuring HDFS data 
 nodes or in Trove setting up MySQL cluster with Percona or Galera. This 
 operations are typical very specific for the project domain. As for Savanna 
 specifically, we use lots of benefits of Hadoop internals knowledge to 
 deploy and configure it properly.
 
 Overall conclusion it seems to be that it make sense to enhance Heat 
 capabilities and invest in Taskflow development, leaving domain-specific 
 operations to the individual projects.
 
 The thing we'd need to clarify (and the incubation period would be used
 to achieve that) is how to reuse as much as possible between the various
 cluster provisioning projects (Trove, the cluster side of Savanna, and
 possibly future projects). Solution can be to create a library used by
 Trove and Savanna, to extend Heat, to make Trove the clustering thing
 beyond just databases...
 
 One way of making sure smart and non-partisan decisions are taken in
 that area would be to make Trove and Savanna part of the same program,
 or make the clustering part of Savanna part of the same program as
 Trove, while the data API part of Savanna could live separately (hence
 my question about two different projects vs. one project above).

Trove is not, nor will it be, a data API. I'd like to keep Savanna in its own 

Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-12 Thread Sudarshan Acharya

On Sep 12, 2013, at 10:30 AM, Michael Basnight wrote:

 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
 Sergey Lukjanov wrote:
 
 [...]
 As you can see, resources provisioning is just one of the features and the 
 implementation details are not critical for overall architecture. It 
 performs only the first step of the cluster setup. We’ve been considering 
 Heat for a while, but ended up direct API calls in favor of speed and 
 simplicity. Going forward Heat integration will be done by implementing 
 extension mechanism [3] and [4] as part of Icehouse release.
 
 The next part, Hadoop cluster configuration, is already extensible, and we have 
 several plugins - Vanilla, Hortonworks Data Platform, and a Cloudera plugin has 
 been started too. This allows unified management of different Hadoop 
 distributions under a single control plane. The plugins are responsible for 
 correct Hadoop ecosystem configuration on already-provisioned resources and 
 use different Hadoop management tools like Ambari to set up and configure all 
 cluster services, so there are no actual provisioning configs on the Savanna 
 side in this case. Savanna and its plugins encapsulate the 
 knowledge of Hadoop internals and default configurations for Hadoop services.
 
 My main gripe with Savanna is that it combines (in its upcoming release)
 what sounds to me like two very different services: a Hadoop cluster
 provisioning service (like what Trove does for databases) and a
 MapReduce+ data API service (like what Marconi does for queues).
 
 Making it part of the same project (rather than two separate projects,
 potentially sharing the same program) makes discussions about shifting
 some of its clustering ability to another library/project more complex
 than they should be (see below).
 
 Could you explain the benefit of having them within the same service,
 rather than two services with one consuming the other ?
 
 And for the record, I don't think that Trove is the perfect fit for it today. 
 We are still working on a clustering API. But when we create it, I would love 
 the Savanna team's input, so we can try to make a pluggable API that's usable 
 for people who want MySQL or Cassandra or even Hadoop. I'm less a fan of a 
 clustering library, because in the end, we will both have API calls like POST 
 /clusters, GET /clusters, and there will be API duplication between the 
 projects.


+1. I am looking at the new cluster provisioning API in Trove [1] and the one 
in Savanna [2], and they look quite different right now. Definitely some 
collaboration is needed even on the API spec, not just the backend.

[1] 
https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API#POST_.2Fclusters
[2] 
https://savanna.readthedocs.org/en/latest/userdoc/rest_api_v1.0.html#start-cluster
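
To make the overlap concrete, here is a purely illustrative sketch of how both
services end up exposing a "POST /clusters"-style call. The payloads below are
simplified, hypothetical shapes loosely inspired by [1] and [2] (not the exact
request schemas), and the endpoints, tenant and token are placeholders:

# Illustrative only: hypothetical request shapes, placeholder endpoints/token.
import json
import requests

TOKEN = "PLACEHOLDER-KEYSTONE-TOKEN"
HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# Trove-style cluster creation (hypothetical shape)
trove_cluster = {
    "cluster": {
        "name": "mysql-cluster",
        "datastore": {"type": "mysql", "version": "5.5"},
        "instances": [{"flavorRef": "2", "volume": {"size": 10}}] * 3,
    }
}
requests.post("https://trove.example.com/v1.0/TENANT/clusters",
              data=json.dumps(trove_cluster), headers=HEADERS)

# Savanna-style cluster creation (hypothetical shape)
savanna_cluster = {
    "name": "hadoop-cluster",
    "plugin_name": "vanilla",
    "hadoop_version": "1.2.1",
    "cluster_template_id": "CLUSTER-TEMPLATE-UUID",
    "default_image_id": "IMAGE-UUID",
}
requests.post("https://savanna.example.com/v1.1/TENANT/clusters",
              data=json.dumps(savanna_cluster), headers=HEADERS)

From the client's point of view the two calls look almost identical, which is
exactly the duplication being discussed.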


 
 
 The next topic is “Cluster API”.
 
 The concern that was raised is how to extract general clustering 
 functionality into a common library. Cluster provisioning and management is 
 a topic currently relevant to a number of projects within the OpenStack 
 ecosystem: Savanna, Trove, TripleO, Heat, and TaskFlow.
 
 Still, each of the projects has its own understanding of what cluster 
 provisioning is. The idea of extracting common functionality sounds 
 reasonable, but the details still need to be worked out. 
 
 I’ll try to highlight the Savanna team’s current perspective on this question. 
 The notion of “cluster management”, in my view, has several levels:
 1. Resource provisioning and configuration (instances, networks, storage). 
 Heat is the main tool here, with possibly additional support from underlying 
 services. For example, the instance grouping API extension [5] in Nova would 
 be very useful. 
 2. Distributed communication/task execution. There is a project in the OpenStack 
 ecosystem with the mission to provide a framework for distributed task 
 execution - TaskFlow [6]. It was started quite recently. In Savanna we are 
 really looking forward to using more and more of its functionality in the I 
 and J cycles as TaskFlow itself gets more mature.
 3. Higher-level clustering - management of the actual services working on 
 top of the infrastructure. For example, configuring HDFS data nodes in Savanna, 
 or setting up a MySQL cluster with Percona or Galera in Trove. These 
 operations are typically very specific to the project domain. As for Savanna 
 specifically, we make heavy use of knowledge of Hadoop internals to deploy 
 and configure it properly.
 
 The overall conclusion seems to be that it makes sense to enhance Heat 
 capabilities and invest in TaskFlow development, leaving domain-specific 
 operations to the individual projects.
 
 The thing we'd need to clarify (and the incubation period would be used
 to achieve that) is how to reuse as much as possible between the various
 cluster provisioning projects (Trove, the cluster side of Savanna, and
 possibly future projects). A solution could be to create a library used by
 Trove and Savanna, to extend Heat, or to make Trove the 

Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Christopher Armstrong
I apologize that this mail will appear at the incorrect position in
the thread, but I somehow got unsubscribed from openstack-dev due to
bounces and didn't receive the original email.

On 9/11/13 03:15 UTC, Adrian Otto adrian.o...@rackspace.com wrote:
 So, I don't intend to argue the technical minutia of each design point, but I 
 challenge you to make sure that we
 (1) arrive at a simple system that any OpenStack user can comprehend,

I think there is tension between simplicity of the stack and
simplicity of the components in that stack. We're making sure that the
components will be simple, self-contained, and easy to understand, and
the stack will need to plug them together in an interesting way.

 (2) responds quickly to alarm stimulus,

Like Zane, I don't really buy the argument that the API calls to Heat
will make any significant impact on the speed of autoscaling. There
are MUCH bigger wins in e.g. improving the ability for people to use
cached, pre-configured images vs a couple of API calls. Even once
you've optimized all of that, booting an instance still takes much,
much longer than running the control code.

 (3) is unlikely to fail,

I know this isn't exactly what you mentioned, but I have some things
to say not about resilience but instead about reaction to failures.

The traceability and debuggability of errors is something that
unfortunately plagues all of OpenStack, both for developers and
end-users. It is fortunate that OpenStack components make good use of
each other, but unfortunate that

1. there's no central, filterable logging facility (without investing
significant ops effort to deploy a third-party one yourself);
2. there's not enough consistent tagging of requests throughout the system
to allow operators looking at logs to understand how a user's
original request led to some ultimate error;
3. there are no ubiquitous mechanisms for propagating errors between service
APIs in a way that ultimately leads back to the consumer of the
service;
4. many services don't even report detailed information with errors
that happen internally.

I believe we'll have to do what we can, especially in #3 and #4, to
make sure that the users of autoscaling and Heat have good visibility
into the system when errors occur.

 (4) can be easily customized with user-supplied logic that controls how the 
 scaling happens, and under what conditions.

I think this is a good argument for using Heat for the scaling
resources instead of doing it separately. One of the biggest new
features that the new AS design provides is the ability to scale *any*
resource, not just AWS::EC2::Instance. This means you can write your
own custom resource with custom logic and scale it trivially. Doing it
in terms of resources instead of launch configurations provides a lot
of flexibility, and a Resource implementation is a nice way to wrap up
that custom logic. If we implemented this in the AS service without
using Heat, we'd either be constrained to nova instances again, or
have to come up with our own API for customization.

As far as customizing the conditions under which scaling happens,
that's provided at the lowest common denominator by providing a
webhook trigger for scaling policies (on top of which will be
implemented convenient Ceilometer integration support).  Users will be
able to provide their own logic and hit the webhook whenever they want
to execute the policy.
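
As a rough sketch of what "provide their own logic and hit the webhook" could
look like from the user side (the webhook URL and the load-checking function
below are placeholders, not part of any final API):

# Minimal sketch: user-supplied logic that decides when to scale and then
# executes a scaling policy by POSTing to its pre-signed webhook URL.
import requests

SCALE_UP_WEBHOOK = "https://autoscale.example.com/v1/webhooks/abc123"  # placeholder

def current_load():
    # Stand-in for the user's own metric source (Ceilometer, app metrics, ...).
    return 0.92

if current_load() > 0.8:
    # The capability is the URL itself, so no extra auth or body is assumed here.
    resp = requests.post(SCALE_UP_WEBHOOK)
    resp.raise_for_status()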


 It would be better if we could explain Autoscale like this:

 Heat -> Autoscale -> Nova, etc.
 -or-
 User -> Autoscale -> Nova, etc.

 This approach allows use cases where (for whatever reason) the end user does 
 not want to use Heat at all, but still wants something simple to be 
 auto-scaled for them. Nobody would be scratching their heads wondering why 
 things are going in circles.

The Heat behind Autoscale isn't something that the *consumer* of
the service knows about, only the administrator. Granted, the API
design that we're running with *does* currently require the user to
provide snippets of heat resource templates -- just specifying the
individual resources that should be scaled -- but I think it would be
trivial to support an alternative type of launch configuration that
does the translation to heat templates in the background, if we really
want to hide all the Heatiness from a user who just wants the
simplicity of knowing only about Nova and autoscaling.

To conclude, I'd like to just say I basically agree with what Clint,
Keith, and Steven have said in other messages in this thread. It
doesn't appear that the design of Heat autoscaling (informed by Zane,
Clint, Angus and others) fails to meet the criteria you've brought up.

-- 
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 9:36 AM, Dean Troyer dtro...@gmail.com wrote:

 DevStack has long had a config setting in localrc called EXTRA_OPTS that
 allowed arbitrary settings to be added to /etc/nova/nova.conf [DEFAULT]
 section.  Additional files and sections have recently been implemented with
 a similar scheme.  I don't think this scales well as at a minimum every
 config file and section needs to be individually handled.

 I'd like to get some feedback on the following two proposals, or hear
 other ideas on how to generalize solving the problem of setting arbitrary
 configuration values.


 a) Create conf.d/*.conf files as needed and process each file present into
 a corresponding config file.  These files would not be supplied by DevStack
 but created and maintained locally.

 Example: conf.d/etc/nova/nova.conf:
 [DEFAULT]
 use_syslog = True

 [osapi_v3]
 enabled = False


 b) Create a single service.local.conf file for each project (Nova, Cinder,
 etc) that contains a list of settings to be applied to the config files for
 that service.

 Example: nova.local.conf:
 # conf file names are parsed out of the section name below between '[' and
 the first ':'
 [/etc/nova/nova.conf:DEFAULT]
 use_syslog = True

 [/etc/nova/nova.conf:osapi_v3]
 enabled = False


 Both cases need to be able to specify the destination config file and
 section in addition to the attribute name and value.

 Thoughts?
 dt

 [Prompted by review https://review.openstack.org/44266]

 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1 for not dumping in lib/xxx

Option 'a' seems a bit easier to manage in terms of the number of files,
etc., but I wouldn't have a strong preference between the two options
presented.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Flash storage article

2013-09-12 Thread John Griffith
http://tinyurl.com/ljexdyk
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack]

2013-09-12 Thread XINYU ZHAO
See if there is any pip install error before this failure. Search for
Traceback. Copying and pasting screen-g-api.txt and the error log here would also help.


On Thu, Sep 12, 2013 at 8:46 AM, Shake Chen shake.c...@gmail.com wrote:


 Now I am trying to install devstack on CentOS 6.4 and hit this error:


 2013-09-12 23:26:06 + screen -S stack -p g-api -X stuff 'cd
 /opt/stack/glance; /usr/bin/glance-api
 --config-file=/etc/glance/glance-api.conf || echo g-api 'ailed to start |
 tee /opt/stack/status/stack/g-api.failure
 2013-09-12 23:26:06 + echo 'Waiting for g-api (10.1.199.8:9292) to
 start...'
 2013-09-12 23:26:06 Waiting for g-api (10.1.199.8:9292) to start...
 2013-09-12 23:26:06 + timeout 60 sh -c 'while ! http_proxy= wget -q -O-
 http://10.1.199.8:9292; do sleep 1; done'
 2013-09-12 23:27:06 + die 195 'g-api did not start'
 2013-09-12 23:27:06 + local exitcode=0
 [root@node08 devstack]# 2013-09-12 23:27:06 + set +o xtrace
 2013-09-12 23:27:06 [Call Trace]
 2013-09-12 23:27:06 stack.sh:1159:start_glance
 2013-09-12 23:27:06 /opt/stack/devstack/lib/glance:195:die
 2013-09-12 23:27:06 [ERROR] /opt/stack/devstack/lib/glance:195 g-api did
 not start


 Another person on Ubuntu 12.04 also has the same problem.



 --
 Shake Chen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DevStack] Generalize config file settings

2013-09-12 Thread Dean Troyer
DevStack has long had a config setting in localrc called EXTRA_OPTS that
allowed arbitrary settings to be added to /etc/nova/nova.conf [DEFAULT]
section.  Additional files and sections have recently been implemented with
a similar scheme.  I don't think this scales well as at a minimum every
config file and section needs to be individually handled.

I'd like to get some feedback on the following two proposals, or hear other
ideas on how to generalize solving the problem of setting arbitrary
configuration values.


a) Create conf.d/*.conf files as needed and process each file present into
a corresponding config file.  These files would not be supplied by DevStack
but created and maintained locally.

Example: conf.d/etc/nova/nova.conf:
[DEFAULT]
use_syslog = True

[osapi_v3]
enabled = False


b) Create a single service.local.conf file for each project (Nova, Cinder,
etc) that contains a list of settings to be applied to the config files for
that service.

Example: nova.local.conf:
# conf file names are parsed out of the section name below between '[' and
the first ':'
[/etc/nova/nova.conf:DEFAULT]
use_syslog = True

[/etc/nova/nova.conf:osapi_v3]
enabled = False


Both cases need to be able to specify the destination config file and
section in addition to the attribute name and value.
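
Not speaking for how DevStack would actually implement it, but here is a rough
sketch (in Python, purely to show the logic) of the parse-and-merge step that
proposal (b) implies. The file and section names come from the example above;
everything else is an assumption:

# Rough sketch, not DevStack code: read a service.local.conf whose section
# names encode "<target conf file>:<target section>" and merge each option
# into the target file.  Stdlib only.
import configparser

def apply_local_conf(local_conf_path):
    overrides = configparser.ConfigParser()
    overrides.optionxform = str                # keep option name case as written
    overrides.read(local_conf_path)

    for section in overrides.sections():
        # e.g. "/etc/nova/nova.conf:DEFAULT" -> target file + target section,
        # split at the first ':' as described above.
        target_file, _, target_section = section.partition(":")

        target = configparser.ConfigParser()
        target.optionxform = str
        target.read(target_file)
        if target_section != "DEFAULT" and not target.has_section(target_section):
            target.add_section(target_section)
        for option, value in overrides.items(section):
            target.set(target_section, option, value)
        with open(target_file, "w") as f:
            target.write(f)

apply_local_conf("nova.local.conf")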

Thoughts?
dt

[Prompted by review https://review.openstack.org/44266]

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-12 Thread Alexander Kuznetsov
Hi folks,

Currently Nova doesn’t provide information about the host of a virtual
machine to non-admin users. Is it possible to change this situation? This
information is needed in the Hadoop deployment case, because Hadoop is now
aware of the virtual environment, and this knowledge helps Hadoop achieve
better performance and robustness.

Alexander Kuznetsov.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone and Multiple Identity Sources

2013-09-12 Thread Dolph Mathews
On Thu, Sep 12, 2013 at 3:15 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:



 On 11/09/2013 22:05, Adam Young wrote:


 What's the use case for including providers in the service catalog?
 i.e. why do Identity API clients need to be aware of the Identity
 Providers?

 In the federation protocol API, the user can specify the IdP that they
 are using. Keystone needs to know what are the set of acceptable IdPs,
 somehow.  The first thought was reuse of the Service catalog.
 It probably makes sense to let an administrator enumerate the IdPs
 registered with Keystone, and what protocol each supports.


 There are several reasons why Keystone needs to be aware of which IDPs are
 out there.
 1. Trust. Keystone administrators will only trust a subset of available
 IDPs, and this information needs to be configured into Keystone in some way
 2. Discovery. Keystone needs to be able to discover details of which IDPs
 are trusted and how to contact them (meta data). This needs to be
 configured into Keystone somehow
 3. Helping the user. The user might needs to know which IdPs it can use
 and which it cannot, so Keystone may need to provide the user with a list
 of IdPs to choose from.


Of all these use cases, only the last one actually involves API clients,
and it's a "might". Even then, it doesn't feel like an obvious use case for
the service catalog. The Identity API doesn't currently have a great way to
advertise supported authentication methods (the 401 response lists them, but you
can't just GET that list).


https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#authentication-failures

I suspect these two challenges are related and should be solved together?
i.e. GET /auth/methods - returns a dict of auth methods containing
metadata about each
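
Purely as an illustration of that idea (the endpoint does not exist in the
current Identity API, and both the URL and the response shape below are
hypothetical):

# Hypothetical sketch only: /v3/auth/methods is a proposal, not a real call.
import requests

KEYSTONE = "https://keystone.example.com/v3"   # placeholder endpoint

resp = requests.get(KEYSTONE + "/auth/methods")
resp.raise_for_status()

# A response might enumerate each method plus metadata, for example:
# {"methods": {"password": {},
#              "token": {},
#              "saml2": {"idps": ["https://idp-a.example.org",
#                                 "https://idp-b.example.org"]}}}
for name, meta in resp.json().get("methods", {}).items():
    print(name, meta)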



 Using the service catalog to hold the above information was a pragmatic
 implementation decision that Kent made. What is conceptually needed is a
 directory service that Keystone can contact to find out the required
 information. So we should design federated keystone to have a directory
 service interface, and then implementors may choose to use the service
 catalog or something else to fulfil this function

 regards

 David


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-12 Thread Mike Asthalter
Hello,

Can someone please explain the plans for our 2 wadls moving forward:

  *   wadl in original heat repo: 
https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
  *   wadl in api-site repo: 
https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

1. Is there a need to maintain 2 wadls moving forward, with the wadl in the 
original heat repo containing calls that may not be implemented, and the wadl 
in the api-site repo containing implemented calls only?
Anne Gentle advises as follows in regard to these 2 wadls:

I'd like the WADL in api-site repo to be user-facing. The other WADL can be 
truth if it needs to be a specification that's not yet implemented. If the WADL 
in api-site repo is true and implemented, please just maintain one going 
forward.

2. If we maintain 2 wadls, what are the consequences (gerrit reviews, docs out 
of sync, etc.)?

3. If we maintain only the 1 orchestration wadl, how do we want to pull in the 
wadl content to the api-ref doc 
(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 from the orchestration wadl in the api-site repo: subtree merge, other?

Please excuse if these questions have already been discussed.

Thanks!

Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting reminder September 12 18:00 UTC

2013-09-12 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in the #openstack-meeting-alt 
channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_September.2C_12

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20130912T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova FFE request for get scheduler hints API

2013-09-12 Thread Russell Bryant
On 09/12/2013 09:39 AM, Thierry Carrez wrote:
 Day, Phil wrote:
 [...]
 The change is low risk in that it only adds a new query path to the
 scheduler, and does not alter any existing code paths.
 [...]
 
 At this point the main issue with this is the distraction it would
 generate (it still needs a bit of review to clear the last comments) as
 well as add to the docs and test work. I fail to see how including this
 is essential to the success of the Havana release, so the trade-off
 doesn't look worth it to me.
 
 I'm happy to be overruled by Russell if he thinks that extra feature is
 worth the limited disruption it would cause.
 

I like this feature and I really do want it to go in, but I have the
same concern.  We don't have *that* much time to produce the RC1 bug
list and get everything fixed.  I think we need to place the bar really
high at this point for any further distractions from that.  Hopefully we
can get this in early in Icehouse, at least.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-12 Thread Jaromir Coufal

Hello everybody,

I just started an etherpad with various names of concepts in Tuskar. It 
is important to get all of us on the same page, so that the usage of and 
discussions around Tuskar concepts are clear and easy to follow (also for 
users, not just contributors!).


https://etherpad.openstack.org/tuskar-naming

Keep in mind that we will use these names in the API, CLI and UI as well, 
so they should be as descriptive as possible, but not very long or 
difficult either.


Etherpad is not the best tool for markup, but I did my best. Each 
concept which needs a name is bold and is followed by a bunch of bullets - 
a description, suggested names, plus discussion under each suggestion of 
why yes or no.


Name suggestions are in underlined italic font.

Feel free to add / update / discuss anything in the document, because I 
might have forgotten a bunch of stuff.


Thank you all and follow the etherpad :)
-- Jarda


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Serious Performance Issue in OVS plugin

2013-09-12 Thread Robin Wang
Hi all,

In our Grizzly deployment, we found a distinct reduction in networking
throughput.

While using the hybrid vif-driver together with the OVS plugin, throughput is
dramatically reduced to 2.34 Gb/s. If we switch back to the common vif-driver,
throughput is 12.7 Gb/s.

A bug is filed on it: https://bugs.launchpad.net/neutron/+bug/1223267

The hybrid vif-driver makes it possible to leverage the iptables-based security
group feature. However, this dramatic performance reduction might be a
big cost, especially in a 10 GbE environment.

It would be appreciated if you could share some suggestions on how to solve this
issue. Thanks very much.

Best,
Robin Wang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

2013-09-12 Thread Mike Spreitzer
We are currently explicitly considering location and space.  For example, 
a template can require that a volume be in a disk that is directly 
attached to the machine hosting the VM to which the volume is attached. 
Spinning rust bandwidth is much trickier because it is not something you 
can simply add up when you combine workloads.  The IOPS, as well as the 
B/S, that a disk will deliver depends on the workload mix on that disk. 
While the disk may deliver X IOPS when serving only application A, and Y 
when serving only application B, you cannot conclude that it will serve 
(X+Y)/2 when serving (A+B)/2.  While we hope to do better in the future, 
we currently handle disk bandwidth in non-quantitative ways.  One is that 
a template may request that a volume be placed such that it does not 
compete with any other volume (i.e., is the only one on its disk). Another 
is that a template may specify a type for a volume, which effectively 
maps to a Cinder volume type that has been pre-defined to correspond to a 
QoS defined in an enterprise storage subsystem.

The choice between fast/expensive vs. slow/cheap storage is currently left 
to higher layers.  That could be pushed down, supposing there is a 
suitably abstract yet accurate way of describing how the tradeoff choice 
should be made.

I think Savanna people are on this list too, so I presume it's a good 
place for this discussion.

Thanks,
Mike



From:   shalz sh...@hotmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   09/11/2013 09:55 PM
Subject:Re: [openstack-dev] [heat] Comments/questions on the 
instance-group-api-extension blueprint



Mike,

You mention: "We are now extending that example to include storage, and we 
are also working examples with Hadoop." 

In the context of your examples / scenarios, do these placement decisions 
consider storage performance and capacity on a physical node?

For example: Based on application needs, and IOPS, latency requirements - 
carving out a SSD storage or a traditional spinning disk block volume?  Or 
say for cost-efficiency reasons using SSD caching on Hadoop name nodes? 

I'm investigating a) per-node PCIe SSD deployment needs in an OpenStack / 
Hadoop environment, and b) selected-node SSD caching, specifically for 
OpenStack Cinder.  Hope this is the right forum to ask 
this question.

rgds,
S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

2013-09-12 Thread Mike Spreitzer
Gary Kotton gkot...@vmware.com wrote on 09/12/2013 05:40:59 AM:

 From: Gary Kotton gkot...@vmware.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
 Date: 09/12/2013 05:46 AM
 Subject: Re: [openstack-dev] [heat] Comments/questions on the 
 instance-group-api-extension blueprint
 
 Hi,
 For some reason I am unable to access your proposed talk. I am not 
 100% sure but I think that the voting may be closed. We have weekly 
 scheduling meetings (https://wiki.openstack.org/wiki/
 Meetings#Scheduler_Sub-group_meeting). It would be nice if you could
 attend and it will give you a platform to raise and share ideas with
 the rest of the guys in the community.
 At the moment the scheduling subgroup is working  on our ideas for 
 the design summit sessions. Please see https://
 etherpad.openstack.org/IceHouse-Nova-Scheduler-Sessions
 Thanks
 Gary

Worse yet, I know of no way to navigate to a list of design summit 
proposals.  What am I missing?

The scheduler group meeting conflicts with another meeting that I already 
have and will be difficult to move.  I will see what I can do 
asynchronously.

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-12 Thread Keith Bray
Steve, I think I see where I introduced some confusion...   Below, when
you draw:
User -> Trove -> (Heat -> Nova)
I come at it from a view that the version of Nova that Trove talks to (via
Heat or not) is not necessarily a publicly available Nova endpoint (I.e.
Not in the catalog), although it *could* be. For example, there are
reasons that Trove may provision to an internal-only Nova end-point that
is tricked out with custom scheduler or virt driver (e.g. Containers) or
special DB performant hardware, etc.  This Nova endpoint would be
different than the Nova endpoint in the end-user's catalog.  But, I
realize that Trove could interact with the catalog endpoint for Nova as
well. I'm sorry for the confusion I introduced by how I was thinking about
that.  I guess this is one of those differences between a default
OpenStack setup vs. how a service provider might want to run the system
for scale and performance.  The cool part is, I think Heat and all these
general services can work in a  variety of cool configurations!

-Keith  

On 9/12/13 2:30 AM, Steven Hardy sha...@redhat.com wrote:

On Thu, Sep 12, 2013 at 01:07:03AM +, Keith Bray wrote:
 There is context missing here.  heat->trove interaction is through the
 trove API.  trove->heat interaction is a _different_ instance of Heat,
 internal to trove's infrastructure setup, potentially provisioning
 instances.   Public Heat wouldn't be creating instances and then telling
 trove to make them into databases.

Well that's a deployer decision, you wouldn't need (or necessarily want)
to
run an additional heat service (if that's what you mean by instance in
this case).

What you may want is for the trove-owned stacks to be created in
a different tenant (owned by the trove service user in the services
tenant?)

So the top level view would be:

User -> Trove -> (Heat -> Nova)

Or if the user is interacting via a Trove Heat resource

User -> Heat -> Trove -> (Heat -> Nova)

There is nothing circular here, Trove uses Heat as an internal
implementation detail:

* User defines a Heat template, and passes it to Heat
* Heat parses the template and translates a Trove resource into API calls
* Trove internally defines a stack, which it passes to Heat

In the last step, although Trove *could* just pass on the user token it
has
from the top level API interaction to Heat, you may not want it to,
particularly in public cloud environments.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-12 Thread Steve Baker
On 09/13/2013 08:28 AM, Mike Asthalter wrote:
 Hello,

 Can someone please explain the plans for our 2 wadls moving forward:

   * wadl in original heat
 repo: 
  https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
   * wadl in api-site
 repo: 
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

The original intention was to delete the heat wadl when the api-site one
became merged.
 1. Is there a need to maintain 2 wadls moving forward, with the wadl
 in the original heat repo containing calls that may not be
 implemented, and the wadl in the api-site repo containing implemented
 calls only? 

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing. The other
 WADL can be truth if it needs to be a specification that's not yet
 implemented. If the WADL in api-site repo is true and implemented,
 please just maintain one going forward.


 2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
 docs out of sync, etc.)?

 3. If we maintain only the 1 orchestration wadl, how do we want to
 pull in the wadl content to the api-ref doc
  (https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 from the orchestration wadl in the api-site repo: subtree merge, other?


These are good questions, and could apply equally to other out-of-tree
docs as features get added during the development cycle.

I still think that our wadl should live only in api-site.  If api-site
has no branching policy to maintain separate Havana and Icehouse
versions then maybe Icehouse changes should be posted as WIP reviews
until they can be merged.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to create vmdk for openstack usage

2013-09-12 Thread Dan Wendlandt
Hi Jason,

The best place to look is the official openstack compute documentation that
covers vSphere in Nova:
http://docs.openstack.org/trunk/openstack-compute/admin/content/vmware.html

In particular, check out the section titled Images with VMware vSphere
(pasted below).  As that text suggests, the most likely issue with your
VMDK not booting is that you may have passed the wrong vmware_adaptertype
to glance when creating the image.  Also note the statement indicating that
all VMDK images must be flat (i.e., single file), otherwise Glance will
be confused.

Dan


 Images with VMware vSphere

When using either VMware driver, images should be uploaded to the OpenStack
Image Service in the VMDK format. Both thick and thin images are currently
supported and all images must be flat (i.e. contained within 1 file). For
example

To load a thick image with a SCSI adaptor:


$ glance image-create name=ubuntu-thick-scsi disk_format=vmdk
container_format=bare \
is_public=true --property vmware_adaptertype=lsiLogic \
--property vmware_disktype=preallocated \
--property vmware_ostype=ubuntu64Guest < ubuntuLTS-flat.vmdk

To load a thin image with an IDE adaptor:


$ glance image-create name=unbuntu-thin-ide disk_format=vmdk
container_format=bare \
is_public=true --property vmware_adaptertype=ide \
--property vmware_disktype=thin \
--property vmware_ostype=ubuntu64Guest < unbuntuLTS-thin-flat.vmdk

The complete list of supported vmware disk properties is documented in the
Image Management section. It's critical that the adaptertype is correct; in
fact, the image will not boot with an incorrect adaptertype. If you have
the meta-data VMDK descriptor file, the ddb.adapterType property specifies the
adaptertype. The default adaptertype is lsilogic, which is SCSI.
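
A quick way to sanity-check that before uploading is to read the adapter type
straight out of the descriptor. Here is a minimal sketch, assuming you have the
small plain-text descriptor .vmdk (not just the -flat data file):

# Rough sketch: pull ddb.adapterType out of a VMDK descriptor so the matching
# --property vmware_adaptertype value can be passed to glance (note the glance
# property uses "lsiLogic" while the descriptor usually says "lsilogic").
def vmdk_adapter_type(descriptor_path):
    with open(descriptor_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("ddb.adapterType"):
                # line looks like: ddb.adapterType = "lsilogic"
                return line.split("=", 1)[1].strip().strip('"')
    return None

print(vmdk_adapter_type("ubuntuLTS.vmdk"))   # e.g. "lsilogic", "ide", "buslogic"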







On Thu, Sep 12, 2013 at 11:29 AM, Jason Zhang bearovercl...@gmail.com wrote:


 Hi Dears,

 In the document https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide
 under the 'Get an initial VMDK to work with',
 it's said, 'There are a lot of “gotchas” around what VMDK disks work with
 OpenStack + vSphere,'.
 The appendix section lists one of the gotchas. Are there any more gotchas?

 During our testing, the vmdk instance on boot-up gives an 'Operating System
 not found' error,
 and I am not sure whether this is an already-known issue or not.

 Thanks in advance!

 Best regards,

 Jason



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-12 Thread Nachi Ueno
Hi Folks

Is anyone interested in Kibana + ElasticSearch Integration with ceilometer?
# Note: This discussion is not for Havana.

I have registered BP. (for IceHouse)
https://blueprints.launchpad.net/ceilometer/+spec/elasticsearch-driver

This is demo video.
http://www.youtube.com/watch?v=8SmA0W0hd4Ifeature=youtu.be

I wrote some sample storage driver for elastic search in ceilometer.
This is WIP - https://review.openstack.org/#/c/46383/

This integration sounds cool to me, because if we can integrate them,
we can use it as Log as a Service.

IMO, there are some discussion points.

[1] Should we add an ElasticSearch query API to Ceilometer, or should we
let the user hit the ElasticSearch API directly? (See the sketch below.)

Note that ElasticSearch has no tenant-based authentication, so in that
case we would need to integrate Keystone with ElasticSearch (or Horizon).

[2] Should logs (syslog or any application log) be stored in
Ceilometer, or should that be a new OpenStack project?
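
To make [1] a bit more concrete, here is a minimal sketch of the kind of
tenant-scoped query a Ceilometer-side API (or a Keystone-aware proxy) would
have to issue on the user's behalf. The index name and the project_id field
are assumptions, not necessarily what the WIP driver uses:

# Minimal sketch: ElasticSearch itself won't enforce this filter, so some
# tenant-aware layer has to add it before the query reaches ES.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

def samples_for_tenant(project_id, size=10):
    body = {"query": {"term": {"project_id": project_id}}, "size": size}
    return es.search(index="ceilometer", body=body)

for hit in samples_for_tenant("TENANT-ID")["hits"]["hits"]:
    print(hit["_source"])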

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AUTO: kolod...@il.ibm.com is out of the office. (returning 02/10/2013)

2013-09-12 Thread Hillel Kolodner

I am out of the office until 02/10/2013.

I am out of the office starting Thursday, March 21, and returning on
Tuesday, April 2.  My access to email will be sporadic.


Note: This is an automated response to your message  Re: [openstack-dev]
run_tests in debug mode fails sent on 12/09/2013 11:50:00.

This is the only notification you will receive while this person is away.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE Request: image-multiple-location support

2013-09-12 Thread Russell Bryant
On 09/12/2013 09:43 AM, Thierry Carrez wrote:
 lzy@gmail.com wrote:
 BP: https://blueprints.launchpad.net/nova/+spec/image-multiple-location

 Since a dependent patch getting merger delay
 (https://review.openstack.org/#/c/44316/), so the main patch
 https://review.openstack.org/#/c/33409/ been hold by FF. It's very
 close to get merger and waited about 3 months, could you pls take a
 look and let it go in H?
 
 So, this is a significant feature... which paradoxically is a good
 reason to accept it *and* to deny it. On one hand it would be nice to
 complete this (with Glance support for it being landed), but on the
 other it's not really a self-contained feature and I could see it have
 bugs (or worse, create regressions).
 
 My answer would probably have been different if this request had been
 posted a week ago, but at this point, I would lean towards -1.
 

I'm also fairly concerned with any further distractions away from
focusing on building and fixing an RC1 bug list.  I'm -1 on any more
FFEs unless we really, really, really need it to go in.  This sounds
like a case that can wait for Icehouse.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Dolph Mathews
On Thu, Sep 12, 2013 at 2:34 AM, Roman Podolyaka rpodoly...@mirantis.com wrote:

 I can't agree more with Robert.

 Even if it was possible to downgrade all migrations without data loss, it
 would be required to make backups before DB schema upgrade/downgrade.

 E.g. MySQL doesn't support transactional DDL. So if a migration script
 can't be executed successfully for whatever reason (let's say we haven't
 tested it well enough on real data and it's turned out it has a few bugs),
 you will end up in a situation when the migration is partially applied...
 And migrations can possibly fail before backup tables are created, during
 this process or after it.


++ Data backups are a solved problem, and no DB admin should trust an
application to perform its own backups. If the application-specific
migration code is buggy (and therefore you need to restore from a
backup...), would you really trust the application-specific backup solution
to be any more reliable?



 Thanks,
 Roman


 On Thu, Sep 12, 2013 at 8:30 AM, Robert Collins robe...@robertcollins.net
  wrote:

 I think having backup tables adds substantial systematic complexity,
 for a small use case.

 Perhaps a better answer is to document in 'take a backup here' as part
 of the upgrade documentation and let sysadmins make a risk assessment.
 We can note that downgrades are not possible.

 Even in a public cloud doing trunk deploys, taking a backup shouldn't
 be a big deal: *those* situations are where you expect backups to be
 well understood; and small clouds don't have data scale issues to
 worry about.

 -Rob

 -Rob

 On 12 September 2013 17:09, Joshua Hesketh joshua.hesk...@rackspace.com
 wrote:
  On 9/4/13 6:47 AM, Michael Still wrote:
 
  On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
 
  +1 I think we should be reconstructing data where we can, but keeping
  track of
  deleted data in a backup table so that we can restore it on a
 downgrade
  seems
  like overkill.
 
  I guess it comes down to use case... Do we honestly expect admins to
  regret and upgrade and downgrade instead of just restoring from
  backup? If so, then we need to have backup tables for the cases where
  we can't reconstruct the data (i.e. it was provided by users and
  therefore not something we can calculate).
 
 
  So assuming we don't keep the data in some kind of backup state is
 there a
  way we should be documenting which migrations are backwards
 incompatible?
  Perhaps there should be different classifications for data-backwards
  incompatible and schema incompatibilities.
 
  Having given it some more thought, I think I would like to see
 migrations
  keep backups of obsolete data. I don't think it is unforeseeable that an
  administrator would upgrade a test instance (or less likely, a
 production)
  by accident or not realising their backups are corrupted, outdated or
  invalid. Being able to roll back from this point could be quite useful.
 I
  think potentially more useful than that though is that if somebody ever
  needs to go back and look at some data that would otherwise be lost it
 is
  still in the backup table.
 
  As such I think it might be good to see all migrations be downgradable
  through the use of backup tables where necessary. To couple this I
 think it
  would be good to have a standard for backup table naming and maybe
 schema
  (similar to shadow tables) as well as an official list of backup tables
 in
  the documentation stating which migration they were introduced and how
 to
  expire them.
 
  In regards to the backup schema, it could be exactly the same as the
 table
  being backed up (my preference) or the backup schema could contain just
 the
  lost columns/changes.
 
  In regards to the name, I quite like backup_table-name_migration_214.
 The
  backup table name could also contain a description of what is backed up
 (for
  example, 'uuid_column').
 
  In terms of expiry they could be dropped after a certain
 release/version or
  left to the administrator to clear out similar to shadow tables.
 
  Thoughts?
 
  Cheers,
  Josh
 
  --
  Rackspace Australia
 
 
  Michael
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-12 Thread Monty Taylor


On 09/12/2013 04:33 PM, Steve Baker wrote:
 On 09/13/2013 08:28 AM, Mike Asthalter wrote:
 Hello,

 Can someone please explain the plans for our 2 wadls moving forward:

   * wadl in original heat
 repo: 
  https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
   * wadl in api-site
 repo: 
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

 The original intention was to delete the heat wadl when the api-site one
 became merged.
 1. Is there a need to maintain 2 wadls moving forward, with the wadl
 in the original heat repo containing calls that may not be
 implemented, and the wadl in the api-site repo containing implemented
 calls only? 

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing. The other
 WADL can be truth if it needs to be a specification that's not yet
 implemented. If the WADL in api-site repo is true and implemented,
 please just maintain one going forward.


 2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
 docs out of sync, etc.)?

 3. If we maintain only the 1 orchestration wadl, how do we want to
 pull in the wadl content to the api-ref doc
  (https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 from the orchestration wadl in the api-site repo: subtree merge, other?


 These are good questions, and could apply equally to other out-of-tree
 docs as features get added during the development cycle.
 
 I still think that our wadl should live only in api-site.  If api-site
 has no branching policy to maintain separate Havana and Icehouse
 versions then maybe Icehouse changes should be posted as WIP reviews
 until they can be merged.

I believe there is no branching in api-site because it's describing API
and there is no such thing as a havana or icehouse version of an API -
there are the API versions and they are orthogonal to server release
versions. At least in theory. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to create vmdk for openstack usage

2013-09-12 Thread Jason Zhang


Hi Dears,

In the document https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide, 
under 'Get an initial VMDK to work with',
it's said, 'There are a lot of gotchas around what VMDK disks work with 
OpenStack + vSphere.'

The appendix section lists one of the gotchas. Are there any more gotchas?

During our testing, the vmdk instance on boot-up gives an 'Operating 
System not found' error.

I am not sure whether this is an already-known issue or not.

Thanks in advance!

Best regards,

Jason

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FFE Request: image-multiple-location support

2013-09-12 Thread lzy....@gmail.com
On Thu, Sep 12, 2013 at 9:43 PM, Thierry Carrez thie...@openstack.org wrote:
 lzy@gmail.com wrote:
 BP: https://blueprints.launchpad.net/nova/+spec/image-multiple-location

 Since a dependent patch getting merger delay
 (https://review.openstack.org/#/c/44316/), so the main patch
 https://review.openstack.org/#/c/33409/ been hold by FF. It's very
 close to get merger and waited about 3 months, could you pls take a
 look and let it go in H?

 So, this is a significant feature... which paradoxically is a good
 reason to accept it *and* to deny it. On one hand it would be nice to
 complete this (with Glance support for it being landed), but on the
 other it's not really a self-contained feature and I could see it have
 bugs (or worse, create regressions).

Hello Thierry, two questions, whether or not we get the FFE:
1. Why do you think it's not a self-contained feature/patch? Do you think
the patch is missing something?
2. I'd very much like to know what's wrong with the current patch #33409; can
you point out the bugs you mentioned above?


 My answer would probably have been different if this request had been
 posted a week ago, but at this point, I would lean towards -1.


I have two points here:
1. The dependent patch #44316 was only merged this Monday, so I
could not send this FFE request out earlier.
2. I committed patch #33409 in June and followed up on all
comments on time, so at this point I can only say the review progress
let me down, TBH.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks for your input, ttx.

zhiyan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 9:44 PM, Monty Taylor mord...@inaugust.com wrote:

 os-apply-config


Doesn't that just convert a json syntax to a file with the syntax Dean was
describing?  Maybe it's changed, but that's what I *thought* it did.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-12 Thread Nirmal Ranganathan
On Wed, Sep 11, 2013 at 8:39 AM, Erik Bergenholtz 
ebergenho...@hortonworks.com wrote:


 On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com wrote:

 Openstack Big Data Platform


 On Sep 10, 2013, at 8:39 PM, David Scott david.sc...@cloudscaling.com
 wrote:

 I vote for 'Open Stack Data'


 On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo zhongyue@intel.com wrote:

 Why not OpenStack MapReduce? I think that pretty much says it all?


 On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell g...@glenc.io wrote:

 performant isn't a word. Or, if it is, it means having performance.
 I think you mean high-performance.


 On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com wrote:

 Rough cut -

 Program: OpenStack Data Processing
 Mission: To provide the OpenStack community with an open, cutting edge,
 performant and scalable data processing stack and associated management
 interfaces.


 Proposing a slightly different mission:

 To provide a simple, reliable and repeatable mechanism by which to deploy
 Hadoop and related Big Data projects, including management, monitoring and
 processing mechanisms driving further adoption of OpenStack.


+1. I liked the data processing aspect as well, since EDP api directly
relates to that, maybe a combination of both.



 On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:

 It sounds too broad IMO. Looks like we need to define Mission Statement
 first.

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com wrote:

  My suggestion OpenStack Data Processing.


 On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Hi folks,

 due to the Incubator Application we should prepare Program name
 and Mission statement for Savanna, so, I want to start mailing
 thread about it.

 Please, provide any ideas here.

 P.S. List of existing programs: https://wiki.openstack.org/wiki/Programs
 P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *Glen Campbell*
 http://glenc.io • @glenc

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *Intel SSG/STOD/DCST/CIT*
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-12 Thread Nirmal Ranganathan
You can use hostId, which is a hash of the host and tenant and is part of the
core Nova API as well.
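
For example, a minimal sketch with placeholder credentials (the novaclient
import path below is just one way to get a client and is an assumption):

# Minimal sketch: hostId is returned to ordinary (non-admin) users and is an
# opaque, tenant-scoped hash of the host, so a tenant can tell whether two of
# its own instances share a host without learning the real hostname.
from novaclient.v1_1 import client

nova = client.Client("USER", "PASSWORD", "TENANT", "http://keystone:5000/v2.0/")

for server in nova.servers.list():
    print(server.name, server.hostId)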


On Thu, Sep 12, 2013 at 10:45 AM, Alexander Kuznetsov 
akuznet...@mirantis.com wrote:

 Hi folks,

 Currently Nova doesn’t provide information about the host of virtual
 machine for non admin users. Is it possible to change this situation? This
 information is needed in Hadoop deployment case. Because now Hadoop aware
 about virtual environment and this knowledge help Hadoop to achieve a
 better performance and robustness.

 Alexander Kuznetsov.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Nirmal

http://rnirmal.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-12 Thread Anne Gentle
On Thu, Sep 12, 2013 at 10:41 PM, Monty Taylor mord...@inaugust.com wrote:



 On 09/12/2013 04:33 PM, Steve Baker wrote:
  On 09/13/2013 08:28 AM, Mike Asthalter wrote:
  Hello,
 
  Can someone please explain the plans for our 2 wadls moving forward:
 
* wadl in original heat
  repo:
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
* wadl in api-site
  repo:
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl
 
  The original intention was to delete the heat wadl when the api-site one
  became merged.


Sounds good.


  1. Is there a need to maintain 2 wadls moving forward, with the wadl
  in the original heat repo containing calls that may not be
  implemented, and the wadl in the api-site repo containing implemented
  calls only?
 
  Anne Gentle advises as follows in regard to these 2 wadls:
 
  I'd like the WADL in api-site repo to be user-facing. The other
  WADL can be truth if it needs to be a specification that's not yet
  implemented. If the WADL in api-site repo is true and implemented,
  please just maintain one going forward.
 
 
  2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
  docs out of sync, etc.)?
 
  3. If we maintain only the 1 orchestration wadl, how do we want to
  pull in the wadl content to the api-ref doc
  (
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
  from the orchestration wadl in the api-site repo: subtree merge, other?
 
 


Thanks Mike for asking these questions.

I've been asking the infrastructure team for help with pulling content like
the current nova request/response examples into the api-site repo. No
subtree merges please. We'll find some way. Right now it's manual.


  These are good questions, and could apply equally to other out-of-tree
  docs as features get added during the development cycle.
 
  I still think that our wadl should live only in api-site.  If api-site
  has no branching policy to maintain separate Havana and Icehouse
  versions then maybe Icehouse changes should be posted as WIP reviews
  until they can be merged.

 I believe there is no branching in api-site because it's describing the API,
 and there is no such thing as a havana or icehouse version of an API -
 there are the API versions, and they are orthogonal to server release
 versions. At least in theory. :)


Yep, that's our working theory. :)

Anne


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-12 Thread Nirmal Ranganathan
Not host capacity; just an opaque reference to distinguish a host is enough.
Hadoop can use that information to place block replicas appropriately. For
example, if the replication count is 3 and a host/rack topology is provided
to Hadoop, it will place each replica on a different host/rack, provided one
is available.
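
To make that concrete, here is a rough sketch of how such an opaque host
reference could be wired into Hadoop's rack awareness. Hadoop invokes a
topology script (configured via topology.script.file.name, or
net.topology.script.file.name on newer releases) with node addresses and
expects one rack path per node; the mapping file path and format below are
made up for the example:

    #!/usr/bin/env python
    # Hypothetical Hadoop topology script: maps each node address passed on
    # the command line to a "rack" path built from an opaque host identifier
    # (e.g. nova's hostId), so HDFS spreads replicas across physical hosts.
    # Assumes a deployer-maintained "ip host-id" mapping file.
    import sys

    MAPPING_FILE = '/etc/hadoop/host_topology.map'  # made-up path

    def load_mapping(path):
        mapping = {}
        with open(path) as f:
            for line in f:
                ip, host_id = line.split()
                mapping[ip] = host_id
        return mapping

    def main():
        mapping = load_mapping(MAPPING_FILE)
        for node in sys.argv[1:]:
            # Unknown nodes fall back to a default rack.
            print('/' + mapping.get(node, 'default-rack'))

    if __name__ == '__main__':
        main()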


On Thu, Sep 12, 2013 at 11:29 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

  From: Alexander Kuznetsov akuznet...@mirantis.com
  ...

  Currently Nova doesn’t provide information about the host of a virtual
  machine to non-admin users. Is it possible to change this situation?
  This information is needed in the Hadoop deployment case: Hadoop is now
  aware of the virtual environment, and that knowledge helps it achieve
  better performance and robustness.


 How much host information are you looking for?  Do you want to know the
 capacities of the hosts?  Do you want to know about all the guests of the
 hosts?  If so, why is this only a Savanna issue?  If not, how much good is
 limited host information going to do Savanna?

 Thanks,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Nirmal

http://rnirmal.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-12 Thread Mike Spreitzer
 From: Nirmal Ranganathan rnir...@gmail.com
 ...
 Not host capacity; just an opaque reference to distinguish a host is
 enough. Hadoop can use that information to place block replicas
 appropriately. For example, if the replication count is 3 and a host/
 rack topology is provided to Hadoop, it will place each replica on a
 different host/rack, provided one is available.

What if there are more than three racks, but some are better choices than 
others (perhaps even some are ruled out) due to considerations of various 
sorts of capacity and usage?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-12 Thread Monty Taylor
Hey everybody!

You know how, when you want to make a new project, you basically take an
existing one, like nova, copy files, and then start deleting? Nobody
likes that.

Recently, cookiecutter came to my attention, so we put together a
cookiecutter repo for openstack projects to make creating a new one easier:

https://git.openstack.org/cgit/openstack-dev/cookiecutter

It's pretty easy to use. First, install cookiecutter:

sudo pip install cookiecutter

Next, tell cookiecutter you'd like to create a new project based on the
openstack template:

cookiecutter git://git.openstack.org/openstack-dev/cookiecutter.git

Cookiecutter will then ask you three questions:

a) What repo groups should it go in? (eg. openstack, openstack-infra,
stackforge)
b) What is the name of the repo? (eg. mynewproject)
c) What is the project's short description? (eg. OpenStack Wordpress as
a Service)

And boom, you'll have a directory all set up with your new project ready
and waiting for a git init ; git add . ; git commit
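
If you want to script this rather than answer prompts, newer cookiecutter
releases also expose a Python API; here is a minimal, non-interactive sketch
(the extra_context keys are guesses at the template's variable names - check
the template's cookiecutter.json for the real ones):

    # Hypothetical non-interactive use of cookiecutter's Python API. The
    # variable names below are assumptions about what the template's
    # cookiecutter.json defines; adjust them to the real keys.
    from cookiecutter.main import cookiecutter

    cookiecutter(
        'git://git.openstack.org/openstack-dev/cookiecutter.git',
        no_input=True,                  # skip the interactive prompts
        extra_context={
            'repo_group': 'stackforge',
            'repo_name': 'mynewproject',
            'project_short_description': 'OpenStack Wordpress as a Service',
        },
    )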

Hope this helps folks out - and we'll try to keep it up to date with
things that become best practices - patches welcome on that front.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-12 Thread Nirmal Ranganathan
On Fri, Sep 13, 2013 at 12:05 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

  From: Nirmal Ranganathan rnir...@gmail.com
  ...

  Not host capacity; just an opaque reference to distinguish a host is
  enough. Hadoop can use that information to place block replicas
  appropriately. For example, if the replication count is 3 and a host/
  rack topology is provided to Hadoop, it will place each replica on a
  different host/rack, provided one is available.


 What if there are more than three racks, but some are better choices than
 others (perhaps even some are ruled out) due to considerations of various
 sorts of capacity and usage?


Well, that's left up to the specific block placement policies in HDFS; all we
are providing with the topology information is a hint on node/rack placement.



 Thanks,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-12 Thread John Griffith
On Thu, Sep 12, 2013 at 11:08 PM, Monty Taylor mord...@inaugust.com wrote:

 Hey everybody!

 You know how, when you want to make a new project, you basically take an
 existing one, like nova, copy files, and then start deleting? Nobody
 likes that.

 Recently, cookiecutter came to my attention, so we put together a
 cookiecutter repo for openstack projects to make creating a new one easier:

 https://git.openstack.org/cgit/openstack-dev/cookiecutter

 It's pretty easy to use. First, install cookiecutter:

 sudo pip install cookiecutter

 Next, tell cookiecutter you'd like to create a new project based on the
 openstack template:

 cookiecutter git://git.openstack.org/openstack-dev/cookiecutter.git

 Cookiecutter will then ask you three questions:

 a) What repo groups should it go in? (eg. openstack, openstack-infra,
 stackforge)
 b) What is the name of the repo? (eg. mynewproject)
 c) What is the project's short description? (eg. OpenStack Wordpress as
 a Service)

 And boom, you'll have a directory all set up with your new project ready
 and waiting for a git init ; git add . ; git commit

 Hope this helps folks out - and we'll try to keep it up to date with
 things that become best practices - patches welcome on that front.

 Monty

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Nice!!  Just took it for a spin, worked great!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-12 Thread Michael Basnight


Sent from my digital shackles

On Sep 12, 2013, at 10:20 PM, John Griffith john.griff...@solidfire.com wrote:

 Nice!!  Just took it for a spin, worked great!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev