[openstack-dev] OpenStack Oslo Project - thoughts about a new blueprint: Fault Verbosity

2013-08-29 Thread GROSZ, Maty (Maty)
Hey *,

I have registered a new blueprint regarding additional verbosity information 
within API fault messages (can be viewed here: 
https://blueprints.launchpad.net/oslo/+spec/additional-fault-verbos).
Your thoughts and comments are more than welcome!

Thanks,

Maty.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-29 Thread Julien Danjou
On Thu, Aug 29 2013, Gordon Chung wrote:

 the first question is, Ceilometer currently does metering/alarming/maybe a 
 few other things... will it go beyond that? specifically: capacity 
 planning, optimization, dashboard(i assume this falls under 
 horizon/ceilometer plugin work), analytics. 
 they're pretty broad items so i would think they would probably end up 
 being separate projects?

I think we can extend Ceilometer API to help build such tools, but I
don't think we should build these tools inside Ceilometer.

 another question is what metrics will we capture.  some of the product 
 teams we have collect metrics on datacenter memory/cpu utilization, 
 cluster cpu/memory/vm, and a bunch of other clustered stuff.
 i'm a nova-idiot, but is this info possible to retrieve? is the consensus 
 that Ceilometer will collect anything and everything the other projects 
 allow for?

Yeah, I think that Ceilometer's the place to collect anything. I don't
know whether the metrics you are talking about are collectable through Nova,
though.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-29 Thread Robert Collins
On 29 August 2013 17:33, Christopher Yeoh cbky...@gmail.com wrote:
 On Wed, 28 Aug 2013 15:56:33 +
 Joshua Harlow harlo...@yahoo-inc.com wrote:

 Shrinking that rotation granularity would be reasonable too. Rotate
 once every 2 weeks or some other time period still seems useful to me.


 I wonder if the quality of reviewing would drop if someone was doing it
 all day long though. IIRC the link that Robert pointed to in another
 thread seemed to indicate that the ability for someone to pick up bugs
 reduces significantly if they are doing code reviews continuously.

Right, it did - 30m or something from memory; so we have an upper
bound on reviews in a day - review, rest (e.g. hack), review, ...

-Rob

Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-29 Thread Lu, Lianhao

Gordon Chung wrote on 2013-08-29:
 so we're in the process of selling Ceilometer to product teams so that 
 they'll adopt it and we'll get more funding :).  one item that comes
 up from product teams is 'what will Ceilometer be able to do and where does 
 the product takeover and add value?'
 
 the first question is, Ceilometer currently does metering/alarming/maybe a 
 few other things... will it go beyond that? specifically: capacity
 planning, optimization, dashboard(i assume this falls under 
 horizon/ceilometer plugin work), analytics.
 they're pretty broad items so i would think they would probably end up being 
 separate projects?
 
 another question is what metrics will we capture.  some of the product teams 
 we have collect metrics on datacenter memory/cpu
 utilization, cluster cpu/memory/vm, and a bunch of other clustered stuff.
 i'm a nova-idiot, but is this info possible to retrieve? is the consensus 
 that Ceilometer will collect anything and everything the other projects
 allow for?
 
We're currently implementing a pluggable framework in nova to collect metrics
from nova compute nodes and send them onto the message bus; see
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling for
the patches, and the corresponding ceilometer notification listener at
https://review.openstack.org/42838.
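For illustration, a minimal sketch of what such a monitor plugin could look
like. The class and method names here (ComputeMonitor, get_metrics) and the
compute.metrics.update event type are assumptions made for the sketch, not the
interface actually proposed in those patches:

from __future__ import print_function

import time


class ComputeMonitor(object):
    """Hypothetical base class a metrics plugin would implement."""

    def get_metrics(self):
        """Return a list of (name, value, timestamp) samples."""
        raise NotImplementedError()


class CpuMonitor(ComputeMonitor):
    def get_metrics(self):
        # A real plugin would read /proc/stat or ask the hypervisor;
        # a constant keeps the sketch self-contained.
        return [('cpu.user.percent', 12.5, time.time())]


def emit(monitors, notify):
    # Gather samples from every registered plugin and hand them to the
    # supplied notify callable (standing in for the message bus).
    samples = [s for mon in monitors for s in mon.get_metrics()]
    notify('compute.metrics.update', {'metrics': samples})


if __name__ == '__main__':
    emit([CpuMonitor()], lambda event, payload: print(event, payload))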

Besides that, the ceilometer hardware agent
(https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices)
is the place to poll for data from any other physical hosts.

-Lianhao



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Marconi

2013-08-29 Thread Flavio Percoco

On 28/08/13 14:28 -0400, Joe Gordon wrote:

On Thu, Aug 22, 2013 at 12:29 PM, Kurt Griffiths kurt.griffi...@rackspace.com
wrote:

What was wrong with qpid, rabbitmq, activemq, zeromq, ${your favorite
queue here} that required marconi?

   That's a good question. The features supported by AMQP brokers, ZMQ, and
   Marconi certainly do overlap in some areas. At the same time, however, each
   of these options offers distinct features that may or may not align with
   what a web developer is trying to accomplish.

   Here are a few of Marconi's unique features, relative to the other options
   you mentioned:

 *  Multi-tenant
 *  Keystone integration
 *  100% Python
 *  First-class, stateless, firewall-friendly HTTP(S) transport driver
 *  Simple protocol, easy for clients to implement
 *  Scales to an unlimited number of queues and clients
 *  Per-queue stats, useful for monitoring and autoscale
 *  Tag-based message filtering (planned)

   Relative to SQS, Marconi:

 *  Is open-source and community-driven
 *  Supports private and hybrid deployments
 *  Offers hybrid pub-sub and producer-consumer semantics
 *  Provides a clean, modern HTTP API
 *  Can route messages to multiple queues (planned)
 *  Can perform custom message transformations (planned)

   Anyway, that's my $0.02 - others may chime in with their own thoughts.
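As a rough illustration of the "simple protocol" point above, the whole
producer/consumer flow can be driven with a plain HTTP client. The paths,
headers and body shapes below follow the v1 API drafts and should be treated
as assumptions rather than the final API:

from __future__ import print_function

import json
import uuid

import requests

BASE = 'http://localhost:8888/v1'            # assumed local Marconi endpoint
HEADERS = {'Client-ID': str(uuid.uuid4()),   # the v1 drafts require a client id
           'Content-Type': 'application/json'}

# Create a queue (an idempotent PUT) and post a message with a 5 minute TTL.
requests.put(BASE + '/queues/demo', headers=HEADERS)
requests.post(BASE + '/queues/demo/messages', headers=HEADERS,
              data=json.dumps([{'ttl': 300, 'body': {'event': 'hello'}}]))

# Claim up to 10 messages for 60 seconds and show what came back.
resp = requests.post(BASE + '/queues/demo/claims?limit=10', headers=HEADERS,
                     data=json.dumps({'ttl': 60, 'grace': 60}))
print(resp.status_code, resp.text)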


I assume the rabbitmq vs sqs debate (http://notes.variogr.am/post/67710296/
replacing-amazon-sqs-with-something-faster-and-cheaper) is the same for
rabbitmq vs marconi?



As for speed, it may be, but we're not able to tell what the trade-off
is just yet. The reasoning is based on the fact that we're adding an
extra layer on top of existing technologies, which will slow operations
down a bit.

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-29 Thread Daniel P. Berrange
On Tue, Aug 06, 2013 at 11:36:44PM -0300, Monty Taylor wrote:
 
 
 On 08/06/2013 11:14 PM, Robert Collins wrote:
  On 7 August 2013 11:22, Jay Buffington m...@jaybuff.com wrote:
  
  ln -s /usr/lib64/python2.6/site-packages/libvirtmod_qemu.so
  $(VENV)/lib/python2.6/site-packages/
 
  Why isn't libvirt-python on pypi?  AFAICT, nothing is stopping us from
  uploading it.  Maybe we should just stick it on there and this issue
  will be resolved once and for all.
  
  Please please oh yes please :).
 
 It doesn't build from a setup.py, so there is nothing to upload. It's
 built as part of the libvirt C library, and its build depends on scripts
 that autogenerate source code from the C library headers (think swig,
 except homegrown)

FYI, I have raised the issue of separating libvirt python into a
separate module for PyPI on the upstream libvirt mailing list. OpenStack
are not the only people asking for this, so I think it is inevitable
that libvirt upstream will start to offer the python binding on PyPI
in the future; it is a question of when, not if.

  https://www.redhat.com/archives/libvir-list/2013-August/msg01525.html


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-29 Thread Jaume Devesa
Hi Sam,

that's great work, and it will for sure be my default driver for my
development environment.

I have a question: once the review is approved and the code merged into
master, do you plan to create a Nova driver subteam as Xen, Hyper-V and
others have? I would be glad to cooperate on it.


On 29 August 2013 07:54, Sam Alba sam.a...@gmail.com wrote:

 On Wed, Aug 28, 2013 at 9:12 AM, Sam Alba sam.a...@gmail.com wrote:
  Thanks a lot everyone for the nice feedback. I am going to work hard
  to get all those new comments addressed to be able to re-submit a new
  patchset today or tomorrow (the latter).
 
  On Wed, Aug 28, 2013 at 7:02 AM, Russell Bryant rbry...@redhat.com
 wrote:
  On 08/28/2013 05:18 AM, Daniel P. Berrange wrote:
  On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
  On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
  Hi all,
 
  We've been working hard during the last couple of weeks with some
  people. Brian Waldon helped a lot designing the Glance integration
 and
  driver testing. Dean Troyer helped a lot on bringing Docker support
 in
  Devstack[1]. On top of that, we got several feedback on the Nova code
  review which definitely helped to improve the code.
 
  The blueprint[2] explains what Docker brings to Nova and how to use
 it.
 
  I have to say that this blueprint is a fantastic example of how we
  should be writing design documents. It addressed almost all of my
  questions about the integration.
 
  Yes, Sam (and any of the other Docker guys involved) have been great at
  responding to reviewers' requests to expand their design document. The
  latest update has really helped in understanding how this driver works
  in the context of openstack from an architectural and functional POV.
 
  They've been great in responding to my requests, as well.  The biggest
  thing was that I wanted to see devstack support so that it's easily
  testable, both by developers and by CI.  They delivered.
 
  So, in general, I'm good with this going in.  It's just a matter of
  getting the code review completed in the next week before feature
  freeze.  I'm going to try to help with it this week.
 

 If someone wants to take another look at
 https://review.openstack.org/#/c/32960/, we answered/fixed all
 previous comments.


 --
 @sam_alba

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] cluster scaling on the 0.2 branch

2013-08-29 Thread Nadezhda Privalova
Hi Jon,

Unfortunately, I'm not able to reproduce this issue with the vanilla plugin.
The behavior you described is not correct. Here is the JSON I used to try to
reproduce the issue:

{
    "add_node_groups": [
        {
            "name": "worker-tasktracker",
            "count": 1,
            "node_processes": [
                "tasktracker"
            ],
            "flavor_id": "42"
        }
    ],
    "resize_node_groups": [
        {
            "name": "worker-datanode",
            "count": 2
        }
    ]
}

I added a 'print instances' statement in the vanilla plugin's cluster scaling
method. Here is the result:

[savanna.db.models.Instance[object at 10e59fe90]
{created=datetime.datetime(2013, 8, 29, 12, 9, 24, 307150),
updated=datetime.datetime(2013, 8, 29, 12, 11, 45, 614216), extra=None,
node_group_id=u'2b892060-b53e-4224-98ae-911aa014',
instance_id=u'1f50ceca-ee84-477d-a730-79839af59d08', instance_name=u*
'np-oozie-old-0.2-worker-tasktracker-001'*, internal_ip=u'10.155.0.108',
management_ip=u'172.18.79.248', volumes=[]},
savanna.db.models.Instance[object at 10e588590]
{created=datetime.datetime(2013, 8, 29, 12, 9, 25, 751723),
updated=datetime.datetime(2013, 8, 29, 12, 11, 45, 614467), extra=None,
node_group_id=u'9935bf76-d08b-4cdd-ad20-bbcb2ea5666f',
instance_id=u'67092d96-9808-4830-be4a-9f7f54e04b58', instance_name=u'*
np-oozie-old-0.2-worker-datanode-002'*, internal_ip=u'10.155.0.110',
management_ip=u'172.18.79.254', volumes=[]}]

So the behavior is as expected.
We may try to debug this together if you want. Please feel free to ping me.

Thanks,
Nadya






On Thu, Aug 29, 2013 at 5:49 AM, Jon Maron jma...@hortonworks.com wrote:

 Hi,

   I am trying to back port the HDP scaling implementation to the 0.2
 branch and have run into a number of differences.  At this point I am
 trying to figure out whether what I am observing is intended or symptoms of
 a bug.

   For a case in which I am adding one instance to an existing node group
 as well as an additional node group with one instance I am seeing the
 following arguments being passed to the scale_cluster method of the plugin:

 - A cluster object that contains the following set of node groups:

 [savanna.db.models.NodeGroup[object at 10d8bdd90]
 {created=datetime.datetime(2013, 8, 28, 21, 50, 5, 208003),
 updated=datetime.datetime(2013, 8, 28, 21, 50, 5, 208007),
 id=u'd6fadb7a-367b-41ed-989c-af40af2d3e3d', name=u'master', flavor_id=u'3',
 image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE',
 u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT',
 u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0,
 volumes_size=10, volume_mount_prefix=u'/volumes/disk', *count=1*,
 cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2',
 node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'},
 savanna.db.models.NodeGroup[object at 10d8bd950]
 {created=datetime.datetime(2013, 8, 28, 21, 50, 5, 210962),
 updated=datetime.datetime(2013, 8, 28, 22, 5, 1, 728402),
 id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', name=u'slave', flavor_id=u'3',
 image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT',
 u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'],
 node_configs={}, volumes_per_node=0, volumes_size=10,
 volume_mount_prefix=u'/volumes/disk', *count=2*,
 cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2',
 node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'},
 savanna.db.models.NodeGroup[object at 10d897f90]
 {created=datetime.datetime(2013, 8, 28, 22, 4, 59, 871379),
 updated=datetime.datetime(2013, 8, 28, 22, 4, 59, 871388),
 id=u'880e1b17-f4e4-456d-8421-31bf8ef1fb65', name=u'slave2', flavor_id=u'1',
 image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT',
 u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'],
 node_configs={}, volumes_per_node=0, volumes_size=10,
 volume_mount_prefix=u'/volumes/disk', *count=1*,
 cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2',
 node_group_template_id=u'd67da924-792b-4558-a5cb-cb97bba4107f'}]

   So it appears that the cluster is already configured with the three node
 groups (two original, one new) and the associated counts.

 - The list of instances.  However, whereas the master branch was passing
 me two instances (one instance representing the addition to the existing
 group, one representing the new instance associated with the added node
 group), in the 0.2 branch I am only seeing one instance being passed (the
 one instance being added to the existing node group):

 (Pdb) p instances
 [savanna.db.models.Instance[object at 10d8bf050]
 {created=datetime.datetime(2013, 8, 28, 22, 5, 1, 725343),
 updated=datetime.datetime(2013, 8, 28, 22, 5, 47, 286665), extra=None,
 node_group_id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e',
 instance_id=u'377694a2-a589-479b-860f-f1541d249624',
 instance_name=u'scale-slave-002', internal_ip=u'192.168.32.4',
 

Re: [openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-29 Thread Alan Kavanagh
+1

I believe the important point here is to identify the additional metrics
required and the relevant attributes which can be specified, and have them
returned to the Collector. The collector can then in turn push/pull those
metrics into an Analytics Engine and tools. It's not a good idea to start
designing and building analytics engines etc. into Ceilometer; it should
remain the monitoring and collection project.

As for the metrics, Ceilometer is definitely the place to define and collect
the metrics you need, both for the hardware functions (CPU/memory/disk-I/O/NIC
utilisation, etc.) and for some base application sets, for example the number
of TCP sessions in use on an Apache server.

BR
Alan


-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: August-29-13 4:20 AM
To: Gordon Chung
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer] what's in scope of Ceilometer

On Thu, Aug 29 2013, Gordon Chung wrote:

 the first question is, Ceilometer currently does 
 metering/alarming/maybe a few other things... will it go beyond that? 
 specifically: capacity planning, optimization, dashboard(i assume this 
 falls under horizon/ceilometer plugin work), analytics.
 they're pretty broad items so i would think they would probably end up 
 being separate projects?

I think we can extend Ceilometer API to help build such tools, but I don't 
think we should build these tools inside Ceilometer.

 another question is what metrics will we capture.  some of the product 
 teams we have collect metrics on datacenter memory/cpu utilization, 
 cluster cpu/memory/vm, and a bunch of other clustered stuff.
 i'm a nova-idiot, but is this info possible to retrieve? is the 
 consensus that Ceilometer will collect anything and everything the 
 other projects allow for?

Yeah, I think that Ceilometer's the place to collect anything. I don't know
whether the metrics you are talking about are collectable through Nova, though.

--
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-29 Thread Sandy Walsh


On 08/29/2013 05:20 AM, Julien Danjou wrote:
 On Thu, Aug 29 2013, Gordon Chung wrote:
 
 the first question is, Ceilometer currently does metering/alarming/maybe a 
 few other things... will it go beyond that? specifically: capacity 
 planning, optimization, dashboard(i assume this falls under 
 horizon/ceilometer plugin work), analytics. 
 they're pretty broad items so i would think they would probably end up 
 being separate projects?
 
 I think we can extend Ceilometer API to help build such tools, but I
 don't think we should build these tools inside Ceilometer.

sniff sniff I'm so happy to hear that I could cry.

+1000


 another question is what metrics will we capture.  some of the product 
 teams we have collect metrics on datacenter memory/cpu utilization, 
 cluster cpu/memory/vm, and a bunch of other clustered stuff.
 i'm a nova-idiot, but is this info possible to retrieve? is the consensus 
 that Ceilometer will collect anything and everything the other projects 
 allow for?
 
 Yeah, I think that Ceilometer's the place to collect anything. I don't
 know whether the metrics you are talking about are collectable through Nova,
 though.

Some of this data is collected from the vm and sent to the scheduler,
but not exposed in the normal nova notifications (iirc).

Internally we've been playing with Diamond for extracting these metrics
from our compute nodes. A Diamond -> CM bridge would be awesome.

We're really pushing hard for the other openstack projects to make more
use of the oslo notification framework and it's been well adopted so
far. For some high volume services, like Swift, the notification system
doesn't make sense, so there are efforts around things like Meniscus and
logstash -> notifications (or similar).

Hope it helps!
-S



 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ./run_tests.sh fails with db type could not be determined error

2013-08-29 Thread Sumanth Suresh Nagadavalli
Hi,

Since sometime today, I am no longer able to run ./run_tests.sh. It gives
the following error:

*Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit
 '`*
*db type could not be determined*
*error: testr failed (3)*
*
*
*Ran 0 tests in 6.989s*

I tried recreating my virtual environment and running tests with/without site
packages, but the result was the same.

When I run the tests with the -d (debug) option, I see the error below.

*ERROR: unittest.loader.ModuleImportFailure.virt.xenapi.test_xenapi*
*--*
*ImportError: Failed to import test module: virt.xenapi.test_xenapi*
*Traceback (most recent call last):*
*  File
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py,
line 252, in _find_tests*
*module = self._get_module_from_name(name)*
*  File
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py,
line 230, in _get_module_from_name*
*__import__(name)*
*  File /Users/sumansn/Work/nova/nova/tests/virt/xenapi/test_xenapi.py,
line 43, in module*
*from nova import test*
*  File nova/test.py, line 52, in module*
*from nova.tests import conf_fixture*
*  File nova/tests/__init__.py, line 39, in module*
*% os.environ.get('EVENTLET_NO_GREENDNS'))*
*ImportError: eventlet imported before nova/cmd/__init__ (env var set to
None)*
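
For context, the guard that produces this ImportError looks roughly like the
following; this is a paraphrase reconstructed from the traceback above, not
the exact nova/tests/__init__.py source:

import os
import sys

# nova/cmd/__init__ is expected to set EVENTLET_NO_GREENDNS before eventlet is
# imported anywhere; the test package refuses to continue if that ordering was
# violated. (Paraphrased from the traceback, not the exact nova source.)
if 'eventlet' in sys.modules and not os.environ.get('EVENTLET_NO_GREENDNS'):
    raise ImportError('eventlet imported before nova/cmd/__init__ '
                      '(env var set to %s)'
                      % os.environ.get('EVENTLET_NO_GREENDNS'))

In other words, something in the test environment is importing eventlet before
nova.cmd gets a chance to set that variable, which is the first thing worth
tracking down.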


My Python version is 2.7.3 and I am using Mac OS X 10.7.5.

Has anything changed recently that I have to account for? Any help would be
appreciated.

Thanks
-- 
Sumanth N S
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-29 Thread Ben Nemec

On 2013-08-29 00:28, Christopher Yeoh wrote:

On Wed, 28 Aug 2013 09:58:48 -0400
Joe Gordon joe.gord...@gmail.com wrote:


On a related note, I really like when the developer adds a gerrit
comment saying why the new revision was pushed; that makes my life as a
reviewer easier.


+1 - I try to remember to do this and from a reviewer point of view 
this

is especially useful when there has been a rebase involved.


+1 from me too.  In fact, this might be a useful tip to add to the 
Gerrit workflow wiki page.  Would also be a good place to mention that 
patch set-specific comments don't belong in the commit message.  If 
there are no objections I'll try to remember to add something in the 
near future.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cells - Neutron Service

2013-08-29 Thread Addepalli Srini-B22160
Hi,

While developing some neutron extensions, one question came up on Cells.
Appreciate any comments.

According to the table below from the operations guide, a cell shares nova-api
and keystone, but the table does not mention other services.

I understand from a few people that the Neutron service needs to be shared
across cells if virtual networks are to be extended to multiple cells.
Otherwise, the neutron service can be dedicated to each cell.

I guess anybody developing neutron-related extensions needs to take care of
both scenarios.

Is that understanding correct?

Also, which deployments are more common: shared Neutron or a dedicated Neutron
per cell?

Thanks
Srini



Cells
  Use when you need: A single API endpoint
  (http://docs.openstack.org/trunk/openstack-ops/content/scaling.html) for
  compute, or you require a second level of scheduling.
  Example: A cloud with multiple sites where you can schedule VMs anywhere or
  on a particular site.
  Overhead: A new service, nova-cells; each cell has a full nova installation
  except nova-api.
  Shared services: Keystone, nova-api.

Regions
  Use when you need: Discrete regions with separate API endpoints and no
  coordination between regions.
  Example: A cloud with multiple sites, where you schedule VMs to a particular
  site and you want a shared infrastructure.
  Overhead: A different API endpoint for every region; each region has a full
  nova installation.
  Shared services: Keystone.

Availability Zones
  Use when you need: Logical separation within your nova deployment for
  physical isolation or redundancy.
  Example: A single-site cloud with equipment fed by separate power supplies.
  Overhead: Configuration changes to nova.conf.
  Shared services: Keystone, all nova services.

Host Aggregates
  Use when you need: To schedule a group of hosts with common features.
  Example: Scheduling to hosts with trusted hardware support.
  Overhead: Configuration changes to nova.conf.
  Shared services: Keystone, all nova services.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] getting middleware connected again

2013-08-29 Thread Clay Gerrard
add-apt-repository -y ppa:swift-core/release

^ is that a thing?

How sure are you that you're running 1.9.2.6.g3b48a71 ?

https://launchpad.net/~swift-core/+archive/release

Try:

python -c 'import swift; print swift.__version__'
python -c 'import swift.common.middleware.catch_errors; print "SUCCESS"'
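
If both of those pass, a quick way to see whether the egg:swift#catch_errors
entry point that paste.deploy resolves is actually registered; the group and
name here assume Swift's usual paste.filter_factory registration, so adjust if
needed:

from __future__ import print_function

import pkg_resources

# Roughly what paste.deploy does when it resolves "egg:swift#catch_errors";
# the group and name are assumptions based on Swift's usual entry point
# registration.
ep = pkg_resources.get_entry_info('swift', 'paste.filter_factory',
                                  'catch_errors')
print(ep)             # None would mean the installed swift egg lacks the entry point
if ep is not None:
    print(ep.load())  # this is the step that raises the ImportError in the paste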



On Thu, Aug 29, 2013 at 6:52 AM, Snider, Tim tim.sni...@netapp.com wrote:

 I’m having problems getting Swift / Python to find and load middleware
for the proxy-server. As I remove entries from the pipeline line, the next
entry gets an error. So something isn't set up correctly anymore. Looking
for suggestions on what needs to be done to get Swift and Python to play
nice with each other again.

 Theortically I’m at  version 1.9.2.6.g3b48a71.

 End messages from the start command:

 File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 1989, in
load

 entry = __import__(self.module_name, globals(),globals(),
['__name__'])

 ImportError: No module named middleware.catch_errors

 http://paste.openstack.org/show/45372/

 Thanks,

 Tim




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-29 Thread Russell Bryant
On 08/29/2013 04:25 AM, Robert Collins wrote:
 On 29 August 2013 17:33, Christopher Yeoh cbky...@gmail.com wrote:
 On Wed, 28 Aug 2013 15:56:33 +
 Joshua Harlow harlo...@yahoo-inc.com wrote:

 Shrinking that rotation granularity would be reasonable too. Rotate
 once every 2 weeks or some other time period still seems useful to me.


 I wonder if the quality of reviewing would drop if someone was doing it
 all day long though. IIRC the link that Robert pointed to in another
 thread seemed to indicate that the ability for someone to pick up bugs
 reduces significantly if they are doing code reviews continuously.
 
 Right, it did - 30m or something from memory; so we have an upper
 bound on reviews in a day - review, rest (e.g. hack), review, ...

And I have certainly felt this in my experience lately.  I've been
wanting to help the review load so much that I've spent _a lot_ of time
on reviews.  I noticed that the quality of my reviews dropped as I did
more and more of them.  I need to hack more.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Nova hypervisor: Docker

2013-08-29 Thread Sam Alba
Hi Jaume,

On Thu, Aug 29, 2013 at 5:11 AM, Jaume Devesa devv...@gmail.com wrote:
 Hi Sam,

 that's a great work and it will be for sure my default driver for my
 development environment.

Thanks!

 I have a question: once the review will be approved and the code merged into
 master, do you plan to create a driver nova subteam as Xen, HyperV and
 others do? I would be glad to cooperate on it.

Yes, being merged is step 0, but there is much more to do after
that. Docker is moving quickly and I aim to improve this driver over
the coming releases. Some other people at dotCloud will be involved in
this work, and obviously anyone else is more than welcome to join the
effort, as happened with Brian and Dean. The work has been shipped by
the three of us so far.

 On 29 August 2013 07:54, Sam Alba sam.a...@gmail.com wrote:

 On Wed, Aug 28, 2013 at 9:12 AM, Sam Alba sam.a...@gmail.com wrote:
  Thanks a lot everyone for the nice feedback. I am going to work hard
  to get all those new comments addressed to be able to re-submit a new
  patchset today or tomorrow (the later).
 
  On Wed, Aug 28, 2013 at 7:02 AM, Russell Bryant rbry...@redhat.com
  wrote:
  On 08/28/2013 05:18 AM, Daniel P. Berrange wrote:
  On Wed, Aug 28, 2013 at 06:00:50PM +1000, Michael Still wrote:
  On Wed, Aug 28, 2013 at 4:18 AM, Sam Alba sam.a...@gmail.com wrote:
  Hi all,
 
  We've been working hard during the last couple of weeks with some
  people. Brian Waldon helped a lot designing the Glance integration
  and
  driver testing. Dean Troyer helped a lot on bringing Docker support
  in
  Devstack[1]. On top of that, we got several feedback on the Nova
  code
  review which definitely helped to improve the code.
 
  The blueprint[2] explains what Docker brings to Nova and how to use
  it.
 
  I have to say that this blueprint is a fantastic example of how we
  should be writing design documents. It addressed almost all of my
  questions about the integration.
 
  Yes, Sam ( any of the other Docker guys involved) have been great at
  responding to reviewers' requests to expand their design document. The
  latest update has really helped in understanding how this driver works
  in the context of openstack from an architectural and functional POV.
 
  They've been great in responding to my requests, as well.  The biggest
  thing was that I wanted to see devstack support so that it's easily
  testable, both by developers and by CI.  They delivered.
 
  So, in general, I'm good with this going in.  It's just a matter of
  getting the code review completed in the next week before feature
  freeze.  I'm going to try to help with it this week.
 

 If someone wants to take another look at
 https://review.openstack.org/#/c/32960/, we answered/fixed all
 previous comments.


 --
 @sam_alba

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Frustrations with review wait times

2013-08-29 Thread Matt Dietz
To add to this, the majority of my reviews come out of the period in the
morning before my team's daily standup. I've found that sufficient for
getting some reviews in, and conversely, it fights off the tremendous burnout
I used to get when we had review days back at the beginning of the project.

-Original Message-
From: Robert Collins robe...@robertcollins.net
Reply-To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date: Thursday, August 29, 2013 3:25 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Frustrations with review wait times

On 29 August 2013 17:33, Christopher Yeoh cbky...@gmail.com wrote:
 On Wed, 28 Aug 2013 15:56:33 +
 Joshua Harlow harlo...@yahoo-inc.com wrote:

 Shrinking that rotation granularity would be reasonable too. Rotate
 once every 2 weeks or some other time period still seems useful to me.


 I wonder if the quality of reviewing would drop if someone was doing it
 all day long though. IIRC the link that Robert pointed to in another
 thread seemed to indicate that the ability for someone to pick up bugs
 reduces significantly if they are doing code reviews continuously.

Right, it did - 30m or something from memory; so we have an upper
bound on reviews in a day - review, rest (e.g. hack), review, ...

-Rob

Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread Ronen Kat
Hi Murali,

I think the idea to provide enhanced data protection in OpenStack is a
great one, and I have been thinking about backup in OpenStack for a while
now.
I'm just not sure a new project is the only way to do it.

(as disclosure, I contributed code to enable IBM TSM as a Cinder backup
driver)

I wonder what the added value of a new-project approach is versus enhancements
to the current Nova and Cinder implementations of backup. Let me elaborate.

Nova has a nova backup feature that performs a backup of a VM to Glance;
the backup is managed by tenants in the same way that you propose.
While today it provides only point-in-time full backups, it seems reasonable
that it could be extended to support incremental and consistent backups as
well, as the actual work is done by either the storage or the hypervisor in
any case.

Cinder has a cinder backup command that performs a volume backup to Swift,
Ceph or TSM. The Ceph implementation also supports incremental backup (Ceph
to Ceph).
I envision that Cinder could be expanded to support incremental backup (for
persistent storage) by adding drivers/plug-ins that will leverage the
incremental backup features of either the storage or the hypervisors.
Independently, in Havana the ability to do consistent volume snapshots was
added for GlusterFS. I assume that this consistency support could be
generalized to support other volume drivers, and be utilized as part of the
backup code.
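
For concreteness, a rough sketch of what backing up a VM plus its volumes
looks like today with those two separate mechanisms. The client constructors,
call signatures and attribute names are assumptions based on the
python-novaclient and python-cinderclient of that era, so treat this as
illustrative rather than authoritative:

from novaclient.v1_1 import client as nova_client
from cinderclient.v1 import client as cinder_client

nova = nova_client.Client('user', 'secret', 'demo',
                          'http://keystone:5000/v2.0')
cinder = cinder_client.Client('user', 'secret', 'demo',
                              'http://keystone:5000/v2.0')

server = nova.servers.find(name='my-vm')

# 1. Snapshot the instance itself to Glance via nova backup
#    (name, backup type, rotation count).
nova.servers.backup(server, 'my-vm-weekly', 'weekly', 4)

# 2. Back up each attached volume separately through Cinder; the volumeId
#    attribute is assumed from the volume attachments extension.
for attachment in nova.volumes.get_server_volumes(server.id):
    cinder.backups.create(attachment.volumeId, name='my-vm-weekly')

Nothing here ties the Glance snapshot and the Cinder backups together into one
restorable unit, which is the coordination gap the Raksha thread is discussing.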

Looking at the key features in Raksha, it seems that the main features
(2, 3, 4, 7) could be addressed by improving the current mechanisms in Nova
and Cinder. I didn't include 1 as a feature, as it is more a statement of
intent (or a goal) than a feature.
Features 5 (dedup) and 6 (scheduler) are indeed new in your proposal.

Looking at the source de-duplication feature, and taking Swift as an
example, it seems reasonable that if Swift implements de-duplication,
then backing up to Swift will give us de-duplication for free.
In fact it would make sense to do the de-duplication at the Swift level
instead of just the backup layer, to gain more de-duplication opportunities.

Following the above, and assuming it all comes true (at times I am known to
be an optimist), we are left with backup job scheduling, and I wonder if
that is enough for a new project.

My question is, would it make sense to add to the current mechanisms in
Nova and Cinder rather than add the complexity of a new project?

Thanks,

Regards,
__
Ronen I. Kat
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com

From:   Murali Balcha murali.bal...@triliodata.com
To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org,
openst...@list.openstack.org openst...@list.openstack.org,
Date:   29/08/2013 01:18 AM
Subject:[openstack-dev] Proposal for Raksha, a Data Protection As a
Service project



Hello Stackers,
We would like to introduce a new project, Raksha, a Data Protection as a
Service (DPaaS) for the OpenStack cloud.
Raksha’s primary goal is to provide comprehensive data protection for
OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has the
following key features:
  1. Provide enterprise-grade data protection for OpenStack-based clouds
  2. Tenant-administered backups and restores
  3. Application-consistent backups
  4. Point-in-time (PiT) full and incremental backups and restores
  5. Dedupe at source for efficient backups
  6. A job scheduler for periodic backups
  7. A noninvasive backup solution that does not require service interruption
     during the backup window

You will find the rationale behind the need for Raksha in OpenStack in its
wiki. The wiki also has the preliminary design and the API description.
Some of the Raksha functionality may overlap with the Nova and Cinder
projects, and as a community let's work together to coordinate the features
among these projects. We would like to seek out early feedback so we can
address as many issues as we can in the first code drop. We are hoping to
enlist the OpenStack community's help in making Raksha a part of OpenStack.
Raksha’s project resources:
Wiki: https://wiki.openstack.org/wiki/Raksha
Launchpad: https://launchpad.net/raksha
Github: https://github.com/DPaaS-Raksha/Raksha (We will upload prototype
code in a few days)
If you want to talk to us, send an email to
openstack-...@lists.launchpad.net with [raksha] in the subject or use the
#openstack-raksha IRC channel.

Best Regards,
Murali Balcha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes August 29

2013-08-29 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-08-29-18.07.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-08-29-18.07.txt
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-08-29-18.07.log.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread Caitlin Bestler

On 8/28/2013 3:12 PM, Murali Balcha wrote:

Hello Stackers,

We would like to introduce a new project Raksha, a Data Protection As a
Service (DPaaS) for OpenStack Cloud.

Raksha’s primary goal is to provide a comprehensive Data Protection for
OpenStack by leveraging Nova, Swift, Glance and Cinder. Raksha has
following key features:

1.Provide an enterprise grade data protection for OpenStack based clouds

2.Tenant administered backups and restores

3.Application consistent backups

4.Point In Time(PiT) full and incremental backups and restores

5.Dedupe at source for efficient backups

6.A job scheduler for periodic backups

7.Noninvasive backup solution that does not require service interruption
during backup window



These are all features that should be provided by Cinder and Swift
backends. Attempting to provide these services outside of the actual
storage services will never be as efficient as they can be when
integrated with the actual storage service.


Even worse, defining this as a separate service duplicates work already
done by Swift for objects, duplicates work done by Swift plug-in
replacements such as Ceph and the Nexenta Object Store, and would pre-empt
many features of Cinder backends.

In particular, any Cinder Volume Manager that supports snapshots in an
efficient way already has a backup solution that does not require 
service interruption during a backup window.


In my opinion working with the storage vendors on these features within
the context of the Swift API and Cinder would be more useful than 
starting a new project.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Horizon - Mockup tool

2013-08-29 Thread Endre Karlson
Does anyone know what tool is used to do mockups?

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Multi-engine design feedback requested

2013-08-29 Thread Jason Dunsmore
Heat devs,

Liang pointed out a race-condition in the current multi-engine
implementation that will be difficult to fix without a DB lock.  I've
discussed the multi-engine design with my teammates and written up a few
alternative designs here:
https://etherpad.openstack.org/vJKcZcQOU9
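
For readers not following the review: the kind of DB lock under discussion is
a compare-and-swap on a stack row, along the lines of the sketch below. The
table and column names are made up for illustration, not Heat's actual schema,
and it is written against the SQLAlchemy of that era (which autocommits DML on
a plain connection):

import uuid

from sqlalchemy import Column, MetaData, String, Table, create_engine

metadata = MetaData()
stack = Table('stack', metadata,
              Column('id', String(36), primary_key=True),
              Column('engine_id', String(36), nullable=True))

db = create_engine('sqlite://')        # in-memory stand-in for the real DB
metadata.create_all(db)
conn = db.connect()

stack_id = str(uuid.uuid4())
conn.execute(stack.insert().values(id=stack_id, engine_id=None))


def try_lock(conn, stack_id, engine_id):
    # Atomic UPDATE ... WHERE engine_id IS NULL: only one engine's UPDATE can
    # match the row, so rowcount tells us whether we won the lock.
    result = conn.execute(
        stack.update()
             .where(stack.c.id == stack_id)
             .where(stack.c.engine_id.is_(None))
             .values(engine_id=engine_id))
    return result.rowcount == 1


print(try_lock(conn, stack_id, 'engine-A'))   # True: first engine wins
print(try_lock(conn, stack_id, 'engine-B'))   # False: second must back off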

Every design has its own downsides, so I was hoping to get some feedback
from the core devs as to which one is preferable.

Feel free to add comments in-line.  Please don't click Clear Authorship
Colors ;)

Thanks,
Jason

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About multihost patch review

2013-08-29 Thread Baldwin, Carl (HPCS Neutron)
On 8/28/13 11:28 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Aug 26, 2013, at 6:14 PM, Maru Newby ma...@redhat.com wrote:

 
 On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
 Hi Developers,
 
 Let me explain my point of view on this topic and please share your
thoughts in order to merge this new feature ASAP.
 
 My understanding is that multi-host is nova-network HA  and we are
implementing this bp
https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for
the same reason.
 So, If in neutron configuration admin enables multi-host:
 etc/dhcp_agent.ini
 
 # Support multi host networks
 # enable_multihost = False
 
 Why do tenants need to be aware of this? They should just create
networks in the way they normally do and not by adding the multihost
extension.
 
 I was pretty confused until I looked at the nova-network HA doc [1].
The proposed design would seem to emulate nova-network's multi-host HA
option, where it was necessary to both run nova-network on every compute
node and create a network explicitly as multi-host.  I'm not sure why
nova-network was implemented in this way, since it would appear that
multi-host is basically all-or-nothing.  Once nova-network services are
running on every compute node, what does it mean to create a network
that is not multi-host?

Just to add a little background to the nova-network multi-host: The fact
that the multi_host flag is stored per-network as opposed to a
configuration was an implementation detail. While in theory this would
support configurations where some networks are multi_host and other ones
are not, I am not aware of any deployments where both are used together.

That said, If there is potential value in offering both, it seems like it
should be under the control of the deployer not the user. In other words
the deployer should be able to set the default network type and enforce
whether setting the type is exposed to the user at all.

+1 for leaving it to the deployer and not the user.


Also, one final point. In my mind, multi-host is strictly better than
single host, if I were to redesign nova-network today, I would get rid of
the single host mode completely.

+1 again.

Vish

 
 So, to Edgar's question - is there a reason other than 'be like
nova-network' for requiring neutron multi-host to be configured
per-network?
 
 
 m.
 
 1: 
http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-
ha-networking-options.html
 
 
 I could be totally wrong and crazy, so please provide some feedback.
 
 Thanks,
 
 Edgar
 
 
 From: Yongsheng Gong gong...@unitedstack.com
 Date: Monday, August 26, 2013 2:58 PM
 To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen
aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com,
Akihiro MOTOKI amot...@gmail.com, Edgar Magana
emag...@plumgrid.com, Maru Newby ma...@redhat.com, Nachi Ueno
na...@nttmcl.com, Salvatore Orlando sorla...@nicira.com, Sumit
Naiksatam sumit.naiksa...@bigswitch.com, Mark McClain
mark.mccl...@dreamhost.com, Gary Kotton gkot...@vmware.com, Robert
Kukura rkuk...@redhat.com
 Cc: OpenStack List openstack-dev@lists.openstack.org
 Subject: Re: About multihost patch review
 
 Hi,
 Edgar Magana has commented to say:
 'This is the part that for me is confusing and I will need some
clarification from the community. Do we expect to have the multi-host
feature as an extension or something that will natural work as long as
the deployment include more than one Network Node. In my opinion,
Neutron deployments with more than one Network Node by default should
call DHCP agents in all those nodes without the need to use an
extension. If the community has decided to do this by extensions, then
I am fine' at
 
https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostne
twork.py
 
 I have commented back, what is your opinion about it?
 
 Regards,
 Yong Sheng Gong
 
 
 On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery (kmestery)
kmest...@cisco.com wrote:
 Hi Yong:
 
 I'll review this and try it out today.
 
 Thanks,
 Kyle
 
 On Aug 15, 2013, at 10:01 PM, Yongsheng Gong
gong...@unitedstack.com wrote:
 
 The multihost patch is there for a long long time, can someone help
to review?
 https://review.openstack.org/#/c/37919/
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cells - Neutron Service

2013-08-29 Thread Ravi Chunduru
It's an interesting discussion you brought up today. I agree there is no
clear definition of the neutron service in that table. A cell goes by its
definition of the ability to create instances anywhere, and then there needs
to be inter-VM communication for a given network.

I feel Neutron must be a shared service in Cells. Such depth is missing in
Neutron today.

Any thoughts?

Thanks,
-Ravi.


On Thu, Aug 29, 2013 at 8:00 AM, Addepalli Srini-B22160 
b22...@freescale.com wrote:

 Hi,

 While developing some neutron extensions, one question came up on Cells.
 Appreciate any comments.

 According to the table below from the operations guide, a cell shares
 nova-api and keystone, but the table does not mention other services.

 I understand from a few people that the Neutron service needs to be shared
 across cells if virtual networks are to be extended to multiple cells.
 Otherwise, the neutron service can be dedicated to each cell.

 I guess anybody developing neutron-related extensions needs to take care of
 both scenarios.

 Is that understanding correct?

 Also, which deployments are more common: shared Neutron or a dedicated
 Neutron per cell?

 Thanks
 Srini

 Cells
   Use when you need: A single API endpoint
   (http://docs.openstack.org/trunk/openstack-ops/content/scaling.html) for
   compute, or you require a second level of scheduling.
   Example: A cloud with multiple sites where you can schedule VMs anywhere
   or on a particular site.
   Overhead: A new service, nova-cells; each cell has a full nova
   installation except nova-api.
   Shared services: Keystone, nova-api.

 Regions
   Use when you need: Discrete regions with separate API endpoints and no
   coordination between regions.
   Example: A cloud with multiple sites, where you schedule VMs to a
   particular site and you want a shared infrastructure.
   Overhead: A different API endpoint for every region; each region has a
   full nova installation.
   Shared services: Keystone.

 Availability Zones
   Use when you need: Logical separation within your nova deployment for
   physical isolation or redundancy.
   Example: A single-site cloud with equipment fed by separate power
   supplies.
   Overhead: Configuration changes to nova.conf.
   Shared services: Keystone, all nova services.

 Host Aggregates
   Use when you need: To schedule a group of hosts with common features.
   Example: Scheduling to hosts with trusted hardware support.
   Overhead: Configuration changes to nova.conf.
   Shared services: Keystone, all nova services.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread Murali Balcha


 From: Ronen Kat ronen...@il.ibm.com
 Sent: Thursday, August 29, 2013 2:55 PM
 To: openstack-dev@lists.openstack.org; openstack-...@lists.launchpad.net
 Subject: Re: [openstack-dev] Proposal for Raksha, a Data Protection As a 
 Service project

 Hi Murali,

 I think the idea to provide enhanced data protection in OpenStack is a
 great idea, and I have been thinking about  backup in OpenStack for a while
 now.
 I just not sure a new project is the only way to do.

 (as disclosure, I contributed code to enable IBM TSM as a Cinder backup
 driver)

Hi Kat,
Consider the following use cases that Raksha will address. I will go from the
simple to the complex use case and then address your specific questions with
inline comments.
1. VM1 is created on the local file system and has a cinder volume attached.
2. VM2 is booted from a cinder volume and has a couple of cinder volumes
attached.
3. VM1 and VM2 are both booted from cinder volumes and have a couple of
volumes attached. They also share a private network for internal
communication.
In all these cases Raksha will take a consistent snapshot of the VMs, walk
through each VM's resources and back up those resources to a Swift endpoint.
In case 1, that means backing up the VM image and the Cinder volume image to
Swift. Case 2 is an extension of case 1. In case 3, Raksha not only backs up
VM1 and VM2 and their associated resources, it also backs up the network
configuration.

Now let's consider the restore case. The restore operation walks through the
backed-up resources and calls into the respective OpenStack services to
restore those objects. In case 1, it first calls the Nova API to restore the
VM, then calls into Cinder to restore the volume and attach it to the newly
restored VM instance. In case 3, it also calls into the Neutron API to restore
the networking. Hence my argument is that no one OpenStack project has a
global view of a VM and all its resources with which to implement an effective
backup and restore service.
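
To make that orchestration concrete, here is a rough sketch of the
cross-service restore flow for case 3. The client constructors and call
signatures are assumptions based on the Python clients of that era, and this
is an illustration of the argument rather than Raksha code:

from novaclient.v1_1 import client as nova_client
from cinderclient.v1 import client as cinder_client
from neutronclient.v2_0 import client as neutron_client

nova = nova_client.Client('user', 'secret', 'demo',
                          'http://keystone:5000/v2.0')
cinder = cinder_client.Client('user', 'secret', 'demo',
                              'http://keystone:5000/v2.0')
neutron = neutron_client.Client(username='user', password='secret',
                                tenant_name='demo',
                                auth_url='http://keystone:5000/v2.0')

# 1. Recreate the shared private network through Neutron.
net = neutron.create_network({'network': {'name': 'restored-private'}})

# 2. Restore the data volume from its Cinder backup (returns the new volume).
restored = cinder.restores.restore('backup-id-of-data-volume')

# 3. Boot the VM from its backed-up Glance image and reattach the volume.
server = nova.servers.create('restored-vm1', 'glance-image-id', '1',
                             nics=[{'net-id': net['network']['id']}])
nova.volumes.create_server_volume(server.id, restored.volume_id, '/dev/vdb')

Three clients, three endpoints, and nothing that records how the pieces belong
together; that coordination is what needs a home.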


 I wonder what is the added-value of a project approach versus enhancements
 to the current Nova and Cinder implementations of backup. Let me elaborate.

 Nova has a nova backup feature that performs a backup of a VM to Glance,
 the backup is managed by tenants in the same way that you propose.
 While today it provides only point-in-time full backup, it seems reasonable
 that it can be extended support incremental and consistent backup as well -
 as the actual work is done either by the Storage or Hypervisor in any case.

Though Nova has an API to upload a snapshot of the VM to Glance, it does not
snapshot any volumes associated with the VM. When a snapshot is uploaded to
Glance, Nova creates an image by collapsing the qemu image with the delta file
and uploads the larger file to Glance. If we were to perform periodic backups
of VMs, this would be a very inefficient way to do backups. Also, having to
manage two endpoints, one for Nova and one for Cinder, is inefficient. These
are the gaps I called out in the Raksha wiki page.


 Cinder has a cinder backup command that performs a volume backup to Swift,
 Ceph or TSM. The Ceph implementation also support incremental backup (Ceph
 to Ceph).
 I envision that Cinder could be expanded to support incremental backup (for
 persistent storage) by adding drivers/plug-ins that will leverage
 incremental backup features of either the storage or Hypervisors.
 Independently, in Havana the ability to do consistent volume snapshots was
 added to GlusterFS. I assume that this consistency support could be
 generalized to support other volume drivers, and be utilized as part of a
 backup code.

I think we are talking about specific implementations here. Yes, I am aware of
the Ceph blueprint to support incremental backup, but the Cinder backup APIs
are volume-specific. That means if a VM has multiple volumes mapped, as in
case 2 I discussed, the tenant needs to call the backup API three times. Also,
if you look at the Swift layout of Cinder backups, it is very difficult to tie
the Swift objects back to a particular VM. Imagine a tenant were to restore a
VM and all its resources from a backup copy that was made a week ago; the
restore operation is not straightforward.
It is my understanding that consistency should be maintained at the VM level,
not at the individual volume level. It is very difficult to assume how the
application data inside the VM is laid out.

 Looking at the key features in Raksha, it seems that the main features
 (2,3,4,7) could be addressed by improving the current mechanisms in Nova
 and Cinder. I didn't included 1 as a feature as it is more a statement of
 intent (or goal) than a feature.
 Features 5 (dedup) and 6 (scheduler) are indeed new in your proposal.

 Looking at the source de-duplication feature, and taking Swift as an
 example, it seems reasonable that if Swift will implement de-duplication,
 then doing backup to Swift will give us de-duplication for free.
 In fact it would make sense to do the de-duplication 

Re: [openstack-dev] [Neutron] Resource URL support for more than two levels

2013-08-29 Thread Mark McClain
Stay tuned.  There are folks working on a proposed set of API framework 
changes.  This will be something that we'll discuss as part of deciding the 
features in the Icehouse release.

mark

On Aug 29, 2013, at 10:08 AM, Justin Hammond justin.hamm...@rackspace.com 
wrote:

 I find that kind of flexibility quite valuable for plugin developers. +1 to 
 this. I'd like to be involved if possible with helping you with it.
 
 From: balaji patnala patnala...@gmail.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Thu, 29 Aug 2013 11:02:33 +0530
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Cc: openst...@lists.openstack.org openst...@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Resource URL support for more than two 
 levels
 
 Hi, 
 
  When compared to the Nova URL implementation, it is observed that Neutron's
  URL support cannot be used for more than two levels.
  
  Applications which want to plug in may be restricted by this.
  
  We want to add support for more than two levels of URL by making the
  required changes in the core Neutron files.
  
  Any comments/interest in this?
 
 Regards,
 Balaji.P
 
 
 
 
 
 On Tue, Aug 27, 2013 at 5:04 PM, B Veera-B37207 b37...@freescale.com wrote:
 Hi,
 
  
  The current infrastructure provided in Quantum [Grizzly], while building a
  Quantum API resource URL using the base function ‘base.create_resource()’ and
  RESOURCE_ATTRIBUTE_MAP/SUB_RESOURCE_ATTRIBUTE_MAP, supports only a two-level
  URI.
 
 Example:
 
 GET  /lb/pools/pool_id/members/member_id
 
  
 
 Some applications may need more than two levels of URL support. Example: GET  
 /lb/pools/pool_id/members/member_id/xyz/xyz_id
 
  
 
  If anybody is interested in this, we want to contribute this as a blueprint
  and take it upstream.
 
  
 
 Regards,
 
 Veera.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___ OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread Murali Balcha

 My question is, would it make sense to add to the current mechanisms in
 Nova and Cinder rather than add the complexity of a new project?
 
 I think the answer is yes :)


I meant there is a clear need for the Raksha project. :)

Thanks,
Murali Balcha

On Aug 29, 2013, at 7:45 PM, Murali Balcha murali.bal...@triliodata.com 
wrote:

 
 
 From: Ronen Kat ronen...@il.ibm.com
 Sen: Thursday, August 29, 2013 2:55 PM
 To: openstack-dev@lists.openstack.org; openstack-...@lists.launchpad.net
 Subject: Re: [openstack-dev] Proposal for Raksha, a Data Protection As a 
 Service project
 
 Hi Murali,
 
  I think the idea to provide enhanced data protection in OpenStack is a
  great one, and I have been thinking about backup in OpenStack for a while
  now.
  I am just not sure a new project is the only way to do it.
 
 (as disclosure, I contributed code to enable IBM TSM as a Cinder backup
 driver)
 
 Hi Kat,
 Consider the following use cases that Raksha will address. I will discuss 
 them from simple to complex and then address your specific questions with 
 inline comments.
 1. VM1 is created on the local file system and has a cinder volume 
 attached.
 2. VM2 is booted off a cinder volume and has a couple of cinder 
 volumes attached.
 3. VM1 and VM2 are both booted from cinder volumes and have a couple of 
 volumes attached. They also share a private network for internal communication.
 4.
 In all these cases Raksha will take a consistent snapshot of the VMs, walk 
 through each VM's resources and back up those resources to the Swift endpoint. 
 In case 1, that means backing up the VM image and the Cinder volume image to Swift.
 Case 2 is an extension of case 1.
 In case 3, Raksha not only backs up VM1 and VM2 and their associated resources, 
 it also backs up the network configuration.
 
 Now let's consider the restore case. The restore operation walks through the 
 backup resources and calls into the respective OpenStack services to restore 
 those objects. In case 1, it first calls the Nova API to restore the VM, then 
 calls into Cinder to restore the volumes and attach them to the newly restored 
 VM instance. In case 3, it also calls into the Neutron API to restore the 
 networking. Hence my argument is that no single OpenStack project has a global 
 view of a VM and all its resources with which to implement an effective backup 
 and restore service.
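To make that walk concrete, below is a rough sketch of what such a restore pass could look like using the existing client libraries. The backup_record layout and the helper itself are hypothetical, since Raksha does not exist yet; the client calls are ones that python-novaclient, python-cinderclient and python-neutronclient already expose, to the best of my knowledge.

# Sketch only: 'backup_record' and this helper are hypothetical; the client
# calls are the existing python-*client APIs (Havana-era versions assumed).
from cinderclient.v1 import client as cinder_client
from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client


def restore_from_backup(backup_record, user, password, tenant, auth_url):
    nova = nova_client.Client(user, password, tenant, auth_url)
    cinder = cinder_client.Client(user, password, tenant, auth_url)
    neutron = neutron_client.Client(username=user, password=password,
                                    tenant_name=tenant, auth_url=auth_url)

    # Recreate the private network wiring first (case 3).
    for net in backup_record.get('networks', []):
        neutron.create_network({'network': {'name': net['name']}})

    # Recreate the instance from the backed-up image.
    server = nova.servers.create(backup_record['name'],
                                 backup_record['image_id'],
                                 backup_record['flavor_id'])

    # Restore each volume from its Cinder backup and re-attach it.
    for vol in backup_record.get('volumes', []):
        restore = cinder.restores.restore(vol['backup_id'])
        nova.volumes.create_server_volume(server.id, restore.volume_id,
                                          vol['device'])
    return server

The point is simply that a single restore spans three services, which is why a coordinating service (or at least a coordinating API) is being proposed.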
 
 
 I wonder what is the added-value of a project approach versus enhancements
 to the current Nova and Cinder implementations of backup. Let me elaborate.
 
  Nova has a nova backup feature that performs a backup of a VM to Glance;
  the backup is managed by tenants in the same way that you propose.
  While today it provides only point-in-time full backup, it seems reasonable
  that it can be extended to support incremental and consistent backup as well -
  as the actual work is done either by the storage or hypervisor in any case.
 
 Though Nova has an API to upload a snapshot of the VM to Glance, it does not 
 snapshot any volumes associated with the VM. When a snapshot is uploaded to 
 Glance, Nova creates an image by collapsing the qemu image with the delta file 
 and uploads the larger file to Glance. If we were to perform periodic backups 
 of VMs, this would be a very inefficient way to do backup. Also, having to manage 
 two endpoints, one for Nova and one for Cinder, is inefficient. These are the gaps I 
 called out in the Raksha wiki page.
 
 
 Cinder has a cinder backup command that performs a volume backup to Swift,
 Ceph or TSM. The Ceph implementation also supports incremental backup (Ceph
 to Ceph).
 I envision that Cinder could be expanded to support incremental backup (for
 persistent storage) by adding drivers/plug-ins that will leverage
 incremental backup features of either the storage or Hypervisors.
 Independently, in Havana the ability to do consistent volume snapshots was
 added to GlusterFS. I assume that this consistency support could be
 generalized to support other volume drivers, and be utilized as part of a
 backup code.
 
 I think we are talking about specific implementations here. Yes, I am aware of the Ceph 
 blueprint to support incremental backup, but the Cinder backup APIs are volume 
 specific. That means that if a VM has multiple volumes mapped, as in case 2 
 above, the tenant needs to call the backup API three times. Also, if you look at 
 Cinder's Swift layout, it is very difficult to tie the Swift images 
 back to a particular VM. Imagine a tenant were to restore a VM and all its 
 resources from a backup copy that was performed a week ago. The restore 
 operation is not straightforward.
 It is my understanding that consistency should be maintained at the VM level, not 
 at the individual volume level. It is very difficult to assume how the application data 
 inside the VM is laid out.
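To put numbers on the per-volume point, protecting the volumes of the VM in case 2 today looks roughly like this (a sketch; the credentials and volume IDs are placeholders and error handling is omitted):

# Sketch: a boot volume plus two data volumes means three independent
# cinder backup calls, and nothing in the result ties the three backups
# back to the VM they belong to.
from cinderclient.v1 import client as cinder_client

USER, PASSWORD, TENANT = 'demo', 'secret', 'demo'
AUTH_URL = 'http://keystone.example.com:5000/v2.0'

cinder = cinder_client.Client(USER, PASSWORD, TENANT, AUTH_URL)

for volume_id in ('boot-volume-id', 'data-volume-1-id', 'data-volume-2-id'):
    backup = cinder.backups.create(volume_id)
    print(backup.id)  # each backup stands alone in the Swift container

A per-VM backup job, as proposed for Raksha, would group these into a single record instead.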
 
 Looking at the key features in Raksha, it seems that the main features
 (2,3,4,7) could be addressed by improving the current mechanisms in Nova
 and Cinder. I didn't include 1 as a feature as it is more a statement 

Re: [openstack-dev] About multihost patch review

2013-08-29 Thread Yongsheng Gong
On Thu, Aug 29, 2013 at 1:28 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:


 On Aug 26, 2013, at 6:14 PM, Maru Newby ma...@redhat.com wrote:

 
  On Aug 26, 2013, at 4:06 PM, Edgar Magana emag...@plumgrid.com wrote:
 
  Hi Developers,
 
  Let me explain my point of view on this topic and please share your
 thoughts in order to merge this new feature ASAP.
 
  My understanding is that multi-host is nova-network HA  and we are
 implementing this bp
 https://blueprints.launchpad.net/neutron/+spec/quantum-multihost for the
 same reason.
  So, If in neutron configuration admin enables multi-host:
  etc/dhcp_agent.ini
 
  # Support multi host networks
  # enable_multihost = False
 
  Why do tenants need to be aware of this? They should just create
 networks in the way they normally do and not by adding the multihost
 extension.
 
  I was pretty confused until I looked at the nova-network HA doc [1].
  The proposed design would seem to emulate nova-network's multi-host HA
 option, where it was necessary to both run nova-network on every compute
 node and create a network explicitly as multi-host.  I'm not sure why
 nova-network was implemented in this way, since it would appear that
 multi-host is basically all-or-nothing.  Once nova-network services are
 running on every compute node, what does it mean to create a network that
 is not multi-host?

 Just to add a little background to the nova-network multi-host: The fact
 that the multi_host flag is stored per-network as opposed to a
 configuration was an implementation detail. While in theory this would
 support configurations where some networks are multi_host and other ones
 are not, I am not aware of any deployments where both are used together.

 That said, If there is potential value in offering both, it seems like it
 should be under the control of the deployer not the user. In other words
 the deployer should be able to set the default network type and enforce
 whether setting the type is exposed to the user at all.

Yes, the default is not multihost; an admin (controlled by policy) can set up a
multihost network.
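For example, if the per-network attribute stays, its exposure could be limited to admins with an etc/policy.json entry along these lines (the attribute name here is only a placeholder for whatever the extension finally uses):

"create_network:multihost": "rule:admin_only",
"update_network:multihost": "rule:admin_only"

That would keep the default behaviour unchanged for tenants while leaving it to the deployer to decide who may turn multihost on.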


 Also, one final point. In my mind, multi-host is strictly better than
  single host; if I were to redesign nova-network today, I would get rid of
 the single host mode completely.

 The problem is that the current design of Neutron is already single-host (if I get
your point). Making multihost automatic would take much more effort.

 Vish

 
  So, to Edgar's question - is there a reason other than 'be like
 nova-network' for requiring neutron multi-host to be configured per-network?
 
 
  m.
 
  1:
 http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html
 
 
  I could be totally wrong and crazy, so please provide some feedback.
 
  Thanks,
 
  Edgar
 
 
  From: Yongsheng Gong gong...@unitedstack.com
  Date: Monday, August 26, 2013 2:58 PM
  To: Kyle Mestery (kmestery) kmest...@cisco.com, Aaron Rosen 
 aro...@nicira.com, Armando Migliaccio amigliac...@vmware.com, Akihiro
 MOTOKI amot...@gmail.com, Edgar Magana emag...@plumgrid.com, Maru
 Newby ma...@redhat.com, Nachi Ueno na...@nttmcl.com, Salvatore
 Orlando sorla...@nicira.com, Sumit Naiksatam 
 sumit.naiksa...@bigswitch.com, Mark McClain mark.mccl...@dreamhost.com,
 Gary Kotton gkot...@vmware.com, Robert Kukura rkuk...@redhat.com
  Cc: OpenStack List openstack-dev@lists.openstack.org
  Subject: Re: About multihost patch review
 
  Hi,
  Edgar Magana has commented to say:
  'This is the part that for me is confusing and I will need some
 clarification from the community. Do we expect to have the multi-host
  feature as an extension or something that will naturally work as long as the
  deployment includes more than one Network Node. In my opinion, Neutron
 deployments with more than one Network Node by default should call DHCP
 agents in all those nodes without the need to use an extension. If the
 community has decided to do this by extensions, then I am fine' at
 
 https://review.openstack.org/#/c/37919/11/neutron/extensions/multihostnetwork.py
 
  I have commented back, what is your opinion about it?
 
  Regards,
  Yong Sheng Gong
 
 
  On Fri, Aug 16, 2013 at 9:28 PM, Kyle Mestery (kmestery) 
 kmest...@cisco.com wrote:
  Hi Yong:
 
  I'll review this and try it out today.
 
  Thanks,
  Kyle
 
  On Aug 15, 2013, at 10:01 PM, Yongsheng Gong gong...@unitedstack.com
 wrote:
 
   The multihost patch has been there for a long, long time; can someone help
  to review it?
  https://review.openstack.org/#/c/37919/
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Proposal for Raksha, a Data Protection As a Service project

2013-08-29 Thread John Griffith
On Thu, Aug 29, 2013 at 6:36 PM, Murali Balcha murali.bal...@triliodata.com
 wrote:


  My question is, would it make sense to add to the current mechanisms in
  Nova and Cinder rather than add the complexity of a new project?
 
  I think the answer is yes  :)


 I meant there is a clear need for the Raksha project. :)

 Thanks,
 Murali Balcha

 On Aug 29, 2013, at 7:45 PM, Murali Balcha murali.bal...@triliodata.com
 wrote:

 
  
  From: Ronen Kat ronen...@il.ibm.com
  Sen: Thursday, August 29, 2013 2:55 PM
  To: openstack-dev@lists.openstack.org;
 openstack-...@lists.launchpad.net
  Subject: Re: [openstack-dev] Proposal for Raksha, a Data Protection As
 a Service project
 
  Hi Murali,
 
  I think the idea to provide enhanced data protection in OpenStack is a
  great one, and I have been thinking about backup in OpenStack for a while
  now.
  I am just not sure a new project is the only way to do it.
 
  (as disclosure, I contributed code to enable IBM TSM as a Cinder backup
  driver)
 
  Hi Kat,
  Consider the following use cases that Raksha will address. I will
 discuss them from simple to complex and then address your specific
 questions with inline comments.
  1. VM1 is created on the local file system and has a cinder volume
 attached.
  2. VM2 is booted off a cinder volume and has a couple of cinder
 volumes attached.
  3. VM1 and VM2 are both booted from cinder volumes and have a couple of
 volumes attached. They also share a private network for internal
 communication.
  4.
  In all these cases Raksha will take a consistent snapshot of the VMs, walk
 through each VM's resources and back up those resources to the Swift endpoint.
  In case 1, that means backing up the VM image and the Cinder volume image to Swift.
  Case 2 is an extension of case 1.
  In case 3, Raksha not only backs up VM1 and VM2 and their associated
 resources, it also backs up the network configuration.
  
  Now let's consider the restore case. The restore operation walks through the
 backup resources and calls into the respective OpenStack services to restore
 those objects. In case 1, it first calls the Nova API to restore the VM, then
 calls into Cinder to restore the volumes and attach them to the newly
 restored VM instance. In case 3, it also calls into the Neutron API to
 restore the networking. Hence my argument is that no single OpenStack project
 has a global view of a VM and all its resources with which to implement an
 effective backup and restore service.
 
 
  I wonder what is the added-value of a project approach versus
 enhancements
  to the current Nova and Cinder implementations of backup. Let me
 elaborate.
 
  Nova has a nova backup feature that performs a backup of a VM to
 Glance; the backup is managed by tenants in the same way that you propose.
  While today it provides only point-in-time full backup, it seems
 reasonable that it can be extended to support incremental and consistent
 backup as well - as the actual work is done either by the storage or
 hypervisor in any case.
 
  Though Nova has an API to upload a snapshot of the VM to Glance, it does
 not snapshot any volumes associated with the VM. When a snapshot is
 uploaded to Glance, Nova creates an image by collapsing the qemu image with
 the delta file and uploads the larger file to Glance. If we were to perform
 periodic backups of VMs, this would be a very inefficient way to do backup.
 Also, having to manage two endpoints, one for Nova and one for Cinder, is
 inefficient. These are the gaps I called out in the Raksha wiki page.
 
 
  Cinder has a cinder backup command that performs a volume backup to
 Swift,
  Ceph or TSM. The Ceph implementation also supports incremental backup
 (Ceph
  to Ceph).
  I envision that Cinder could be expanded to support incremental backup
 (for
  persistent storage) by adding drivers/plug-ins that will leverage
  incremental backup features of either the storage or Hypervisors.
  Independently, in Havana the ability to do consistent volume snapshots
 was
  added to GlusterFS. I assume that this consistency support could be
  generalized to support other volume drivers, and be utilized as part
 of a
  backup code.
 
  I think we are talking about specific implementations here. Yes, I am aware of
 the Ceph blueprint to support incremental backup, but the Cinder backup APIs are
 volume specific. That means that if a VM has multiple volumes mapped, as in
 case 2 above, the tenant needs to call the backup API three times. Also, if you
 look at Cinder's Swift layout, it is very difficult to tie the
 Swift images back to a particular VM. Imagine a tenant were to restore a VM
 and all its resources from a backup copy that was performed a week ago. The
 restore operation is not straightforward.
  It is my understanding that consistency should be maintained at the VM level,
 not at the individual volume level. It is very difficult to assume how the
 application data inside the VM is laid out.
 
  Looking at the key features in Raksha, it seems that the main features
  (2,3,4,7) could be addressed 

Re: [openstack-dev] [Neutron] Resource URL support for more than two levels

2013-08-29 Thread P Balaji-B37839
Thanks, Justin.

Sure, we will take your help as it is needed on this. We will prepare a 
blueprint capturing all the details and will add you as a reviewer.

Regards,
Balaji.P

From: Justin Hammond [mailto:justin.hamm...@rackspace.com]
Sent: Thursday, August 29, 2013 7:39 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Resource URL support for more than two 
levels

I find that kind of flexibility quite valuable for plugin developers. +1 to 
this. I'd like to be involved, if possible, in helping you with it.

From: balaji patnala patnala...@gmail.commailto:patnala...@gmail.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thu, 29 Aug 2013 11:02:33 +0530
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Cc: openst...@lists.openstack.orgmailto:openst...@lists.openstack.org 
openst...@lists.openstack.orgmailto:openst...@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Resource URL support for more than two 
levels

Hi,

Compared to the Nova URL implementation, the Neutron URL support cannot be used 
for more than TWO levels.

Applications that want to plug in may be restricted by this.

We want to add the changes required in the core Neutron files to support more 
than TWO levels of URL.

Any comments or interest in this?

Regards,
Balaji.P




On Tue, Aug 27, 2013 at 5:04 PM, B Veera-B37207 
b37...@freescale.commailto:b37...@freescale.com wrote:
Hi,

The current infrastructure provided in Quantum [Grizzly], while building a 
Quantum API resource URL using the base function 'base.create_resource()' and 
RESOURCE_ATTRIBUTE_MAP/SUB_RESOURCE_ATTRIBUTE_MAP, supports only two-level URIs.
Example:
GET  /lb/pools/pool_id/members/member_id

Some applications may need more than two levels of URL support. Example: GET  
/lb/pools/pool_id/members/member_id/xyz/xyz_id

If anybody is interested in this, we want to contribute it as a BP and make 
it upstream.

Regards,
Veera.
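For illustration, the two-level pattern and the kind of third level being asked for look roughly like the sketch below. This is from memory rather than a quote of the LBaaS source; the map mirrors the /lb/pools/.../members example above, and the 'grandparent' key is purely hypothetical, standing in for whatever the core files would actually need to learn.

# Two-level mapping as supported today (sketch only, attributes abridged):
SUB_RESOURCE_ATTRIBUTE_MAP = {
    'members': {
        'parent': {'collection_name': 'pools',
                   'member_name': 'pool'},
        'parameters': {'address': {'allow_post': True, 'allow_put': False,
                                   'is_visible': True}},
    },
}

# Hypothetical third level, i.e. the 'xyz' collection from the example
# above. The 'grandparent' key does not exist today; it only shows the sort
# of information the core files would need in order to build routes like
# GET /lb/pools/{pool_id}/members/{member_id}/xyz/{xyz_id}:
THREE_LEVEL_SUB_RESOURCE_MAP = {
    'xyz': {
        'parent': {'collection_name': 'members',
                   'member_name': 'member'},
        'grandparent': {'collection_name': 'pools',
                        'member_name': 'pool'},
        'parameters': {'name': {'allow_post': True, 'allow_put': True,
                                'is_visible': True}},
    },
}

Whatever shape the blueprint finally takes, the core change would be teaching base.create_resource() and the route building to consume more than one level of parent information.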

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon - Mockup tool

2013-08-29 Thread Sergey Lukjanov
I'm using balsamiq to make mockups.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
(sent from my phone)
On 30.08.2013 at 0:37, Endre Karlson endre.karl...@gmail.com
wrote:

 Does anyone know what tool is used to do mockups?

 Endre

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-29 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-08-27 14:25:47 -0700:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
 - Derek is reviewing fairly regularly and has got a sense of the
 culture etc now, I think.
 
 So - calling for votes for Derek to become a TripleO core reviewer!

+1

 
 I think we're nearly at the point where we can switch to the 'two
 +2's' model - what do you think?
 

I am reluctant to go there just yet. We have a couple of absentee
members of core at the moment and thus we really only have 4 active
reviewers. That is n+1, but I worry about reviewers getting fatigued
or feeling pressure to get to more reviews rather than feeling pressure
to do a few really thorough reviews.

So I think we should let derekh's presence on core be felt for a few
weeks and re-visit the +4 requirement then.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heat mission statement

2013-08-29 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-08-27 14:13:37 -0700:
 On 28 August 2013 06:54, Steven Hardy sha...@redhat.com wrote:
  We had some recent discussions regarding the Heat mission statement and
  came up with:
 
  To explicitly model the relationships between OpenStack resources of all
  kinds; and to harness those models, expressed in forms accessible to both
  humans and machines, to manage infrastructure resources throughout the
  lifecycle of applications.
 
 Bingo!
 
  The ideas, iterations and some discussion is captured in this etherpad:
 
  https://etherpad.openstack.org/heat-mission
 
  If anyone has any remaining comments, please speak now, but I think most of
  those involved in the discussion thus-far have reached the point of wishing
  to declare it final ;)
 
 I think there is some confusion about implementation vs intent here
 :). Or at least I hope so. I wouldn't expect Nova's mission statement
 to talk about 'modelling virtual machines' : modelling is internal
 jargon, not a mission!
 
 What you want, IMO, is for a moderately technical sysadmin to read the
 mission statement and go 'hell yeahs, I want to use Heat'.
 
 Create a human and machine accessible service for managing the entire
 lifecycle of infrastructure and applications within OpenStack clouds.
 

Reading the two next to each other, this evoked The Emperor's New Groove
for me.

Why do we even have that lever?

Robert, this encapsulates what I think of Heat perfectly. +1 from me.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-29 Thread Clint Byrum
(sent 2 days ago, but re-sending this as it never seemed to have arrived
on the mailing list)

Excerpts from Robert Collins's message of 2013-08-27 14:25:47 -0700:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
 - Derek is reviewing fairly regularly and has got a sense of the
 culture etc now, I think.
 
 So - calling for votes for Derek to become a TripleO core reviewer!

+1

 
 I think we're nearly at the point where we can switch to the 'two
 +2's' model - what do you think?
 

I am reluctant to go there just yet. We have a couple of absentee
members of core at the moment and thus we really only have 4 active
reviewers. That is n+1, but I worry about reviewers getting fatigued
or feeling pressure to get to more reviews rather than feeling pressure
to do a few really thorough reviews.

So I think we should let derekh's presence on core be felt for a few
weeks and re-visit the +4 requirement then.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Service Relationships and Dependencies

2013-08-29 Thread Clint Byrum
Excerpts from John Speidel's message of 2013-08-27 09:29:18 -0700:
 Some services/components are related or have dependencies on other 
 services and components. As an example, in HDP, the Hive service depends 
 on HBase and Zookeeper. In Savanna, there is no way to express this 
 relationship. If a user wanted to deploy Hive, they would need to know to 
 install both HBase and Zookeeper a priori. Also, because the list of 
 service components (node processes) that is provided to a user to be used 
 in node groups is a flat list, only the component name gives any 
 indication as to what service the components belong to. Because of this, 
 it will likely be difficult for the user to understand exactly what 
 components are required to be installed for a given 
 service(s). Currently, the HDP stack consists of approximately 25 service 
 components.
 
 A primary reason that it isn't currently possible to express 
 service/component relationships is that topology is defined from the 
 bottom up. This means that a user first selects components and assigns 
 them to a node template. The user's first interaction is with components, 
 not services. Currently, the user will not know if a given topology is 
 valid until an attempt is made to deploy a cluster and validate is 
 called on the plugin. At this point, if the topology were invalid, the 
 user would need to go back and create new node and cluster templates.
 
 One way to express service relationships would be to define topology top 
 down, with the user first selecting services. After selecting services, 
 the related service components could be listed and the required 
 components could be noted. This approach is a significant change to how 
 Savanna currently works, has not been thoroughly thought through, and 
 is only meant to promote conversation on the matter.
 
 After making new services available from the HDP plugin, it is clear 
 that defining a desired (valid) topology will be very difficult and 
 error prone with the current Savanna architecture. I look forward to 
 discussing solutions to this matter with the community.
 
 

I understand that Savanna is laser-focused on Hadoop; however, it
seems really odd that Savanna would have its own way to define service
dependencies and deployments.

Heat is specifically meant to aid users in deploying multi-node services
with complicated dependency graphs. While I do agree this would be a
departure from the way Savanna works, it would also be in-line with what
Trove is doing for the same reasons.

So, as you move toward a service dependency graph, please consider using
Heat to orchestrate nodes, and enhancing it where Savanna needs it to
work better, rather than pouring more into a Savanna-only solution. I
think in the end it will result in a simpler solution for Savanna, and
a better Heat as well.
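To make that concrete, the ordering John describes (Hive needing HBase and Zookeeper) is already expressible in a CFN-style Heat template. A minimal sketch, with invented image and flavor names and a simple Zookeeper, then HBase, then Hive chain standing in for the fuller graph, might look like:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Sketch: Hadoop nodes brought up in dependency order",
  "Resources": {
    "ZookeeperNode": {
      "Type": "AWS::EC2::Instance",
      "Properties": {"ImageId": "hdp-image", "InstanceType": "m1.medium"}
    },
    "HBaseNode": {
      "Type": "AWS::EC2::Instance",
      "DependsOn": "ZookeeperNode",
      "Properties": {"ImageId": "hdp-image", "InstanceType": "m1.medium"}
    },
    "HiveNode": {
      "Type": "AWS::EC2::Instance",
      "DependsOn": "HBaseNode",
      "Properties": {"ImageId": "hdp-image", "InstanceType": "m1.medium"}
    }
  }
}

Heat then brings the nodes up in that order, and Savanna could stay focused on generating the template and the Hadoop-specific configuration.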

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev