Re: [openstack-dev] [nova] [savanna] Host information for non admin users

2013-09-13 Thread Alexander Kuznetsov
Thanks for your comments; let me explain a bit more about Hadoop topology.

Hadoop 1.2 introduced a 4-level topology: the whole network, rack, node group
(in the simplest case, the Hadoop nodes running on the same compute host) and
node. Hadoop usually runs with a replication factor of 3. In that case the
placement algorithm tries to put the first HDFS block replica on the local node
or in the local node group, the second replica outside the node group but on
the same rack, and the last replica outside the initial rack. The topology is
defined by the path to the VM, e.g.

/datacenter1/rack1/host1/vm1
/datacenter1/rack1/host1/vm2
/datacenter1/rack1/host2/vm1
/datacenter1/rack1/host2/vm2
/datacenter1/rack2/host3/vm1
/datacenter1/rack2/host3/vm2


This information will also be used for job routing, to place mappers as close
as possible to the data.


The main idea is to provide this information to Hadoop. Usually there is a
direct mapping between the physical data center structure and Hadoop node
placement, but in the case of a public cloud some abstract names are fine, as
long as the configuration reflects the proximity of the Hadoop nodes to each
other.
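
For reference, this is roughly how such paths usually reach Hadoop 1.x: a
topology script registered in core-site.xml that prints one network path per
host/IP passed on the command line. The script and mapping file below are a
minimal illustrative sketch only (the extra properties needed for the 4-level
node-group topology are not shown):

  <!-- core-site.xml -->
  <property>
    <name>topology.script.file.name</name>
    <value>/etc/hadoop/topology.sh</value>
  </property>

  #!/bin/sh
  # topology.sh: print a /datacenter/rack/host path for each argument,
  # falling back to a default rack for unknown hosts.
  MAP=/etc/hadoop/topology.data   # lines like: 10.0.0.11 /datacenter1/rack1/host1
  for host in "$@"; do
    path=$(awk -v h="$host" '$1 == h {print $2}' "$MAP")
    echo "${path:-/default-rack}"
  done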


Mike, as I understand it, the holistic scheduler can provide the needed
information. Can you give more details about it?


On Fri, Sep 13, 2013 at 11:54 AM, John Garbutt j...@johngarbutt.com wrote:

 Exposing the detailed info in private cloud, sure makes sense. For
 public clouds, not so sure. Would be nice to find something that works
 for both.

 We let the user express their intent through the instance groups api.
 The scheduler will then do a best effort to meet that criteria, using
 its private information. At a coarser grain, we have availability
 zones, which you could use to express closeness, and probably often
 give you a good measure of closeness anyway.

 So a Hadoop user could request several small groups of VMs, defined
 in instance groups, to be close, and maybe spread across different
 availability zones.

 Would that do the trick? Or does Hadoop/HDFS need a bit more
 granularity than that? Could it look to auto-detect closeness in
 some auto-setup phase, given rough user hints?

 John
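
For illustration of the instance-group / availability-zone approach above, a
hedged sketch of what such a request could look like with the Havana-era CLI
(the "group" hint name and the scheduler filter it relies on are
deployment-dependent assumptions):

  # boot one of several "close" Hadoop VMs into a chosen availability zone
  nova boot --image hadoop-img --flavor m1.large \
      --availability-zone az1 \
      --hint group=hdfs-group-1 \
      hadoop-node-1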

 On 13 September 2013 07:40, Alex Glikson glik...@il.ibm.com wrote:
  If I understand correctly, what really matters at least in case of
 Hadoop is
  network proximity between instances.
  Hence, maybe Neutron would be a better fit to provide such information.
 In
  particular, depending on virtual network configuration, having 2
 instances
  on the same node does not guarantee that the network traffic between them
  will be routed within the node.
  Physical layout could be useful for availability-related purposes. But
 even
  then, it should be abstracted in such a way that it will not reveal
 details
  that a cloud provider will typically prefer not to expose. Maybe this
 can be
  done by Ironic -- or a separate/new project (Tuskar sounds related).
 
  Regards,
  Alex
 
 
 
 
  From: Mike Spreitzer mspre...@us.ibm.com
  To: OpenStack Development Mailing List
  openstack-dev@lists.openstack.org,
  Date: 13/09/2013 08:54 AM
  Subject: Re: [openstack-dev] [nova] [savanna] Host information for
  non admin users
  
 
 
 
  From: Nirmal Ranganathan rnir...@gmail.com
  ...
  Well that's left upto the specific block placement policies in hdfs,
  all we are providing with the topology information is a hint on
  node/rack placement.
 
  Oh, you are looking at the placement of HDFS blocks within the fixed
 storage
  volumes, not choosing where to put the storage volumes.  In that case I
  understand and agree that simply providing identifiers from the
  infrastructure to the middleware (HDFS) will suffice.  Coincidentally my
  group is working on this very example right now in our own environment.
  We
  have a holistic scheduler that is given a whole template to place, and it
  returns placement information.  We imagine, as does Hadoop, a general
  hierarchy in the physical layout, and the holistic scheduler returns, for
  each VM, the path from the root to the VM's host.
 
  Regards,
 
  Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: global name '_' is not defined

2013-09-13 Thread yongli he

On 2013-09-11 21:27, Henry Gessau wrote:

For the "TypeError: expected string or buffer" I have filed Bug #1223874.


On Wed, Sep 11, at 7:41 am, yongli he yongli...@intel.com wrote:


On 2013-09-11 05:38, David Kang wrote:

- Original Message -

From: Russell Bryant rbry...@redhat.com
To: David Kang dk...@isi.edu
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Tuesday, September 10, 2013 5:17:15 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with NameError: 
global name '_' is not defined
On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: Russell Bryant rbry...@redhat.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Cc: David Kang dk...@isi.edu
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
with NameError: global name '_' is not defined
On 09/10/2013 03:56 PM, David Kang wrote:

   Hi,

I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with the error "NameError: global
name '_' is not defined".
I'm not sure if it is due to a misconfiguration of nova.conf or a bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={"name":"test", "product_id":"7190", "vendor_id":"8086",
"device_type":"ACCEL"}

pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"7190"}]

   With that configuration, nova-compute fails with the following
   log:

File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
line 461, in _process_data
  **args)

File
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/conductor/manager.py,
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)

File /usr/lib/python2.6/site-packages/nova/objects/base.py,
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)

File
/usr/lib/python2.6/site-packages/nova/objects/pci_device.py,
line 242, in save
  self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call
last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py,
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args,
**kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py,
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/service.py, line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
self.update_available_resource(nova.context.get_admin_context())
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line
4773, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
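
For what it's worth, a NameError on '_' in a nova module normally means the
module formats a translated message without importing the gettext helper. A
minimal sketch of the kind of one-line fix involved (whether this is the exact
fix for nova/objects/pci_device.py is an assumption):

  # sketch only: import the translation helper used by _('...') strings
  from nova.openstack.common.gettextutils import _

  # ... later in the module, '_' marks translatable messages:
  msg = _('PCI device status is invalid')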

[openstack-dev] [savanna] team meeting minutes September 12

2013-09-13 Thread Sergey Lukjanov
Thanks to everyone who joined the Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-12-18.04.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-12-18.04.txt
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-12-18.04.log.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-13 Thread Zane Bitter

On 13/09/13 05:41, Monty Taylor wrote:



On 09/12/2013 04:33 PM, Steve Baker wrote:

On 09/13/2013 08:28 AM, Mike Asthalter wrote:

Hello,

Can someone please explain the plans for our 2 wadls moving forward:

   * wadl in original heat
 repo: 
https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
 
   * wadl in api-site
 repo: 
https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl


The original intention was to delete the heat wadl when the api-site one
became merged.


+1


1. Is there a need to maintain 2 wadls moving forward, with the wadl
in the original heat repo containing calls that may not be
implemented, and the wadl in the api-site repo containing implemented
calls only?

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing. The other
 WADL can be truth if it needs to be a specification that's not yet
 implemented. If the WADL in api-site repo is true and implemented,
 please just maintain one going forward.


2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
docs out of sync, etc.)?

3. If we maintain only the 1 orchestration wadl, how do we want to
pull in the wadl content to the api-ref doc
(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
from the orchestration wadl in the api-site repo: subtree merge, other?



These are good questions, and could apply equally to other out-of-tree
docs as features get added during the development cycle.

I still think that our wadl should live only in api-site.  If api-site
has no branching policy to maintain separate Havana and Icehouse
versions then maybe Icehouse changes should be posted as WIP reviews
until they can be merged.


I believe there is no branching in api-site because it's describing API
and there is no such thing as a havana or icehouse version of an API -
there are the API versions and they are orthogonal to server release
versions. At least in theory. :)


Yes and no. When new API versions arrive, they always arrive with a 
particular release. So we do need some way to ensure the docs go live at 
the right time, but I think Steve's suggestion for handling that is fine.


cheers,
Zane.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Savanna]Creating new plugin

2013-09-13 Thread Arindam Choudhury
Hi,

I am trying to provision Hadoop 0.20.203.0 with JDK 6u45, so I tweaked 
savanna-image-elements and created a pre-installed VM image.
Then I copied the vanilla plugin and edited it to create a new plugin named 
mango. To include the new plugin, I edited etc/savanna/savanna.conf as follows:

plugins=vanilla,mango
[plugin:vanilla]
plugin_class=savanna.plugins.vanilla.plugin:VanillaProvider
[plugin:mango]
plugin_class=savanna.plugins.mango.plugin:MangoProvider

Then, when I try to start the savanna daemon, I get the following error:

# tools/install_venv
  removing /root/savanna/.tox/log
using tox.ini: /root/savanna/tox.ini
using tox-1.6.1 from /usr/lib/python2.6/site-packages/tox/__init__.pyc
GLOB start: packaging 
GLOB sdist-make: /root/savanna/setup.py
  removing /root/savanna/.tox/dist
  /root/savanna$ /usr/bin/python /root/savanna/setup.py sdist --formats=zip 
--dist-dir /root/savanna/.tox/dist /root/savanna/.tox/log/tox-0.log
GLOB finish: packaging after 3.06 seconds
copying new sdistfile to '/root/.tox/distshare/savanna-0.2.a26.g3a8ddfb.zip'
venv start: getenv /root/savanna/.tox/venv
venv reusing: /root/savanna/.tox/venv
venv finish: getenv after 0.03 seconds
venv start: installpkg /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
setting 
PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
  /root/savanna$ /root/savanna/.tox/venv/bin/pip install --pre 
/root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip -U --no-deps 
/root/savanna/.tox/venv/log/venv-10.log
venv finish: installpkg after 2.85 seconds
venv start: runtests 
venv runtests: commands[0] | python --version
setting 
PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
  /root/savanna$ /root/savanna/.tox/venv/bin/python --version 
Python 2.6.6
venv finish: runtests after 0.00 seconds
_ summary 
__
  venv: commands succeeded
  congratulations :)


# tox -evenv -- savanna-api --config-file etc/savanna/savanna.conf -d
GLOB sdist-make: /root/savanna/setup.py
venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
venv runtests: commands[0] | savanna-api --config-file etc/savanna/savanna.conf 
-d
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/engine/strategies.py:117:
 SADeprecationWarning: The 'listeners' argument to Pool (and create_engine()) 
is deprecated.  Use event.listen().
  pool = poolclass(creator, **pool_args)
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/pool.py:160: 
SADeprecationWarning: Pool.add_listener is deprecated.  Use event.listen()
  self.add_listener(l)
2013-09-13 15:28:23.443 4783 DEBUG savanna.plugins.base [-] List of requested 
plugins: ['vanilla', 'mango'] _load_all_plugins 
/root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/base.py:113
2013-09-13 15:28:23.501 4783 CRITICAL savanna [-] [Errno 2] No such file or 
directory: 
'/root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/mango/resources/core-default.xml'
ERROR: InvocationError: '/root/savanna/.tox/venv/bin/savanna-api --config-file 
etc/savanna/savanna.conf -d'
_ summary 
__
ERROR:   venv: commands failed


  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Alexander Kuznetsov
On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.comwrote:

 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:

  Sergey Lukjanov wrote:
 
  [...]
  As you can see, resources provisioning is just one of the features and
 the implementation details are not critical for overall architecture. It
 performs only the first step of the cluster setup. We’ve been considering
 Heat for a while, but ended up direct API calls in favor of speed and
 simplicity. Going forward Heat integration will be done by implementing
 extension mechanism [3] and [4] as part of Icehouse release.
 
  The next part, Hadoop cluster configuration, already extensible and we
 have several plugins - Vanilla, Hortonworks Data Platform and Cloudera
 plugin started too. This allow to unify management of different Hadoop
 distributions under single control plane. The plugins are responsible for
 correct Hadoop ecosystem configuration at already provisioned resources and
 use different Hadoop management tools like Ambari to setup and configure
 all cluster  services, so, there are no actual provisioning configs on
 Savanna side in this case. Savanna and its plugins encapsulate the
 knowledge of Hadoop internals and default configuration for Hadoop services.
 
  My main gripe with Savanna is that it combines (in its upcoming release)
  what sounds like to me two very different services: Hadoop cluster
  provisioning service (like what Trove does for databases) and a
  MapReduce+ data API service (like what Marconi does for queues).
 
  Making it part of the same project (rather than two separate projects,
  potentially sharing the same program) make discussions about shifting
  some of its clustering ability to another library/project more complex
  than they should be (see below).
 
  Could you explain the benefit of having them within the same service,
  rather than two services with one consuming the other ?

 And for the record, i dont think that Trove is the perfect fit for it
 today. We are still working on a clustering API. But when we create it, i
 would love the Savanna team's input, so we can try to make a pluggable API
 thats usable for people who want MySQL or Cassandra or even Hadoop. Im less
 a fan of a clustering library, because in the end, we will both have API
 calls like POST /clusters, GET /clusters, and there will be API duplication
 between the projects.

 I think that a Cluster API (if it were created) would be helpful not only
for Trove and Savanna. NoSQL, RDBMS and Hadoop are not the only kinds of
software that can be clustered. What about different kinds of messaging
solutions like RabbitMQ and ActiveMQ, or J2EE containers like JBoss, WebLogic
and WebSphere, which are often installed in clustered mode? Messaging,
databases, J2EE containers and Hadoop each have their own management cycle. It
would be confusing to make a Cluster API part of Trove, which has a different
mission - database management and provisioning.

 
  The next topic is “Cluster API”.
 
  The concern that was raised is how to extract general clustering
 functionality to the common library. Cluster provisioning and management
 topic currently relevant for a number of projects within OpenStack
 ecosystem: Savanna, Trove, TripleO, Heat, Taskflow.
 
  Still each of the projects has their own understanding of what the
 cluster provisioning is. The idea of extracting common functionality sounds
 reasonable, but details still need to be worked out.
 
  I’ll try to highlight Savanna team current perspective on this
 question. Notion of “Cluster management” in my perspective has several
 levels:
  1. Resources provisioning and configuration (like instances, networks,
 storages). Heat is the main tool with possibly additional support from
 underlying services. For example, instance grouping API extension [5] in
 Nova would be very useful.
  2. Distributed communication/task execution. There is a project in
 OpenStack ecosystem with the mission to provide a framework for distributed
 task execution - TaskFlow [6]. It’s been started quite recently. In Savanna
 we are really looking forward to use more and more of its functionality in
 I and J cycles as TaskFlow itself getting more mature.
  3. Higher level clustering - management of the actual services working
 on top of the infrastructure. For example, in Savanna configuring HDFS data
 nodes or in Trove setting up MySQL cluster with Percona or Galera. This
 operations are typical very specific for the project domain. As for Savanna
 specifically, we use lots of benefits of Hadoop internals knowledge to
 deploy and configure it properly.
 
  Overall conclusion it seems to be that it make sense to enhance Heat
 capabilities and invest in Taskflow development, leaving domain-specific
 operations to the individual projects.
 
  The thing we'd need to clarify (and the incubation period would be used
  to achieve that) is how to reuse as much as possible between the various
  cluster provisioning projects (Trove, the 

Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-13 Thread Monty Taylor


On 09/12/2013 07:06 PM, Nachi Ueno wrote:
 Hi Folks
 
 Is anyone interested in Kibana + ElasticSearch Integration with ceilometer?
 # Note: This discussion is not for Havana.

I, for one, welcome our new ElasticSearch overlords.

 I have registered BP. (for IceHouse)
 https://blueprints.launchpad.net/ceilometer/+spec/elasticsearch-driver
 
 This is demo video.
 http://www.youtube.com/watch?v=8SmA0W0hd4I&feature=youtu.be
 
 I wrote some sample storage driver for elastic search in ceilometer.
 This is WIP - https://review.openstack.org/#/c/46383/
 
 This integration sounds cool to me, because if we can integrate them,
 we can use it as Log as a Service.
 
 IMO, there are some discussion points.
 
 [1] Should we add an ElasticSearch query API to Ceilometer, or should we
 let users hit the ElasticSearch API directly?
 
 Note that ElasticSearch has no tenant-based authentication; in that
 case we would need to integrate Keystone with ElasticSearch (or Horizon).
 
 [2] Should logs (syslog or any application log) be stored in
 Ceilometer, or should this be a new OpenStack project?
 
 Best
 Nachi
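
Regarding point [1] above, this is roughly what hitting ElasticSearch directly
looks like; nothing in the request itself scopes results to a tenant, which is
why Keystone (or Horizon) integration comes up. Index and field names below are
assumptions, purely illustrative:

  curl -XGET 'http://localhost:9200/ceilometer/_search?q=resource_id:instance-0001&size=10'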
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna]Creating new plugin

2013-09-13 Thread Alexander Ignatov

Hi Arindam,

It seems you forgot to do 'git add' on 
'savanna/plugins/mango/resources/core-default.xml'.
Please do the same for the other XML and resource files you are using in the 
mango plugin.
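
A hedged sketch of the usual sequence (assuming savanna's packaging, pbr, only
picks up git-tracked files, so untracked resources silently disappear from the
installed tree):

  git add savanna/plugins/mango/resources/
  git status --short savanna/plugins/mango/
  tox -evenv -- savanna-api --config-file etc/savanna/savanna.conf -d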


Regards,
Alexander Ignatov

On 9/13/2013 5:33 PM, Arindam Choudhury wrote:

Hi,

I am trying to provision hadoop0.20.203.0 with jdk6u45. So, I tweaked 
savanna-image-elements and created a pre-installed vm image.
Then I copied vanilla and edit it to create a new plugin named mango. 
also to include the new plugin, I edited etc/savanna/savanna.conf as 
follows:


plugins=vanilla,mango
[plugin:vanilla]
plugin_class=savanna.plugins.vanilla.plugin:VanillaProvider
[plugin:mango]
plugin_class=savanna.plugins.mango.plugin:MangoProvider

Then, When I try to start the savanna daemon I get the following error:

# tools/install_venv
  removing /root/savanna/.tox/log
using tox.ini: /root/savanna/tox.ini
using tox-1.6.1 from /usr/lib/python2.6/site-packages/tox/__init__.pyc
GLOB start: packaging
GLOB sdist-make: /root/savanna/setup.py
  removing /root/savanna/.tox/dist
  /root/savanna$ /usr/bin/python /root/savanna/setup.py sdist 
--formats=zip --dist-dir /root/savanna/.tox/dist 
/root/savanna/.tox/log/tox-0.log

GLOB finish: packaging after 3.06 seconds
copying new sdistfile to 
'/root/.tox/distshare/savanna-0.2.a26.g3a8ddfb.zip'

venv start: getenv /root/savanna/.tox/venv
venv reusing: /root/savanna/.tox/venv
venv finish: getenv after 0.03 seconds
venv start: installpkg 
/root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip

venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
setting 
PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
  /root/savanna$ /root/savanna/.tox/venv/bin/pip install --pre 
/root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip -U --no-deps 
/root/savanna/.tox/venv/log/venv-10.log

venv finish: installpkg after 2.85 seconds
venv start: runtests
venv runtests: commands[0] | python --version
setting 
PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

  /root/savanna$ /root/savanna/.tox/venv/bin/python --version
Python 2.6.6
venv finish: runtests after 0.00 seconds
_ 
summary __

  venv: commands succeeded
  congratulations :)


# tox -evenv -- savanna-api --config-file etc/savanna/savanna.conf -d
GLOB sdist-make: /root/savanna/setup.py
venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
venv runtests: commands[0] | savanna-api --config-file 
etc/savanna/savanna.conf -d
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/engine/strategies.py:117: 
SADeprecationWarning: The 'listeners' argument to Pool (and 
create_engine()) is deprecated.  Use event.listen().

  pool = poolclass(creator, **pool_args)
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/pool.py:160: 
SADeprecationWarning: Pool.add_listener is deprecated.  Use event.listen()

  self.add_listener(l)
2013-09-13 15:28:23.443 4783 DEBUG savanna.plugins.base [-] List of 
requested plugins: ['vanilla', 'mango'] _load_all_plugins 
/root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/base.py:113
2013-09-13 15:28:23.501 4783 CRITICAL savanna [-] [Errno 2] No such 
file or directory: 
'/root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/mango/resources/core-default.xml'
ERROR: InvocationError: '/root/savanna/.tox/venv/bin/savanna-api 
--config-file etc/savanna/savanna.conf -d'
_ 
summary __

ERROR:   venv: commands failed






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Michael Basnight
On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
 On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com wrote:
 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
  Sergey Lukjanov wrote:
 
  [...]
  As you can see, resources provisioning is just one of the features and the 
  implementation details are not critical for overall architecture. It 
  performs only the first step of the cluster setup. We’ve been considering 
  Heat for a while, but ended up direct API calls in favor of speed and 
  simplicity. Going forward Heat integration will be done by implementing 
  extension mechanism [3] and [4] as part of Icehouse release.
 
  The next part, Hadoop cluster configuration, already extensible and we 
  have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
  plugin started too. This allow to unify management of different Hadoop 
  distributions under single control plane. The plugins are responsible for 
  correct Hadoop ecosystem configuration at already provisioned resources 
  and use different Hadoop management tools like Ambari to setup and 
  configure all cluster  services, so, there are no actual provisioning 
  configs on Savanna side in this case. Savanna and its plugins encapsulate 
  the knowledge of Hadoop internals and default configuration for Hadoop 
  services.
 
  My main gripe with Savanna is that it combines (in its upcoming release)
  what sounds like to me two very different services: Hadoop cluster
  provisioning service (like what Trove does for databases) and a
  MapReduce+ data API service (like what Marconi does for queues).
 
  Making it part of the same project (rather than two separate projects,
  potentially sharing the same program) make discussions about shifting
  some of its clustering ability to another library/project more complex
  than they should be (see below).
 
  Could you explain the benefit of having them within the same service,
  rather than two services with one consuming the other ?
 
 And for the record, i dont think that Trove is the perfect fit for it today. 
 We are still working on a clustering API. But when we create it, i would love 
 the Savanna team's input, so we can try to make a pluggable API thats usable 
 for people who want MySQL or Cassandra or even Hadoop. Im less a fan of a 
 clustering library, because in the end, we will both have API calls like POST 
 /clusters, GET /clusters, and there will be API duplication between the 
 projects.
 
 I think that Cluster API (if it would be created) will be helpful not only 
 for Trove and Savanna.  NoSQL, RDBMS and Hadoop are not unique software which 
 can be clustered. What about different kind of messaging solutions like 
 RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and WebSphere, 
 which often are installed in clustered mode. Messaging, databases, J2EE 
 containers and Hadoop have their own management cycle. It will be confusing 
 to make Cluster API a part of Trove which has different mission - database 
 management and provisioning.

Are you suggesting a 3rd program, cluster as a service? Trove is trying to 
target a generic enough™ API to tackle different technologies with plugins or 
some sort of extensions. This will include a scheduler to determine rack 
awareness. Even if we decide that both Savanna and Trove need their own API for 
building clusters, I still want to understand what makes the Savanna API and 
implementation different, and how Trove can build an API/system that can 
encompass multiple datastore technologies. So regardless of how this shakes 
out, I would urge you to go to the Trove clustering summit session [1] so we 
can share ideas.

[1] http://summit.openstack.org/cfp/details/54


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [django_openstack_auth] Core review request

2013-09-13 Thread Timur Sufiev
Greetings to you, venerable horizon core developers!

A bug was recently submitted to Launchpad
(https://bugs.launchpad.net/django-openstack-auth/+bug/1221563). It is
related to the OPENSTACK_ENDPOINT_TYPE parameter and can arise only when the
different endpoint types - 'internalURL', 'publicURL' and 'adminURL' - have
different URLs: suppose that we specify OPENSTACK_ENDPOINT_TYPE =
'internalURL' in Horizon's local_settings.py, but after the user logs in,
their endpoint URL corresponds to the 'publicURL' type.
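
For context, the setting in question, as a minimal local_settings.py sketch
(values illustrative only):

  OPENSTACK_ENDPOINT_TYPE = 'internalURL'
  # The bug shows up when the catalog's internalURL, publicURL and adminURL
  # endpoints point at different URLs.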

I've provided a fix addressing this issue
(https://review.openstack.org/#/c/45655/); could you please review it?

-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Michael Basnight

On Sep 13, 2013, at 9:05 AM, Alexander Kuznetsov wrote:

 
 
 
 On Fri, Sep 13, 2013 at 7:26 PM, Michael Basnight mbasni...@gmail.com wrote:
 On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
  On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com 
  wrote:
  On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
   Sergey Lukjanov wrote:
  
   [...]
   As you can see, resources provisioning is just one of the features and 
   the implementation details are not critical for overall architecture. It 
   performs only the first step of the cluster setup. We’ve been 
   considering Heat for a while, but ended up direct API calls in favor of 
   speed and simplicity. Going forward Heat integration will be done by 
   implementing extension mechanism [3] and [4] as part of Icehouse release.
  
   The next part, Hadoop cluster configuration, already extensible and we 
   have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
   plugin started too. This allow to unify management of different Hadoop 
   distributions under single control plane. The plugins are responsible 
   for correct Hadoop ecosystem configuration at already provisioned 
   resources and use different Hadoop management tools like Ambari to setup 
   and configure all cluster  services, so, there are no actual 
   provisioning configs on Savanna side in this case. Savanna and its 
   plugins encapsulate the knowledge of Hadoop internals and default 
   configuration for Hadoop services.
  
   My main gripe with Savanna is that it combines (in its upcoming release)
   what sounds like to me two very different services: Hadoop cluster
   provisioning service (like what Trove does for databases) and a
   MapReduce+ data API service (like what Marconi does for queues).
  
   Making it part of the same project (rather than two separate projects,
   potentially sharing the same program) make discussions about shifting
   some of its clustering ability to another library/project more complex
   than they should be (see below).
  
   Could you explain the benefit of having them within the same service,
   rather than two services with one consuming the other ?
 
  And for the record, i dont think that Trove is the perfect fit for it 
  today. We are still working on a clustering API. But when we create it, i 
  would love the Savanna team's input, so we can try to make a pluggable API 
  thats usable for people who want MySQL or Cassandra or even Hadoop. Im less 
  a fan of a clustering library, because in the end, we will both have API 
  calls like POST /clusters, GET /clusters, and there will be API duplication 
  between the projects.
 
  I think that Cluster API (if it would be created) will be helpful not only 
  for Trove and Savanna.  NoSQL, RDBMS and Hadoop are not unique software 
  which can be clustered. What about different kind of messaging solutions 
  like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and 
  WebSphere, which often are installed in clustered mode. Messaging, 
  databases, J2EE containers and Hadoop have their own management cycle. It 
  will be confusing to make Cluster API a part of Trove which has different 
  mission - database management and provisioning.
 
 Are you suggesting a 3rd program, cluster as a service? Trove is trying to 
 target a generic enough™ API to tackle different technologies with plugins or 
 some sort of extensions. This will include a scheduler to determine rack 
 awareness. Even if we decide that both Savanna and Trove need their own API 
 for building clusters, I still want to understand what makes the Savanna API 
 and implementation different, and how Trove can build an API/system that can 
 encompass multiple datastore technologies. So regardless of how this shakes 
 out, I would urge you to go to the Trove clustering summit session [1] so we 
 can share ideas.
 
 A generic enough™ API shouldn't contain database-specific calls like backup 
 and restore (already in Trove). Why would we need backup and restore 
 operations for J2EE or messaging solutions? 

I don't mean to encompass J2EE or messaging solutions. Let me amend my email to 
say to tackle different datastore technologies. But going with this point… Do 
you not need to back up things in a J2EE container? I'd assume a backup is needed 
by all clusters, personally. I would not like a system that didn't have a way to 
back up and restore things in my cluster.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-13 Thread Dean Troyer
On Fri, Sep 13, 2013 at 6:10 AM, Sean Dague s...@dague.net wrote:

 I like option a, though I'm not sure we need the full system path in the
 conf.d (that's pretty minor though).


That was to avoid making assumptions about target files or encoding paths
in filenames.  It really needs to recognize XXX_CONF_DIR settings too.


 Because inevitably people ask for copies of other folks configs to
 duplicate things, and a single file is easier to pass around than a tree.
 But that would mean a unique parser to handle the top level stanza.


That is a great point and one I did not have in mind.  One possibility
would be to include localrc in this mega-file and the _very_ first step
would be to extract it if localrc doesn't already exist and run from there.
I want to support the conf.d-style also because that is useful for outside
projects to drop in what they require for changing included project
configs; these would not necessarily be user-modifiable.
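
To make the single-file idea concrete, a purely illustrative sketch of what a
top-level-stanza format could look like (the meta-section syntax and names here
are assumptions, not an agreed DevStack format):

  # all-in-one config file -- illustrative only
  [[local|localrc]]
  ADMIN_PASSWORD=secret
  DATABASE_PASSWORD=secret

  [[post-config|$NOVA_CONF]]
  [DEFAULT]
  verbose=True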

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Alexander Kuznetsov
The Hadoop ecosystem is not only datastore technologies. Hadoop has other
components: the MapReduce framework, a distributed coordinator (ZooKeeper),
workflow management (Oozie), runtimes for scripting languages (Hive and Pig),
and a scalable machine learning library (Apache Mahout). All these components
are tightly coupled, and the datastore part can't be considered separately
from the other components. This is the main reason why Hadoop installation and
management require a separate solution, distinct from a generic enough™
datastore API. Otherwise, this API would contain a huge part that has nothing
to do with datastore technologies.


On Fri, Sep 13, 2013 at 8:17 PM, Michael Basnight mbasni...@gmail.comwrote:


 On Sep 13, 2013, at 9:05 AM, Alexander Kuznetsov wrote:

 
 
 
  On Fri, Sep 13, 2013 at 7:26 PM, Michael Basnight mbasni...@gmail.com
 wrote:
  On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
   On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com
 wrote:
   On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
  
Sergey Lukjanov wrote:
   
[...]
As you can see, resources provisioning is just one of the features
 and the implementation details are not critical for overall architecture.
 It performs only the first step of the cluster setup. We’ve been
 considering Heat for a while, but ended up direct API calls in favor of
 speed and simplicity. Going forward Heat integration will be done by
 implementing extension mechanism [3] and [4] as part of Icehouse release.
   
The next part, Hadoop cluster configuration, already extensible and
 we have several plugins - Vanilla, Hortonworks Data Platform and Cloudera
 plugin started too. This allow to unify management of different Hadoop
 distributions under single control plane. The plugins are responsible for
 correct Hadoop ecosystem configuration at already provisioned resources and
 use different Hadoop management tools like Ambari to setup and configure
 all cluster  services, so, there are no actual provisioning configs on
 Savanna side in this case. Savanna and its plugins encapsulate the
 knowledge of Hadoop internals and default configuration for Hadoop services.
   
My main gripe with Savanna is that it combines (in its upcoming
 release)
what sounds like to me two very different services: Hadoop cluster
provisioning service (like what Trove does for databases) and a
MapReduce+ data API service (like what Marconi does for queues).
   
Making it part of the same project (rather than two separate
 projects,
potentially sharing the same program) make discussions about shifting
some of its clustering ability to another library/project more
 complex
than they should be (see below).
   
Could you explain the benefit of having them within the same service,
rather than two services with one consuming the other ?
  
   And for the record, i dont think that Trove is the perfect fit for it
 today. We are still working on a clustering API. But when we create it, i
 would love the Savanna team's input, so we can try to make a pluggable API
 thats usable for people who want MySQL or Cassandra or even Hadoop. Im less
 a fan of a clustering library, because in the end, we will both have API
 calls like POST /clusters, GET /clusters, and there will be API duplication
 between the projects.
  
   I think that Cluster API (if it would be created) will be helpful not
 only for Trove and Savanna.  NoSQL, RDBMS and Hadoop are not unique
 software which can be clustered. What about different kind of messaging
 solutions like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic
 and WebSphere, which often are installed in clustered mode. Messaging,
 databases, J2EE containers and Hadoop have their own management cycle. It
 will be confusing to make Cluster API a part of Trove which has different
 mission - database management and provisioning.
 
  Are you suggesting a 3rd program, cluster as a service? Trove is trying
 to target a generic enough™ API to tackle different technologies with
 plugins or some sort of extensions. This will include a scheduler to
 determine rack awareness. Even if we decide that both Savanna and Trove
 need their own API for building clusters, I still want to understand what
 makes the Savanna API and implementation different, and how Trove can build
 an API/system that can encompass multiple datastore technologies. So
 regardless of how this shakes out, I would urge you to go to the Trove
 clustering summit session [1] so we can share ideas.
 
  Generic enough™ API shouldn't contain a database specific calls like
 backups and restore (already in Trove).  Why we need a backup and restore
 operations for J2EE or messaging solutions?

 I dont mean to encompass J2EE or messaging solutions. Let me amend my
 email to say to tackle different datastore technologies. But going with
 this point… Do you not need to backup things in a J2EE container? Id assume
 a backup is 

[openstack-dev] [Horizon] Wizard UI for modal workflow dialog

2013-09-13 Thread Toshiyuki Hayashi
Hi,

I just added BP Wizard UI for workflow.
https://blueprints.launchpad.net/horizon/+spec/wizard-ui-for-workflow

[Screenshot]
https://dl.dropboxusercontent.com/u/7098/openstack/wizard.png

[Demo movie]
http://www.youtube.com/watch?v=uCmhI0fbDYg&feature=youtu.be

Some of the current workflow dialogs (which have many tabs) make it difficult
for users to understand what to do.
A wizard UI makes it easier to proceed through and understand the tasks users
should perform. I believe this feature enhances the UX of the modal workflow
dialog.

If you have any comments and concerns, please let me know.
Also I started discussing on G+.
https://plus.google.com/u/0/110931099804484211859/posts/aMCU2CCzzZq


Regards,
Toshiyuki

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Oslo.db possible module?

2013-09-13 Thread Joshua Harlow
Hi guys,

In my attempt to not use oslo.cfg in taskflow I ended up re-creating a lot of 
what oslo-incubator db has but without the strong connection to oslo.cfg,

I was thinking that a majority of this code (which is also partially ceilometer 
influenced) could become oslo.db,

https://github.com/stackforge/taskflow/blob/master/taskflow/persistence/backends/impl_sqlalchemy.py
 (search for SQLAlchemyBackend as the main class).

It should be generic enough that it could be easily extracted to be the basis 
for oslo.db if that is desirable,
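
For illustration, a minimal sketch of the kind of interface being described -- a
backend that takes plain connection settings instead of a global oslo.cfg object
(class and parameter names here are made up, not taskflow's actual API):

  # illustrative only; not the actual taskflow/oslo code
  import sqlalchemy as sa

  class SQLAlchemyBackend(object):
      """DB backend configured from an explicit dict, not oslo.cfg."""

      def __init__(self, conf):
          # conf is a plain dict, e.g. {"connection": "sqlite://"}
          self._engine = sa.create_engine(conf["connection"])

      def get_connection(self):
          return self._engine.connect()

  backend = SQLAlchemyBackend({"connection": "sqlite://"})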

Thoughts/comments/questions welcome :-)

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-13 Thread Everett Toews
On Sep 13, 2013, at 6:10 AM, Sean Dague wrote:

 Because inevitably people ask for copies of other folks configs to duplicate 
 things, and a single file is easier to pass around than a tree. But that 
 would mean a unique parser to handle the top level stanza.

+1

I share localrc files all the time.

Regards,
Everett
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-13 Thread Clint Byrum
Excerpts from Michael Basnight's message of 2013-09-13 08:26:07 -0700:
 On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
  On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com 
  wrote:
  On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
  
   Sergey Lukjanov wrote:
  
   [...]
   As you can see, resources provisioning is just one of the features and 
   the implementation details are not critical for overall architecture. It 
   performs only the first step of the cluster setup. We’ve been 
   considering Heat for a while, but ended up direct API calls in favor of 
   speed and simplicity. Going forward Heat integration will be done by 
   implementing extension mechanism [3] and [4] as part of Icehouse release.
  
   The next part, Hadoop cluster configuration, already extensible and we 
   have several plugins - Vanilla, Hortonworks Data Platform and Cloudera 
   plugin started too. This allow to unify management of different Hadoop 
   distributions under single control plane. The plugins are responsible 
   for correct Hadoop ecosystem configuration at already provisioned 
   resources and use different Hadoop management tools like Ambari to setup 
   and configure all cluster  services, so, there are no actual 
   provisioning configs on Savanna side in this case. Savanna and its 
   plugins encapsulate the knowledge of Hadoop internals and default 
   configuration for Hadoop services.
  
   My main gripe with Savanna is that it combines (in its upcoming release)
   what sounds like to me two very different services: Hadoop cluster
   provisioning service (like what Trove does for databases) and a
   MapReduce+ data API service (like what Marconi does for queues).
  
   Making it part of the same project (rather than two separate projects,
   potentially sharing the same program) make discussions about shifting
   some of its clustering ability to another library/project more complex
   than they should be (see below).
  
   Could you explain the benefit of having them within the same service,
   rather than two services with one consuming the other ?
  
  And for the record, i dont think that Trove is the perfect fit for it 
  today. We are still working on a clustering API. But when we create it, i 
  would love the Savanna team's input, so we can try to make a pluggable API 
  thats usable for people who want MySQL or Cassandra or even Hadoop. Im less 
  a fan of a clustering library, because in the end, we will both have API 
  calls like POST /clusters, GET /clusters, and there will be API duplication 
  between the projects.
  
  I think that Cluster API (if it would be created) will be helpful not only 
  for Trove and Savanna.  NoSQL, RDBMS and Hadoop are not unique software 
  which can be clustered. What about different kind of messaging solutions 
  like RabbitMQ, ActiveMQ or J2EE containers like JBoss, Weblogic and 
  WebSphere, which often are installed in clustered mode. Messaging, 
  databases, J2EE containers and Hadoop have their own management cycle. It 
  will be confusing to make Cluster API a part of Trove which has different 
  mission - database management and provisioning.
 
 Are you suggesting a 3rd program, cluster as a service? Trove is trying to 
 target a generic enough™ API to tackle different technologies with plugins or 
 some sort of extensions. This will include a scheduler to determine rack 
 awareness. Even if we decide that both Savanna and Trove need their own API 
 for building clusters, I still want to understand what makes the Savanna API 
 and implementation different, and how Trove can build an API/system that can 
 encompass multiple datastore technologies. So regardless of how this shakes 
 out, I would urge you to go to the Trove clustering summit session [1] so we 
 can share ideas.
 

Kudos to Trove for pushing forward on their Heat implementation. I'd
like to see Savanna go in the same direction. I read the "why not Heat"
reasoning and it is all a bug list for Heat. Let's fix those bugs so that the next
clusterable solution that needs a simplified API can just grab Heat and
get it done without a special domain-specific orchestration backend.

If the backend were shared, would we care so much that there is no common
clustering imperative API for users?

This way Savanna's API is focused on helping users solve their data
processing problems, and Trove is focused on helping users solve their
data storage problems. And if users need to build a cluster of things
that don't exist yet as a handy simplified API, Heat is there for them
as a general purpose tool for building clusters.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-13 Thread Mike Asthalter
Hi Anne,

I want to make sure I've understood the ramifications of your statement about 
content sharing.

So for now, until the infrastructure team provides us with a method to share 
content between repos, the only way to share the content from the orchestration 
wadl with the api-ref doc 
(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 is to manually copy the content from the orchestration wadl to the original 
heat wadl and then use that for the shared content. So we will not delete the 
original heat wadl until that new method of content sharing is in place. Is 
this correct?


Thanks!

Mike

From: Anne Gentle annegen...@justwriteclick.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Thursday, September 12, 2013 11:32 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving 
forward




On Thu, Sep 12, 2013 at 10:41 PM, Monty Taylor mord...@inaugust.com wrote:


On 09/12/2013 04:33 PM, Steve Baker wrote:
 On 09/13/2013 08:28 AM, Mike Asthalter wrote:
 Hello,

 Can someone please explain the plans for our 2 wadls moving forward:

   * wadl in original heat
 repo: 
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
 
   * wadl in api-site
 repo: 
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

 The original intention was to delete the heat wadl when the api-site one
 became merged.

Sounds good.

 1. Is there a need to maintain 2 wadls moving forward, with the wadl
 in the original heat repo containing calls that may not be
 implemented, and the wadl in the api-site repo containing implemented
 calls only?

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing. The other
 WADL can be truth if it needs to be a specification that's not yet
 implemented. If the WADL in api-site repo is true and implemented,
 please just maintain one going forward.


 2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
 docs out of sync, etc.)?

 3. If we maintain only the 1 orchestration wadl, how do we want to
 pull in the wadl content to the api-ref doc
 (https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 from the orchestration wadl in the api-site repo: subtree merge, other?



Thanks Mike for asking these questions.

I've been asking the infrastructure team for help with pulling content like the 
current nova request/response examples into the api-site repo. No subtree 
merges please. We'll find some way. Right now it's manual.

 These are good questions, and could apply equally to other out-of-tree
 docs as features get added during the development cycle.

 I still think that our wadl should live only in api-site.  If api-site
 has no branching policy to maintain separate Havana and Icehouse
 versions then maybe Icehouse changes should be posted as WIP reviews
 until they can be merged.

I believe there is no branching in api-site because it's describing API
and there is no such thing as a havana or icehouse version of an API -
there are the API versions and they are orthogonal to server release
versions. At least in theory. :)

Yep, that's our working theory. :)

Anne




--
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [pci passthrough] how to fill instance_type_extra_specs for a pci passthrough?

2013-09-13 Thread David Kang




-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-13 Thread Anne Gentle
On Fri, Sep 13, 2013 at 1:53 PM, Mike Asthalter 
mike.asthal...@rackspace.com wrote:

  Hi Anne,

  I want to make sure I've understood the ramifications of your statement
 about content sharing.

  So for now, until the infrastructure team provides us with a method to
 share content between repos, the only way to share the content from the
 orchestration wadl with the api-ref doc (
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 is to manually copy the content from the orchestration wadl to the original
 heat wadl and then use that for the shared content. So we will not delete
 the original heat wadl until that new method of content sharing is in
 place. Is this correct?


Hi Mike,
It sounds like the dev team is fine with deleting that original heat WADL
and only maintaining one from here forward.

The way they will control Icehouse edits to the heat WADL that shouldn't
yet be displayed to end users is to use the Work In Progress button on
review.openstack.org. When a patch is marked WIP, you can't merge it.

So, you can safely delete the original Heat WADL and then from your dev
guides, if you want to include a WADL, you can point to the one in the
api-site repository. We now have a mirror of the github.com repository at
git.openstack.org that gives you access to the WADL in the api-site
repository at all times. I can walk you through building the URL that
points to the WADL file.
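
For example, a raw-file URL of roughly this shape should work (hedged -- the
exact cgit path layout on git.openstack.org is an assumption):

  http://git.openstack.org/cgit/openstack/api-site/plain/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl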

What we also need to build is logic in the build jobs so that any time the
api-site WADL is updated, your dev guide is also updated. This is done in
the Jenkins job in
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/api-jobs.yaml.
I can either submit this patch for you, or I'll ask Steve or Zane to do so.

Hope this helps -

Anne



  Thanks!

  Mike

   From: Anne Gentle annegen...@justwriteclick.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Thursday, September 12, 2013 11:32 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Questions about plans for heat wadls
 moving forward




 On Thu, Sep 12, 2013 at 10:41 PM, Monty Taylor mord...@inaugust.com wrote:



 On 09/12/2013 04:33 PM, Steve Baker wrote:
  On 09/13/2013 08:28 AM, Mike Asthalter wrote:
  Hello,
 
  Can someone please explain the plans for our 2 wadls moving forward:
 
 * wadl in original heat
  repo:
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
* wadl in api-site
  repo:
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl
 
  The original intention was to delete the heat wadl when the api-site one
  became merged.


  Sounds good.


  1. Is there a need to maintain 2 wadls moving forward, with the wadl
  in the original heat repo containing calls that may not be
  implemented, and the wadl in the api-site repo containing implemented
  calls only?
 
  Anne Gentle advises as follows in regard to these 2 wadls:
 
  I'd like the WADL in api-site repo to be user-facing. The other
  WADL can be truth if it needs to be a specification that's not yet
  implemented. If the WADL in api-site repo is true and implemented,
  please just maintain one going forward.
 
 
  2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
  docs out of sync, etc.)?
 
  3. If we maintain only the 1 orchestration wadl, how do we want to
  pull in the wadl content to the api-ref doc
  (
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml
 )
  from the orchestration wadl in the api-site repo: subtree merge, other?
 
 


  Thanks Mike for asking these questions.

  I've been asking the infrastructure team for help with pulling content
 like the current nova request/response examples into the api-site repo. No
 subtree merges please. We'll find some way. Right now it's manual.


  These are good questions, and could apply equally to other out-of-tree
  docs as features get added during the development cycle.
 
  I still think that our wadl should live only in api-site.  If api-site
  has no branching policy to maintain separate Havana and Icehouse
  versions then maybe Icehouse changes should be posted as WIP reviews
  until they can be merged.

  I believe there is no branching in api-site because it describes the APIs,
 and there is no such thing as a havana or icehouse version of an API -
 there are the API versions and they are orthogonal to server release
 versions. At least in theory. :)


  Yep, that's our working theory. :)

  Anne


 

Re: [openstack-dev] [nova] [pci passthrough] how to fill instance_type_extra_specs for a pci passthrough?

2013-09-13 Thread David Kang
 From: David Kang dk...@isi.edu
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Friday, September 13, 2013 4:03:24 PM
 Subject: [nova] [pci passthrough] how to fill instance_type_extra_specs for a 
 pci passthrough?

 Sorry for the last empty mail.
I cannot find good documentation on how to describe PCI passthrough in nova.conf.

 As an example, if I have the following entries in nova.conf, how should the 
instance_type_extra_specs be set?
(The following entries are just for a test.)

pci_alias={name:test, product_id:7190, vendor_id:8086, 
device_type:ACCEL}
pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]

 I'll appreciate any advice and/or pointer for the document.

 Thanks,
 David
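
A minimal sketch of the flavor side, assuming the Havana-era
pci_passthrough:alias extra-spec key (the flavor name below is hypothetical and
the syntax is not verified against the final documentation):

  # extra spec requesting one device that matches the test alias defined above
  pci_passthrough:alias = test:1

  # set via the nova CLI, for example:
  nova flavor-key m1.small set "pci_passthrough:alias"="test:1"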


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Agenda for monday's meeting @ 1600 UTC

2013-09-13 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Mondays, 1600 UTC.

The next meeting is tomorrow, Sept. 16. Everyone is welcome. However,
please take a minute to review the wiki before attending for the first
time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Triage blueprints (H3)
  * Placement service / cell architecture (vs. tag-aware sharding)
  * Audit and freeze HTTP v1 API
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note
your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,
Kurt (kgriffs)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving forward

2013-09-13 Thread Mike Asthalter
Thanks for the clarification Anne!

I will follow up with you next week about building the URL that gives the dev 
guide access to the wadl in the api-site repo.

I think it's best to ask Steve to submit the patch for the build-job logic that 
updates the dev guide whenever the api-site wadl is updated, so that he is 
aware of it.

Mike

From: Anne Gentle 
annegen...@justwriteclick.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Friday, September 13, 2013 3:21 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving 
forward




On Fri, Sep 13, 2013 at 1:53 PM, Mike Asthalter 
mike.asthal...@rackspace.com wrote:
Hi Anne,

I want to make sure I've understood the ramifications of your statement about 
content sharing.

So for now, until the infrastructure team provides us with a method to share 
content between repos, the only way to share the content from the orchestration 
wadl with the api-ref doc 
(https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml)
 is to manually copy the content from the orchestration wadl to the original 
heat wadl and then use that for the shared content. So we will not delete the 
original heat wadl until that new method of content sharing is in place. Is 
this correct?


Hi Mike,
It sounds like the dev team is fine with deleting that original heat WADL and 
only maintaining one from here forward.

The way they will control Icehouse edits to the heat WADL that shouldn't yet be 
displayed to end users is to use the Work In Progress button on 
review.openstack.org. When a patch is marked WIP, 
you can't merge it.

So, you can safely delete the original Heat WADL and then from your dev guides, 
if you want to include a WADL, you can point to the one in the api-site 
repository. We now have a mirror of the github.com 
repository at git.openstack.org that gives you access 
to the WADL in the api-site repository at all times. I can walk you through 
building the URL that points to the WADL file.

What we also need to build is logic in the build jobs so that any time the 
api-site WADL is updated, your dev guide is also updated. This is done in the 
Jenkins job in 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/api-jobs.yaml.
 I can either submit this patch for you, or I'll ask Steve or Zane to do so.

Hope this helps -

Anne


Thanks!

Mike

From: Anne Gentle 
annegen...@justwriteclick.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Thursday, September 12, 2013 11:32 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Questions about plans for heat wadls moving 
forward




On Thu, Sep 12, 2013 at 10:41 PM, Monty Taylor 
mord...@inaugust.com wrote:


On 09/12/2013 04:33 PM, Steve Baker wrote:
 On 09/13/2013 08:28 AM, Mike Asthalter wrote:
 Hello,

 Can someone please explain the plans for our 2 wadls moving forward:

   * wadl in original heat
 repo: 
 https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/wadls/heat-api/src/heat-api-1.0.wadl
 
   * wadl in api-site
 repo: 
 https://github.com/openstack/api-site/blob/master/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

 The original intention was to delete the heat wadl when the api-site one
 became merged.

Sounds good.

 1. Is there a need to maintain 2 wadls moving forward, with the wadl
 in the original heat repo containing calls that may not be
 implemented, and the wadl in the api-site repo containing implemented
 calls only?

 Anne Gentle advises as follows in regard to these 2 wadls:

 I'd like the WADL in api-site repo to be user-facing. The other
 WADL can be truth if it needs to be a specification that's not yet
 implemented. If the WADL in api-site repo is true and implemented,
 please just maintain one going forward.


 2. If we maintain 2 wadls, what are the consequences (gerrit reviews,
 docs out of sync, etc.)?

 3. If we maintain only the 1 orchestration wadl, how do we want to
 pull in the wadl content to the api-ref doc
 (https://github.com/openstack/heat/blob/master/doc/docbkx/api-ref/src/docbkx/api-ref.xml
 )
 from the orchestration wadl in the api-site repo: subtree merge, other?



[openstack-dev] [Heat] Does Heat support checkpointing for guest application

2013-09-13 Thread Qing He
All,
I'm wondering if Heat provides a service for checkpointing guest applications 
for HA/redundancy, similar to what corosync/pacemaker/openais provide for bare 
metal applications.

Thanks,

Qing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cookiecutter repo for ease in making new projects

2013-09-13 Thread Jay Buffington
On Thu, Sep 12, 2013 at 10:08 PM, Monty Taylor mord...@inaugust.com wrote:

 And boom, you'll have a directory all set up with your new project


Awesome.  I tried it and ran into a couple of small issues.  I don't see a
launchpad yet, so I'm not sure where to report bugs.

Something is stripping the newlines at EOF, so flake8 fails. This is also
annoying because vim's default is to add one when it doesn't exist.

Also, foo/test/__init__.py needs to call super(TestCase, self).setUp(),
otherwise you get an error something like this when you run tox:

ValueError: TestCase.setUp was not called. Have you upcalled all the
way up the hierarchy from your setUp? e.g. Call
super(TestFoo, self).setUp() from your setUp().
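
A minimal sketch of the upcall being described, assuming the generated base
class derives from testtools.TestCase (class and module names here are
illustrative, not necessarily what cookiecutter emits):

  import testtools


  class TestCase(testtools.TestCase):
      """Hypothetical base test class for the generated project."""

      def setUp(self):
          # Upcall first so testtools records that setUp ran; skipping this
          # is what produces the ValueError quoted above when tox runs.
          super(TestCase, self).setUp()
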
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Agenda for monday's meeting @ 1600 UTC

2013-09-13 Thread Allan Metts
Sent: Friday, September 13, 2013 4:11 PM

The next meeting is tomorrow, Sept. 16


-Original Message-
From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com] 
Sent: Friday, September 13, 2013 4:11 PM
To: OpenStack Dev
Subject: [openstack-dev] [marconi] Agenda for monday's meeting @ 1600 UTC

The Marconi project team holds a weekly meeting in #openstack-meeting-alt on 
Mondays, 1600 UTC.

The next meeting is tomorrow, Sept. 16. Everyone is welcome. However, please 
take a minute to review the wiki before attending for the first
time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Triage blueprints (H3)
  * Placement service / cell architecture (vs. tag-aware sharding)
  * Audit and freeze HTTP v1 API
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note your 
IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,
Kurt (kgriffs)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-13 Thread Jaesuk Ahn
+1

we have been researching logstash + ES + Kibana for OpenStack logs,
thinking about how ceilometer can be integrated with those.

Great to hear this!

Although I still have to think through this integration, using logstash
as the log aggregator might be a good idea here.

I will keep following up on this. :)

Jaesuk Ahn, Ph.D.
Team Lead, Cloud Platform Dev.
KT

On Sept. 14, 2013, at 2:38 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Folks

 Thank you for your feedback!
 I'll continue with this one:

 (1) adding a new storage driver
 (2) adding an extension for ElasticSearch queries in ceilometer.
  (I'm still not sure yet how ceilometer supports an extension framework.)

  Monsyne
 Thank you for the information. I'll take a look at that project.

 Best
 Nachi
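
 A purely illustrative sketch of the storage-driver idea in (1), not the WIP
 driver under review: indexing one meter sample into ElasticSearch with the
 elasticsearch-py client (endpoint, index, and document layout are assumptions):

   from elasticsearch import Elasticsearch

   es = Elasticsearch(["localhost:9200"])  # hypothetical endpoint

   # one document per sample; field names mirror a typical ceilometer sample
   sample = {
       "counter_name": "cpu_util",
       "counter_volume": 42.0,
       "resource_id": "instance-0001",
       "project_id": "tenant-a",
       "timestamp": "2013-09-13T19:00:00Z",
   }

   es.index(index="ceilometer", doc_type="sample", body=sample)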


 2013/9/13 Monsyne Dragon mdra...@rackspace.com:
  Nice! Have you chatted with these folks: http://projectmeniscus.org/ ?
  (Openstack-related logging-as-a-service project)
  They list interoperation with Ceilometer as a project goal.
 
  On 9/12/13 7:06 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Folks
 
 Is anyone interested in Kibana + ElasticSearch Integration with
 ceilometer?
 # Note: This discussion is not for Havana.
 
 I have registered a BP (for IceHouse):
 https://blueprints.launchpad.net/ceilometer/+spec/elasticsearch-driver
 
 This is a demo video:
 http://www.youtube.com/watch?v=8SmA0W0hd4I&feature=youtu.be
 
 I wrote a sample storage driver for ElasticSearch in ceilometer.
 This is WIP - https://review.openstack.org/#/c/46383/
 
 This integration sounds cool to me, because if we can integrate them,
 we can use it as Log as a Service.
 
 IMO, there are some discussion points.
 
 [1] Should we add an ElasticSearch query API to ceilometer, or should
 we let users call the ElasticSearch API directly?
 
 Note that ElasticSearch has no tenant-based authentication; in that
 case we would need to integrate Keystone and ElasticSearch (or Horizon).
 
 [2] Should logs (syslog or any application log) be stored in
 Ceilometer, or should that be a new OpenStack project?
 
 Best
 Nachi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-13 Thread Nachi Ueno
Hi Jaesuk

Thank you for your comment.
I'm planning to write a fluentd plugin to collect log data from the VMs and
send it to ceilometer.

2013/9/13 Jaesuk Ahn bluejay@gmail.com:
 +1

 we have been researching logstash + ES + Kibana for OpenStack logs,
 thinking about how ceilometer can be integrated with those.

 Great to hear this!

 Although I still have to think through this integration, using logstash
 as the log aggregator might be a good idea here.

 I will keep following up on this. :)

 Jaesuk Ahn, Ph.D.
 Team Lead, Cloud Platform Dev.
 KT


 On Sept. 14, 2013, at 2:38 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Folks

 Thank you for your feedback!
 I'll continue with this one:

 (1) adding a new storage driver
 (2) adding an extension for ElasticSearch queries in ceilometer.
  (I'm still not sure yet how ceilometer supports an extension framework.)

  Monsyne
 Thank you for the information. I'll take a look at that project.

 Best
 Nachi


 2013/9/13 Monsyne Dragon mdra...@rackspace.com:
  Nice! Have you chatted with these folks: http://projectmeniscus.org/ ?
  (Openstack-related logging-as-a-service project)
  They list interoperation with Ceilometer as a project goal.
 
  On 9/12/13 7:06 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Folks
 
 Is anyone interested in Kibana + ElasticSearch Integration with
 ceilometer?
 # Note: This discussion is not for Havana.
 
 I have registered a BP (for IceHouse):
 https://blueprints.launchpad.net/ceilometer/+spec/elasticsearch-driver
 
 This is a demo video:
 http://www.youtube.com/watch?v=8SmA0W0hd4I&feature=youtu.be
 
 I wrote a sample storage driver for ElasticSearch in ceilometer.
 This is WIP - https://review.openstack.org/#/c/46383/
 
 This integration sounds cool to me, because if we can integrate them,
 we can use it as Log as a Service.
 
 IMO, there are some discussion points.
 
 [1] Should we add an ElasticSearch query API to ceilometer, or should
 we let users call the ElasticSearch API directly?
 
 Note that ElasticSearch has no tenant-based authentication; in that
 case we would need to integrate Keystone and ElasticSearch (or Horizon).
 
 [2] Should logs (syslog or any application log) be stored in
 Ceilometer, or should that be a new OpenStack project?
 
 Best
 Nachi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Agenda for monday's meeting @ 1600 UTC

2013-09-13 Thread Kurt Griffiths
Sorry about the typo there - next meeting is Monday, Sept. 16.

@kgriffs

On 9/13/13 5:04 PM, Allan Metts allan.me...@rackspace.com wrote:

Sent: Friday, September 13, 2013 4:11 PM

The next meeting is tomorrow, Sept. 16


-Original Message-
From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com]
Sent: Friday, September 13, 2013 4:11 PM
To: OpenStack Dev
Subject: [openstack-dev] [marconi] Agenda for monday's meeting @ 1600 UTC

The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Mondays, 1600 UTC.

The next meeting is tomorrow, Sept. 16. Everyone is welcome. However,
please take a minute to review the wiki before attending for the first
time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Triage blueprints (H3)
  * Placement service / cell architecture (vs. tag-aware sharding)
  * Audit and freeze HTTP v1 API
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and note
your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,
Kurt (kgriffs)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev