Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Jon Maron
I imagine ‘neutron’ would follow suit as well.

On Jan 29, 2014, at 9:23 AM, Trevor McKay tmc...@redhat.com wrote:

 So, assuming we go forward with this, the followup question is whether
 or not to move main_class and java_opts for Java actions into
 edp.java.main_class and edp.java.java_opts configs.
 
 I think yes.
 
 Best,
 
 Trevor
 
 On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
 On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
 Thank you for bringing this up, Trevor.
 
 EDP gets more diverse and it's time to change its model.
 I totally agree with your proposal, but one minor comment.
 Instead of the savanna. prefix in job_configs, wouldn't it be better to make it
 edp.? I think savanna. is too broad a word for this.
 
 +1, brilliant. EDP is perfect.  I was worried about the scope of
 savanna. too.
 
 And one more bureaucratic thing... I see you already started implementing
 it [1], and it is named and tracked as the new EDP workflow [2]. I think a new
 blueprint should be created for this feature to track all code changes as well
 as doc updates. By docs I mean the public Savanna docs about EDP, the REST API
 docs, and samples.
 
 Absolutely, I can make it a new blueprint.  Thanks.
 
 [1] https://review.openstack.org/#/c/69712
 [2] 
 https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
 
 Regards,
 Alexander Ignatov
 
 
 
 On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
 
 Hello all,
 
 In our first pass at EDP, the model for job settings was very consistent
 across all of our job types. The execution-time settings fit into this
 (superset) structure:
 
 job_configs = {'configs': {},  # config settings for oozie and hadoop
                'params': {},   # substitution values for Pig/Hive
                'args': []}     # script args (Pig and Java actions)
 
 But we have some things that don't fit (and probably more in the
 future):
 
 1) Java jobs have 'main_class' and 'java_opts' settings
  Currently these are handled as additional fields added to the
 structure above.  These were the first to diverge.
 
 2) Streaming MapReduce (anticipated) requires mapper and reducer
 settings (different than the mapred..class settings for
 non-streaming MapReduce)
 
 Problems caused by adding fields
 
 The job_configs structure above is stored in the database. Each time we
 add a field to the structure above at the level of configs, params, and
 args, we force a change to the database tables, a migration script and a
 change to the JSON validation for the REST api.
 
 We also cause a change for python-savannaclient and potentially other
 clients.
 
 This kind of change seems bad.
 
 Proposal: Borrow a page from Oozie and add savanna. configs
 -
 I would like to fit divergent job settings into the structure we already
 have.  One way to do this is to leverage the 'configs' dictionary.  This
 dictionary primarily contains settings for hadoop, but there are a
 number of oozie.xxx settings that are passed to oozie as configs or
 set by oozie for the benefit of running apps.
 
 What if we allow savanna. settings to be added to configs?  If we do
 that, any and all special configuration settings for specific job types
 or subtypes can be handled with no database changes and no api changes.
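As a rough illustration of the idea, Savanna could peel its own settings out of 'configs' before rendering the Oozie workflow. The helper name and the exact keys below are hypothetical, not actual Savanna code, and the thread later converges on an edp. prefix rather than savanna.:

```python
# Hypothetical sketch: split Savanna-specific settings out of the 'configs'
# dict so only real Hadoop/Oozie settings reach the generated workflow.

def split_prefixed_configs(configs, prefix='savanna.'):
    """Return (special, passthrough): prefixed keys vs. everything else."""
    special = {k: v for k, v in configs.items() if k.startswith(prefix)}
    passthrough = {k: v for k, v in configs.items() if not k.startswith(prefix)}
    return special, passthrough

configs = {
    'savanna.java.main_class': 'org.example.WordCount',  # consumed by Savanna
    'savanna.java.java_opts': '-Xmx512m',                # consumed by Savanna
    'mapred.reduce.tasks': '2',                          # rendered into the workflow
}
special, passthrough = split_prefixed_configs(configs)
```

With this split, no new database fields or API changes are needed when a job type grows a new setting — it just becomes another prefixed key.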
 
 Downside
 
 Currently, all 'configs' are rendered in the generated oozie workflow.
 The savanna. settings would be stripped out and processed by Savanna,
 thereby changing that behavior a bit (maybe not a big deal).
 
 We would also be mixing savanna. configs with config_hints for jobs,
 so users would potentially see savanna. settings mixed with oozie
 and hadoop settings.  Again, maybe not a big deal, but it might blur the
 lines a little bit.  Personally, I'm okay with this.
 
 Slightly different
 --
 We could also add a 'savanna-configs': {} element to job_configs to
 keep the configuration spaces separate.
 
 But, now we would have 'savanna-configs' (or another name), 'configs',
 'params', and 'args'.  Really? Just how many different types of values
 can we come up with? :)
 
 I lean away from this approach.
 
 Related: breaking up the superset
 -
 
 It is also the case that not every job type has every value type.
 
              Configs   Params   Args
  Hive            Y        Y       N
  Pig             Y        Y       Y
  MapReduce       Y        N       N
  Java            Y        N       Y
 
 So do we make that explicit in the docs and enforce it in the api with
 errors?
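If we did enforce it, the check could be as simple as the following sketch. The function name, the ALLOWED mapping, and the error type are illustrative, not Savanna's actual validation code:

```python
# Hypothetical sketch of enforcing the table above in API validation.
# ALLOWED maps each job type to the value types it accepts.

ALLOWED = {
    'Hive':      {'configs', 'params'},
    'Pig':       {'configs', 'params', 'args'},
    'MapReduce': {'configs'},
    'Java':      {'configs', 'args'},
}

def check_job_configs(job_type, job_configs):
    """Raise if job_configs contains a value type this job type lacks."""
    unexpected = set(job_configs) - ALLOWED[job_type]
    if unexpected:
        raise ValueError("%s jobs do not accept: %s"
                         % (job_type, sorted(unexpected)))

check_job_configs('Pig', {'configs': {}, 'params': {}, 'args': []})  # ok
```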
 
 Thoughts? I'm sure there are some :)
 
 Best,
 
 Trevor
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

[openstack-dev] [savanna] neutron floating IP assignment unexpected

2013-12-03 Thread Jon Maron
Hi,

  I have the following configuration in savanna.conf:

# If set to True, Savanna will use floating IPs to communicate
# with instances. To make sure that all instances have
# floating IPs assigned in Nova Network, set
# auto_assign_floating_ip=True in nova.conf. If Neutron is
# used for networking, make sure that all Node Groups have
# the floating_ip_pool parameter defined. (boolean value)
use_floating_ips=false

# Use Neutron or Nova Network (boolean value)
use_neutron=true

# Use network namespaces for communication (only valid to use in conjunction
# with use_neutron=True)
use_namespaces=true

  My nova.conf file DOES NOT have auto_assign_floating_ip set to True.

  My dashboard local settings file explicitly sets AUTO_ASSIGNMENT_ENABLED = 
False

  Yet, the spawned VMs are generated with a floating IP:

[root@cn082 savanna(keystone_demo)]# nova list
+--------------------------------------+----------------+--------+---------------------------------+
| ID                                   | Name           | Status | Networks                        |
+--------------------------------------+----------------+--------+---------------------------------+
| e32572ae-397b-4a61-9562-7a52fe6cd738 | dc1-master-001 | ACTIVE | private=10.0.0.14, 172.24.4.232 |
| da50a103-0f64-4b33-9bd1-586e8b1c981c | dc1-slave-001  | ACTIVE | private=10.0.0.15, 172.24.4.233 |
+--------------------------------------+----------------+--------+---------------------------------+

  Any idea how this may be the case?

-- Jon


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: [openstack-dev] [Savanna] Definition of template

2013-11-18 Thread Jon Maron
I'd like to suggest that we take a step back and identify the actual
requirement(s) we're trying to address, independent of the actual implementation
(current or otherwise).  Once the requirement is clearly stated, we can think
of viable approaches and then, lastly, how the current architecture can be
re-configured or modified to accommodate those approaches.  Right now this
conversation has the feel of a nebulous requirement being addressed with a
kludged approach.

-- Jon

On Nov 13, 2013, at 5:54 PM, Andrey Lazarev alaza...@mirantis.com wrote:

 John,
 
 I think we should either select one of the listed border approaches (no
 validation or full validation) or define some strict rules on what is allowed
 and what is not. Otherwise phrases like "at a minimum a cluster template
 would contain node groups" are speculation, and we will not be able to
 determine the right way to resolve them. I can easily suggest any number of
 concerns like those listed above (a cluster template with a TT but without a
 JT makes no sense, a cluster template should contain at least a namenode, etc.).
 
 I like the idea of no validation for templates. If I want to save anti-affinity
 params in a cluster template but don't know which node groups will be used,
 why can't I do that?
 
 Thanks,
 Andrew.
 
 
 On Wed, Nov 13, 2013 at 12:23 PM, John Speidel jspei...@hortonworks.com 
 wrote:
 I strongly agree that we should try to keep templates as flexible as possible 
 by allowing some values to be omitted and provided at a later time.  But, in 
 this case, we are talking about cluster templates without any node groups 
 being specified.  I think that at a minimum a cluster template would contain 
 node groups but could omit the node group counts which could be provided at 
 launch time.  This makes a lot of sense.  But, in my opinion, without at 
 least specifying the set of node groups in a cluster template, configuration 
 really wouldn't make sense and therefore the template would not be of 
 much/any value.
 
 
 On Wed, Nov 13, 2013 at 10:08 AM, Alexander Ignatov aigna...@mirantis.com 
 wrote:
 Hi, Andrew
 
 Agreed with your opinion. Initially, Savanna's templates approach is the
 option 1 you are talking about. This was designed at the start of the Savanna
 0.2 release cycle. It was also documented here:
 https://wiki.openstack.org/wiki/Savanna/Templates . Maybe some points are
 outdated, but the idea is the same as option 1: a user can create a cluster
 template without needing to specify all fields, for example the 'node_groups'
 field. And these fields, both required and optional, can be overwritten in the
 cluster object even if it contains 'cluster_template_id'.
 
 I see you raised this question because of patch
 https://review.openstack.org/#/c/56060/. I think it's just a bug at the
 validation level, not in the API.
 
 I also agree that we should change the UI part accordingly, at least by adding
 the ability for users to override fields set in cluster and node group
 templates during cluster creation.
 
 Regards,
 Alexander Ignatov
 
 
 
 On 12 Nov 2013, at 23:20, Andrey Lazarev alaza...@mirantis.com wrote:
 
 Hi all,
 
 I want to raise the question of what a template is. The answer to this
 question could influence UI, validation, and user experience significantly.
 I see two possible answers:
 1. A template is a simplification for object creation. It allows keeping
 common params in one place instead of specifying them each time.
 2. A template is a full description of an object. A user should be able to
 create an object from a template without specifying any params.
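Option 1 essentially treats a template as a set of defaults that the creation request completes or overrides. A minimal sketch of that merge — the helper and field names are illustrative, not Savanna's actual code:

```python
# Hypothetical sketch of "option 1": the template supplies defaults, and the
# cluster-creation request may fill in or override any of them.

def merge_template(template, request):
    """Overlay non-empty request values onto template defaults."""
    merged = dict(template)
    merged.update({k: v for k, v in request.items() if v is not None})
    return merged

template = {'plugin_name': 'vanilla',
            'node_groups': None,              # intentionally left open
            'anti_affinity': ['datanode']}
request = {'name': 'cluster-1',
           'node_groups': [{'name': 'master', 'count': 1}]}
cluster = merge_template(template, request)
```

Under this reading, an "incomplete" template is perfectly valid — the gaps are simply filled at creation time.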
 
 As I see it, the current approach is option 1, but the UI is built mostly for
 option 2. This leads to situations where a user creates an incomplete template
 (the backend allows it because of option 1) but can't use it later (the UI
 doesn't allow working with incomplete templates).
 
 Let's define a common vision on how we will treat templates and document it
 somewhere.
 
 My opinion is that we should proceed with the option 1 and change UI 
 accordingly.
 
 Thanks,
 Andrew
 
 
 

Re: [openstack-dev] [savanna] Loading of savanna.conf properties

2013-11-10 Thread Jon Maron
The strange part is that I do have os_auth_host configured in savanna.conf,
pointing to the IP address of the keystone server. That value appears to be
ignored. I guess I'll have to set up a debugging session.

Going Mobile


 On Nov 10, 2013, at 4:41 PM, Matthew Farrellee m...@redhat.com wrote:
 
 Jon,
 
 I ran into this issue and we discussed on IRC a couple weeks ago.
 
 At least for the Vanilla plugin, you have to supply an instance-infra-routable
 addr/IP for os_auth_host in your savanna.conf.
 
 https://bugs.launchpad.net/savanna/+bug/1244309
 
 Ultimately I'd like to add some code that uses the os_auth_host value to 
 bootstrap and do a service endpoint lookup to get a real endpoint for 
 keystone.
 
 Best,
 
 
 matt
 
 On 11/11/2013 12:38 AM, Dmitry Mescheryakov wrote:
 I've got two guesses why it does not work properly:
 
 1. Savanna does not consume the config file at all, and the defaults (by
 coincidence) do not work for swift only.
 
 2. You invoke swift_helper.get_swift_configs() at the top level of some
 module, like this:
   SWIFT_CONFIG = swift_helper.get_swift_configs()
 oslo.config is not initialized at module load time (that is true for
 any module), which is why you see the defaults instead of the supplied
 values.
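A self-contained illustration of that import-time pitfall — MiniConf below is a toy stand-in for oslo.config's global CONF object, and the option name os_auth_host follows this thread; real code would use oslo.config itself:

```python
# Toy stand-in for oslo.config's CONF: defaults are registered at import
# time, but the config file is only parsed later (in main()).

class MiniConf(object):
    def __init__(self, defaults):
        self._values = dict(defaults)

    def load_config_file(self, overrides):
        # In oslo.config this happens when main() calls CONF(argv),
        # well after all modules have been imported.
        self._values.update(overrides)

    def __getattr__(self, name):
        values = object.__getattribute__(self, '_values')
        try:
            return values[name]
        except KeyError:
            raise AttributeError(name)

CONF = MiniConf({'os_auth_host': '127.0.0.1'})  # registered default

# BAD: evaluated at module import time -> captures the default forever.
AUTH_HOST_AT_IMPORT = CONF.os_auth_host

def get_auth_host():
    # GOOD: evaluated at call time, after savanna.conf has been parsed.
    return CONF.os_auth_host

# Later, in main(): the config file is finally read.
CONF.load_config_file({'os_auth_host': '172.18.0.81'})
```

The module-level constant still holds 127.0.0.1 after the config file loads, while the function sees the configured value — exactly the symptom described above.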
 
 Still, if you share the code, some other ideas might pop up.
 
 Dmitry
 
 
 2013/11/10 Jon Maron jma...@hortonworks.com
 
I'm not sure that would help - all my code changes are runtime
changes associated with EDP actions (i.e. the cluster and its
configuration are already set).  I was looking for help in trying to
ascertain why the swift_helper would not return the savanna.conf
value at provisioning time.
 
-- Jon
 
On Nov 10, 2013, at 3:53 AM, Dmitry Mescheryakov dmescherya...@mirantis.com wrote:
 
Hey Jon,
 
Can you post your code as a work in progress review? Maybe we can
perceive from the code what is wrong.
 
Thanks,
 
Dmitry
 
 
2013/11/10 Jon Maron jma...@hortonworks.com
 
Hi,
 
  I am debugging an issue with the swift integration - I see
os_auth_url with a value of 127.0.0.1, indicating that at the
time the swift helper is invoked the default value for auth
host is being leveraged rather than the value in the
savanna.conf file.  Any ideas how that may happen?
 
More detail:
 
  We are invoking the swift_helper to configure the swift
associated properties and ending up with the following in
core-site.xml:
 
  <property>
    <name>fs.swift.service.savanna.auth.url</name>
    <value>http://127.0.0.1:35357/v2.0/tokens/</value>
  </property>
 
  Which, as expected, yields the following when running on a
tasktracker VM:
 
org.apache.pig.impl.plan.VisitorException: ERROR 6000:
file test.pig, line 7, column 0 Output Location Validation
Failed for: 'swift://jmaron.savanna/output More info to follow:
POST http://127.0.0.1:35357/v2.0/tokens/ failed on exception:
java.net.ConnectException: Connection refused; For more
details see: http://wiki.apache.org/hadoop/ConnectionRefused
 
-- Jon
 
 
 

Re: [openstack-dev] [savanna] using keystone client

2013-10-08 Thread Jon Maron

On Oct 7, 2013, at 10:02 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Mon, Oct 7, 2013 at 5:57 PM, Jon Maron jma...@hortonworks.com wrote:
 Hi,
 
    I'm trying to use the keystone client code in savanna/utils/openstack but
  my attempt to use it yields:
 
  'Api v2.0 endpoint not found in service identity'
 
 
 This sounds like the service catalog for keystone itself either isn't 
 configured, or isn't configured properly (with /v2.0/ endpoints). What does 
 your `keystone service-list` and `keystone endpoint-list` look like?

they look fine:

[root@cn081 ~(keystone_admin)]# keystone endpoint-list
| id | region | publicurl | internalurl | adminurl | service_id |
| 1d093399f00246b895ce8507c1b24b7b | RegionOne | http://172.18.0.81:9292 | http://172.18.0.81:9292 | http://172.18.0.81:9292 | dce5859bb86e4d76a3688d2bf70cad33 |
| 48f8a5bcde0747c08b149f36144a018d | RegionOne | http://172.18.0.81:8080 | http://172.18.0.81:8080 | http://172.18.0.81:8080 | 8e83541c4add45058a83609345f0f7f5 |
| 6223abce129948539d413adb0f392f66 | RegionOne | http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s | http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s | http://172.18.0.81:8080/ | fe922aac92ac4e048fb02346a3176827 |
| 64740640bb824c2493cc456c76d9c4e8 | RegionOne | http://172.18.0.81:8776/v1/%(tenant_id)s | http://172.18.0.81:8776/v1/%(tenant_id)s | http://172.18.0.81:8776/v1/%(tenant_id)s | a13bf1f4319a4b78984cbf80ce4a1879 |
| 8948845ea83940f7a04f2d6ec35da7ab | RegionOne | http://172.18.0.81:8774/v2/%(tenant_id)s | http://172.18.0.81:8774/v2/%(tenant_id)s | http://172.18.0.81:8774/v2/%(tenant_id)s | 4480cd65dc6a4858b5b237cc4c30761e |
| cd1420cfcc59467ba76bfc32f79f9c77 | RegionOne | http://172.18.0.81:9696/ | http://172.18.0.81:9696/ | http://172.18.0.81:9696/ | 399854c740b649a6935d6568d3ffe497 |
| d860fe39b41646be97582de9cef8c91c | RegionOne | http://172.18.0.81:5000/v2.0 | http://172.18.0.81:5000/v2.0 | http://172.18.0.81:35357/v2.0 | b4b2cc6d2db2493eafe2ccbb649b491e |
| edc75652965a4bd2854c194c213ea395 | RegionOne | http://172.18.0.81:8773/services/Cloud | http://172.18.0.81:8773/services/Cloud | http://172.18.0.81:8773/services/Admin | dea2442916144ef18cf64d2111f1d906 |
[root@cn081 ~(keystone_admin)]# keystone service-list
| id | name | type | description |
| a13bf1f4319a4b78984cbf80ce4a1879 | cinder | volume | Cinder Service |
| dce5859bb86e4d76a3688d2bf70cad33 | glance | image | Openstack Image Service |
| b4b2cc6d2db2493eafe2ccbb649b491e | keystone | identity | OpenStack Identity Service |
| 4480cd65dc6a4858b5b237cc4c30761e | nova | compute | Openstack Compute Service |
| dea2442916144ef18cf64d2111f1d906 | nova_ec2 | ec2 | EC2 Service |
| 399854c740b649a6935d6568d3ffe497 | quantum | network | Quantum Networking Service |
| fe922aac92ac4e048fb02346a3176827 | swift | object-store | Openstack Object-Store Service |
| 8e83541c4add45058a83609345f0f7f5 | swift_s3 | s3 | Openstack S3 Service |


  
   A code sample:
  
  from savanna.utils.openstack import keystone
  
  . . .
    service_id = next((service.id for service in
                       keystone.client().services.list()
                       if 'quantum' == service.name), None)
 
 I don't really know what the context of this code is, but be aware that it 
 requires admin access to keystone and is not interacting with a 
 representation of the catalog that normal users see.

I'm

[openstack-dev] [savanna] using keystone client

2013-10-07 Thread Jon Maron
Hi,

  I'm trying to use the keystone client code in savanna/utils/openstack but my 
attempt to use it yields:

 'Api v2.0 endpoint not found in service identity'

  A code sample:

from savanna.utils.openstack import keystone

. . .
  service_id = next((service.id for service in
   keystone.client().services.list()
   if 'quantum' == service.name), None)

  Thanks for the help!

-- Jon





[openstack-dev] [savanna] neutron and private networks

2013-10-03 Thread Jon Maron
Hi,

  I'd like to raise an issue in the hopes of opening some discussion on the IRC 
chat later today:

We see a critical requirement to support the creation of a savanna cluster 
with neutron networking while leveraging a private network (i.e. without the 
assignment of public IPs) - at least during the provisioning phase.  So the 
current neutron solution coded in the master branch appears to be insufficient 
(it is dependent on the assignment of public IPs to launched instances), at 
least in the context of discussions we've had with users.

  We've been experimenting and trying to understand the viability of such an 
approach and have had some success establishing SSH connections over a private 
network using paramiko etc.  So as long as there is a mechanism to ascertain 
the namespace associated with the given cluster/tenant (configuration?  neutron 
client?) it appears that the modifications to the actual savanna code for the 
instance remote interface (the SSH client code etc) will be fairly small.  The 
namespace selection could potentially be another field made available in the 
dashboard's cluster creation interface.

-- Jon


  



[openstack-dev] [savanna] multi-tenant support

2013-10-02 Thread Jon Maron
Hi,

  A couple of questions related to multi-tenancy:

  1)  Does savanna currently support multiple concurrent cluster creation 
requests from multiple users/tenants?

  2)  What is the significance of the admin credentials in savanna.conf?  Is 
there support for multiple tenants via the savanna dashboard?  How are those 
configured credentials leveraged in the context of a multi tenant environment?

-- Jon





[openstack-dev] [savanna] unit test failures/errors

2013-09-25 Thread Jon Maron
Hi,

  I can't seem to get a clean run from the py27 unit tests.  On the surface it 
doesn't seem that my current commit has anything to do with the code paths 
tested.  I've tried rebuilding my virtual env as well as rebasing, but the issue 
hasn't been resolved.  I've also cloned the repository into a different 
directory and run the tests with the same failure results (further proving my 
commit has nothing to do with the failures).  I've started debugging through 
this code to try to ascertain the issue, but if anyone can comment on what may 
be the underlying issue I would appreciate it.

==
ERROR: test_cluster_create_cluster_tmpl_node_group_mixin 
(savanna.tests.unit.service.validation.test_cluster_create_validation.TestClusterCreateFlavorValidation)
--
Traceback (most recent call last):
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/test_cluster_create_validation.py", line 206, in setUp
    api.plugin_base.setup_plugins()
  File "/Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py", line 197, in setup_plugins
    PLUGINS = PluginManager()
  File "/Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py", line 110, in __init__
    self._load_all_plugins()
  File "/Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py", line 129, in _load_all_plugins
    self.plugins[plugin_name] = self._get_plugin_instance(plugin_name)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py", line 148, in _get_plugin_instance
    plugin_path = CONF['plugin:%s' % plugin_name].plugin_class
  File "/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py", line 1645, in __getitem__
    return self.__getattr__(key)
  File "/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py", line 1641, in __getattr__
    raise NoSuchOptError(name)
NoSuchOptError: no such option: plugin:vanilla
  begin captured logging  
savanna.plugins.base: DEBUG: List of requested plugins: []
-  end captured logging  -

==
FAIL: test_cluster_create_v_cluster_configs 
(savanna.tests.unit.service.validation.test_cluster_create_validation.TestClusterCreateValidation)
--
Traceback (most recent call last):
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/test_cluster_create_validation.py", line 153, in test_cluster_create_v_cluster_configs
    self._assert_cluster_configs_validation(True)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 329, in _assert_cluster_configs_validation
    Plugin's applicable target 'HDFS' doesn't
  File "/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 227, in _assert_create_object_validation
    self._assert_calls(bad_req, bad_req_i)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 211, in _assert_calls
    self.assertEqual(mock.call_args[0][0].message, call_info[2])
AssertionError: Plugin doesn't contain applicable target 'HDFS' != Plugin's applicable target 'HDFS' doesn't contain config with name 's'
'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'' = '%s != 
%s' % (safe_repr(Plugin doesn't contain applicable target 'HDFS'), 
safe_repr(Plugin's applicable target 'HDFS' doesn't contain config with name 
's'))
'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'' = 
self._formatMessage('Plugin doesn\'t contain applicable target \'HDFS\' != 
Plugin\'s applicable target \'HDFS\' doesn\'t contain config with name 
\'s\'', 'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'')
  raise self.failureException('Plugin doesn\'t contain applicable target 
 \'HDFS\' != Plugin\'s applicable target \'HDFS\' doesn\'t contain config 
 with name \'s\'')

  begin captured logging  
savanna.plugins.base: DEBUG: List of requested plugins: ['vanilla', 'hdp']
savanna.plugins.base: INFO: Plugin 'vanilla' defined and loaded
savanna.plugins.base: INFO: Plugin 'hdp' defined and loaded
-  end captured logging  -

==
FAIL: 

Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-11 Thread Jon Maron

On Sep 10, 2013, at 9:42 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Jon Maron jma...@hortonworks.com wrote on 09/10/2013 08:50:23 PM:
 
  From: Jon Maron jma...@hortonworks.com 
  To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, 
  Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org 
  Date: 09/10/2013 08:55 PM 
  Subject: Re: [openstack-dev] [savanna] Program name and Mission statement 
  
  Openstack Big Data Platform 
 
 Let's see if you mean that.  Does this project aim to cover big data things 
 besides MapReduce?  Can you give examples of other things that are in scope? 

Hive, Pig, data storage, Oozie, etc.

 
 Thanks, 
 Mike




[openstack-dev] [savanna] swift integration - optional?

2013-09-11 Thread Jon Maron
Hi,

  I noticed that the swift integration is optionally enabled via a 
configuration property?  Is there a reason for not making it available as a 
base, feature of the cluster (i.e. simply allowing access to swift should it be 
required)?  What would be a scenario in which it would be beneficial to 
explicitly disable it?

-- Jon





Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-10 Thread Jon Maron
Openstack Big Data Platform


On Sep 10, 2013, at 8:39 PM, David Scott david.sc...@cloudscaling.com wrote:

 I vote for 'Open Stack Data'
 
 
 On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo zhongyue@intel.com wrote:
 Why not OpenStack MapReduce? I think that pretty much says it all?
 
 
 On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell g...@glenc.io wrote:
 performant isn't a word. Or, if it is, it means having performance. I 
 think you mean high-performance.
 
 
 On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com wrote:
 Rough cut -
 
 Program: OpenStack Data Processing
 Mission: To provide the OpenStack community with an open, cutting edge, 
 performant and scalable data processing stack and associated management 
 interfaces.
 
 
 On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:
 It sounds too broad IMO. Looks like we need to define Mission Statement
 first.
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com
 mailto:akuznet...@mirantis.com wrote:
 
 My suggestion OpenStack Data Processing.
 
 
 On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
 slukja...@mirantis.com mailto:slukja...@mirantis.com wrote:
 
 Hi folks,
 
 due to the Incubator Application we should prepare Program name
 and Mission statement for Savanna, so, I want to start mailing
 thread about it.
 
 Please, provide any ideas here.
 
 P.S. List of existing programs:
 https://wiki.openstack.org/wiki/Programs
 P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 
 
 
 
 
 
 
 
 
 -- 
 Glen Campbell
 http://glenc.io • @glenc
 
 
 
 
 -- 
 Intel SSG/STOD/DCST/CIT
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai, 
 China
 +862161166500
 
 



Re: [openstack-dev] [savanna] cluster scaling on the 0.2 branch

2013-09-03 Thread Jon Maron
Found an error in the HDP validation code affecting the node count of the 
additional (new) node group.  Looking at the savanna core code clarified how 
the node groups (both existing and additional) were being scaled up, and that 
pointed me to the issue.

-- Jon

On Aug 30, 2013, at 3:47 PM, Jon Maron jma...@hortonworks.com wrote:

 I've done some additional debugging/testing, and the issue is definitely in 
 the savanna provisioning code.
 
 I have verified that the correct inputs are provided to the validate_scaling 
 method invocation, and that those references remain unaltered.  The scaling 
 request involves adding one node of a new node group named 'another', and 
 adding one node to the existing 'slave' node group:
 
 cluster.node_groups:
 
 [savanna.db.models.NodeGroup[object at 107b15f50] 
 {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 857213), 
 updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 857222), 
 id=u'effcc91c-d0de-4508-84ba-9cedc7e321f6', name=u'master', flavor_id=u'3', 
 image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
 u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
 u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
 volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
 cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
 node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}, 
 savanna.db.models.NodeGroup[object at 107ca1750] 
 {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 860178), 
 updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 860184), 
 id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515', name=u'slave', flavor_id=u'3', 
 image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
 u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
 node_configs={}, volumes_per_node=0, volumes_size=10, 
 volume_mount_prefix=u'/volumes/disk', count=1, 
 cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
 node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}]
 
 additional:
 
 {savanna.db.models.NodeGroup[object at 107cc77d0] {created=None, 
 updated=None, id=None, name=u'another', flavor_id=u'3', image_id=None, 
 node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
 u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
 volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
 count=1, cluster_id=None, 
 node_group_template_id=u'f7f2ddc3-18ca-439f-9c08-570ff9307baf'}: 1}
 
 existing:
 
 {u'slave': 2}
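 
 In other words, the request asks for 'slave' to grow from 1 to 2 and for a 
 new one-node 'another' group. The per-group delta the plugin should see can 
 be computed as follows (an illustrative sketch, not savanna code):
 
```python
# Compute how many instances each group gains in a scaling request.
# current_counts: {name: count} before scaling,
# existing: {name: new_total} for resized groups,
# additional: {name: count} for brand-new groups. (Illustrative shapes.)

def expected_new_instances(current_counts, existing, additional):
    delta = {}
    for name, new_total in existing.items():
        added = new_total - current_counts.get(name, 0)
        if added > 0:
            delta[name] = added
    delta.update(additional)
    return delta

# For the request above:
# expected_new_instances({'master': 1, 'slave': 1},
#                        {'slave': 2}, {'another': 1})
# -> {'slave': 1, 'another': 1}, i.e. two new instances in total.
```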
 
 Once the scale_cluster() call is made, the cluster does have the additional 
 node group, but the list of instances isn't correct:
 
 cluster.node_groups (note the addition of the 'another' node group):
 
 - [savanna.db.models.NodeGroup[object at 107c9cad0] 
 {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 857213), 
 updated=datetime.datetime(2013, 8, 30, 19, 20, 49, 857222), 
 id=u'effcc91c-d0de-4508-84ba-9cedc7e321f6', name=u'master', flavor_id=u'3', 
 image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
 u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
 u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
 volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
 cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
 node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}, 
 - savanna.db.models.NodeGroup[object at 107c9cc90] 
 {created=datetime.datetime(2013, 8, 30, 19, 20, 49, 860178), 
 updated=datetime.datetime(2013, 8, 30, 19, 34, 51, 39463), 
 id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515', name=u'slave', flavor_id=u'3', 
 image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
 u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
 node_configs={}, volumes_per_node=0, volumes_size=10, 
 volume_mount_prefix=u'/volumes/disk', count=2, 
 cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
 node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}, 
 - savanna.db.models.NodeGroup[object at 107cc7290] 
 {created=datetime.datetime(2013, 8, 30, 19, 34, 49, 309577), 
 updated=datetime.datetime(2013, 8, 30, 19, 34, 49, 309584), 
 id=u'b8ea4e37-68d1-471d-9ddf-b74c2c533892', name=u'another', flavor_id=u'3', 
 image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', 
 u'GANGLIA_MONITOR', u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], 
 node_configs={}, volumes_per_node=0, volumes_size=10, 
 volume_mount_prefix=u'/volumes/disk', count=1, 
 cluster_id=u'd3052854-8b56-47b6-b3c1-612750aab612', 
 node_group_template_id=u'f7f2ddc3-18ca-439f-9c08-570ff9307baf'}]
 
 However, only the instance for the existing node group is passed in:
 
 [savanna.db.models.Instance[object at 107cc9f50] 
 {created=datetime.datetime(2013, 8, 30, 19, 34, 50, 727467), 
 updated=datetime.datetime(2013, 8, 30, 19, 35, 36, 853529), extra=None, 
 node_group_id=u'b56a2e69-58d9-4e95-a54f-d9b994bc8515

[openstack-dev] [savanna] cluster scaling on the 0.2 branch

2013-08-28 Thread Jon Maron
Hi,

  I am trying to backport the HDP scaling implementation to the 0.2 branch and 
have run into a number of differences.  At this point I am trying to figure out 
whether what I am observing is intended behavior or a symptom of a bug.

  For a case in which I am adding one instance to an existing node group, as 
well as an additional node group with one instance, I am seeing the following 
arguments passed to the scale_cluster method of the plugin:

- A cluster object that contains the following set of node groups:

[savanna.db.models.NodeGroup[object at 10d8bdd90] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 208003), 
updated=datetime.datetime(2013, 8, 28, 21, 50, 5, 208007), 
id=u'd6fadb7a-367b-41ed-989c-af40af2d3e3d', name=u'master', flavor_id=u'3', 
image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}, 
savanna.db.models.NodeGroup[object at 10d8bd950] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 210962), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 1, 728402), 
id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', name=u'slave', flavor_id=u'3', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=2, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}, 
savanna.db.models.NodeGroup[object at 10d897f90] 
{created=datetime.datetime(2013, 8, 28, 22, 4, 59, 871379), 
updated=datetime.datetime(2013, 8, 28, 22, 4, 59, 871388), 
id=u'880e1b17-f4e4-456d-8421-31bf8ef1fb65', name=u'slave2', flavor_id=u'1', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=1, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'd67da924-792b-4558-a5cb-cb97bba4107f'}]
 
  So it appears that the cluster is already configured with the three node 
groups (two original, one new) and the associated counts.

- The list of instances.  However, whereas the master branch passed me two 
instances (one representing the addition to the existing group, one 
representing the new instance in the added node group), on the 0.2 branch I am 
only seeing one instance passed (the one being added to the existing node 
group):

(Pdb) p instances
[savanna.db.models.Instance[object at 10d8bf050] 
{created=datetime.datetime(2013, 8, 28, 22, 5, 1, 725343), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 47, 286665), extra=None, 
node_group_id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', 
instance_id=u'377694a2-a589-479b-860f-f1541d249624', 
instance_name=u'scale-slave-002', internal_ip=u'192.168.32.4', 
management_ip=u'172.18.3.5', volumes=[]}]
(Pdb) p len(instances)
1

  I am not certain why I am not getting a list of all the instances being 
added to the cluster, as I do on the master branch.  Is this intended?  How do 
I obtain the instance reference for the instance being added to the new node 
group?
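
Until that discrepancy is resolved, one possible workaround is to recover the 
missing instances from the cluster object itself. A sketch, assuming each node 
group exposes its instances via an `instances` attribute (an assumption; the 
model dumps above show only scalar fields):

```python
# Collect instances that belong to the named node groups but were not
# included in the `instances` argument passed to scale_cluster.
# Attribute names (`name`, `instances`, `instance_id`) are assumptions
# based on the model dumps in this thread.

def instances_for_groups(cluster_node_groups, group_names, passed_instances):
    passed_ids = {i.instance_id for i in passed_instances}
    missing = []
    for ng in cluster_node_groups:
        if ng.name in group_names:
            for inst in getattr(ng, 'instances', []):
                if inst.instance_id not in passed_ids:
                    missing.append(inst)
    return missing
```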

-- Jon