Re: [openstack-dev] [Sahara] Questions about how Sahara use trust ?

2015-07-10 Thread Andrew Lazarev
Hi Chen,

As I remember, proxy users were added for security reasons. When one user
creates a cluster in Sahara, he should not get access to other users' data.

Thanks,
Andrew.

On Thu, Jul 9, 2015 at 11:12 PM, Li, Chen chen...@intel.com wrote:

  Hi Sahara guys,





 When sahara creates a transient cluster, it creates a trust with the sahara
 admin user.


 https://github.com/openstack/sahara/blob/master/sahara/service/ops.py#L239-L240


 https://github.com/openstack/sahara/blob/master/sahara/service/trusts.py#L79



 When sahara deals with swift, it creates a trust too, but:

 sahara admin user => create a proxy domain => set in sahara.conf

 => sahara creates a proxy user in the domain.

 => creates a trust with the proxy user.

 https://github.com/openstack/sahara/blob/master/sahara/utils/proxy.py#L110

 https://github.com/openstack/sahara/blob/master/sahara/utils/proxy.py#L265
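To make the two delegation flows concrete, here is a minimal, self-contained sketch of the pattern being discussed. All names here (`FakeKeystone`, the `proxy_user_domain_name` key, the role names) are hypothetical stand-ins for illustration; the real code lives in the service/trusts.py and utils/proxy.py files linked above.

```python
class FakeKeystone:
    """Stand-in for the identity service -- just enough to show the flow."""

    def __init__(self):
        self.users = []
        self.trusts = []

    def create_user(self, name, domain):
        user = {"name": name, "domain": domain}
        self.users.append(user)
        return user

    def create_trust(self, trustor, trustee, roles):
        trust = {"trustor": trustor, "trustee": trustee, "roles": roles}
        self.trusts.append(trust)
        return trust


def transient_cluster_trust(keystone, cluster_owner):
    # Transient clusters: the owner delegates directly to the admin user so
    # the admin can operate (and later tear down) the cluster on his behalf.
    return keystone.create_trust(trustor=cluster_owner,
                                 trustee="sahara-admin",
                                 roles=["member"])


def swift_access_trust(keystone, job_owner, conf):
    # Swift access: a throwaway proxy user is created in the proxy domain
    # configured in sahara.conf, and the job owner delegates only to that
    # user -- so credentials handed to the cluster never expose another
    # user's data.
    domain = conf["proxy_user_domain_name"]
    proxy = keystone.create_user(name=job_owner + "-proxy", domain=domain)
    return keystone.create_trust(trustor=job_owner,
                                 trustee=proxy["name"],
                                 roles=["member"])
```

The security point from the answer above shows up in the second function: the trust is scoped to a single throwaway trustee, not to a shared admin identity.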





 My questions are:

 Why not use a proxy user for transient clusters?

 Or, why is a proxy user needed for swift instead of using the sahara admin
 user directly?



 Looking forward to your reply.





 Thanks.

 -chen

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Sahara] Difference between Sahara and CloudBreak

2015-06-15 Thread Andrew Lazarev
Hi Jay,

Cloudbreak is a Hadoop installation tool driven by Hortonworks. The main
difference from Sahara is the point of control. In the Hortonworks world you
have Ambari and different platforms (AWS, OpenStack, etc.) to run Hadoop. From
Sahara's point of view, you have an OpenStack cluster and want to control
everything from Horizon (Hadoop of any vendor, Murano apps, etc.).

So:
If you are tied to Hortonworks, spend most of your working time in Ambari, and
run Hadoop on different types of clouds - choose CloudBreak.
If you have an OpenStack infrastructure and want to run Hadoop on top of it -
choose Sahara.

Thanks,
Andrew.

On Mon, Jun 15, 2015 at 9:03 AM, Jay Lau jay.lau@gmail.com wrote:

 Hi Sahara Team,

 Just noticed that CloudBreak (https://github.com/sequenceiq/cloudbreak)
 also supports running on top of OpenStack. Can anyone show me the
 differences between Sahara and CloudBreak when both of them use OpenStack
 as the infrastructure manager?

 --
 Thanks,

 Jay Lau (Guangya Liu)



Re: [openstack-dev] [surge] Introducing Surge - rapid deploy/scale stream processing systems on OpenStack

2015-05-18 Thread Andrew Lazarev
As I see it, Surge pretty much replicates Sahara functionality running in
one-process-per-host mode. Sahara currently supports many more features.

Andrew.

On Fri, May 15, 2015 at 10:38 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Fri, May 15, 2015 at 10:13 AM, Debojyoti Dutta ddu...@gmail.com
 wrote:

 Hi,

 It gives me great pleasure to introduce Surge - a system to rapidly
 deploy and scale a stream processing system on OpenStack. It leverages
 Vagrant and Ansible, and supports both OpenStack as well as the local mode
 (with VirtualBox).

 https://github.com/CiscoSystems/surge


 I see you support Storm and Kafka.

 How is this different from Sahara's Storm plugin?


 https://github.com/openstack/sahara/blob/45045d918f363fa5763cde700561434345017661/setup.cfg#L47

 And I see Sahara is exploring Kafka support:
 https://blueprints.launchpad.net/sahara/+spec/cdh-kafka-service


 Hope to see a lot of pull requests and comments.

 thx
 -Debo~



Re: [openstack-dev] [Sahara] improve oozie engine for common lib management

2015-05-18 Thread Andrew Lazarev
I think it could be useful and pretty easy to implement. Feel free to file a
blueprint + spec.

Andrew.

On Mon, May 11, 2015 at 9:49 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi,

 do you think it could be implemented based on job binaries? It sounds like
 it's a type of job binary that should always be uploaded to Oozie for the
 tenant where it lives. (A bit crazy, but could be useful.)

 Thanks.

 On Mon, May 11, 2015 at 6:39 PM, lu jander juvenboy1...@gmail.com wrote:

 Hi, All
 Currently, the oozie share lib is not well known and can hardly be used by
 users, so I think we can make it less oozie-specific and more friendly for
 users. It can be used for running jobs that depend on third-party libs; if
 many jobs use the same libs, the oozie share lib can serve them as a common
 shared lib.

 here is the bp,
 https://blueprints.launchpad.net/sahara/+spec/improve-oozie-share-lib
 I will write a spec soon, after the scheduled EDP jobs bp.





 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



Re: [openstack-dev] Sahara HA discussion

2015-05-13 Thread Andrew Lazarev
Hi Lu,

The topic in https://etherpad.openstack.org/p/sahara-liberty-proposed-sessions
is about HA of Sahara itself - how to recover after an openstack controller
failure. It doesn't cover HA of clusters created by Sahara.

Thanks,
Andrew.


On Sun, Apr 26, 2015 at 10:44 PM, lu jander juvenboy1...@gmail.com wrote:

 Hi, Sergey

 we are in the same phase as you. I have noticed that there is a topic in
 https://etherpad.openstack.org/p/sahara-liberty-proposed-sessions

 Currently we have decided to do HA at the service level (HDFS etc.); here is
 the doc:
 http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/admin_ha.html

 But I have heard that you will focus on node-level HA? For example,
 rebuilding a node when one node has failed?

 2015-04-23 16:33 GMT+08:00 Sergey Reshetnyak sreshetn...@mirantis.com:

 Hi Lu

 I'm going to start working on HA support in Sahara for the HDP and CDH
 plugins. So far I haven't created specs or blueprints about HA, and I don't
 have code for HA support yet.
 When are you going to start implementing HA for CDH?

 Thanks
 Sergey

 2015-04-20 4:06 GMT+03:00 Lu, Huichun huichun...@intel.com:

  Hi Sergey

 At the last IRC meeting, I heard that you are currently working on HA for
 CDH and HDP. By chance, we just raised a bp about HA - so do you have any
 bp or spec about this topic? I think this topic is interesting; we
 can have some discussion.

 https://blueprints.launchpad.net/sahara/+spec/cdh-ha-support





 thx Sergey ^^





Re: [openstack-dev] [Sahara] Is it possible to do instance scale for a node in a living cluster ?

2015-03-23 Thread Andrew Lazarev
Hi Chen,

1.   Resize the master node, using a command like “nova resize” to add
more cpu & memory and other resources to this single instance.
Instance resize is not supported at the Sahara level. But you can always do
that at the nova level. Sahara doesn't use any specific information about the
flavor (except disk mappings: swap, etc.). How Hadoop reacts to a hardware
change fully depends on the distribution used.

However, if you need to resize scalable processes (e.g. datanode) you can
create a new instance with a larger flavor and delete the old one. Sahara
supports such an operation out of the box. Of course, processes that live in a
single copy can't be scaled in this manner.

2.   Split processes on the master node across several nodes.
As in the previous item, this can be done only for scalable processes. Each
Sahara plugin defines which processes are scalable and which are not.

Another example, I already have a big cluster, and now I want to enable a
new service on it.
1.   I would like to start a new node for the new service and add the
new node into my cluster.
You can easily do that. Cluster scaling in Sahara is not limited to the
nodegroups a cluster already has. You can add any new nodegroups to the cluster.

2.   Or, just start the new service on a node which is already in the
cluster.
This is not supported. But you can start a node with both the old and new
services on it and delete the old node. As I said before, this can be done
only for scalable processes.

Currently, Sahara starts clusters and manages all nodes based on “templates”,
so everything on a “living” cluster has to be pre-defined.
This is not true. You can add whatever you like as new nodegroups.

So, my question here is:
 Is it possible for Sahara to do things like that ?
Sahara supports scaling by adding and removing nodes. It doesn't support
changing existing nodes.

 Would Sahara want to support things like this ?
Changing existing nodes totally makes sense (like modifying the heat
stack), but looks fairly involved. Plugins would need to know how to
change existing services. But for hadoop distributions with managers
(Ambari for HDP, Cloudera Manager for CDH) it should be easy.

 If yes, any plans in the past, in the future?
 If not, any special reasons ??
As I remember, there were no such plans because you can always get the desired
configuration by adding/removing nodes. If you need a master node upgrade,
you can always create a new cluster and migrate data to it.
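The add-new/remove-old pattern described above can be sketched as building a scale request: add a node group with a bigger flavor and the same processes, then shrink the old group. The payload shape below is illustrative, loosely inspired by Sahara's scaling API rather than copied from it.

```python
def build_scale_request(cluster, new_flavor):
    """Replace the 'worker' group with a larger-flavored one."""
    old = next(g for g in cluster["node_groups"] if g["name"] == "worker")
    return {
        # Add a new group with the same process set but a bigger flavor.
        "add_node_groups": [{
            "name": "worker-large",
            "flavor_id": new_flavor,
            "node_processes": old["node_processes"],
            "count": old["count"],
        }],
        # Shrink the old group to zero once the new nodes have joined.
        "resize_node_groups": [{"name": "worker", "count": 0}],
    }

cluster = {"node_groups": [
    {"name": "worker", "flavor_id": "m1.small",
     "node_processes": ["datanode", "nodemanager"], "count": 4},
]}
req = build_scale_request(cluster, "m1.large")
```

As the answers above note, this only works for scalable processes; single-copy master processes can't be replaced this way.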

Thanks,
Andrew.

On Sun, Mar 22, 2015 at 7:03 PM, Li, Chen chen...@intel.com wrote:

  Hi Sahara,



 Recently, I learned that Sahara supports node scale-up & scale-down, which
 means “scale” has been limited to the number of nodes.

 Is it possible to do an “instance scale”?



 For example, I build a small cluster at first: several slave nodes
 (running datanode & nodemanager) and a single master node (running all
 other processes).

 I keep increasing the number of slave nodes, and at some point my master
 node becomes the performance bottleneck of the whole cluster.



 In this case, I would like to do several things, such as:

 1.   Resize the master node, using a command like “nova resize” to add
 more cpu & memory and other resources to this single instance.

 2.   Split processes on the master node across several nodes.



 I think it makes sense to users.

 Or would the whole “performance bottleneck” stuff never happen in the real
 world???



 Another example, I already have a big cluster, and now I want to enable a
 new service on it.

 1.   I would like to start a new node for the new service and add the
 new node into my cluster.

 2.   Or, just start the new service on a node which is already in the
 cluster.



 Currently, Sahara starts clusters and manages all nodes based on
 “templates”, so everything on a “living” cluster has to be pre-defined.

 The things above break the whole “pre-defined” model.



 So, my question here is:

  Is it possible for Sahara to do things like that ?



  Would Sahara want to support things like this ?

  If yes, any plans in the past, in the future?

  If not, any special reasons ??





 Looking forward to your reply.



 Thanks.

 -chen









Re: [openstack-dev] About Sahara EDP New Ideas for Liberty

2015-03-20 Thread Andrew Lazarev
Hi Weiting,

1. Add a schedule feature to run the jobs on time:
This request comes from customers: they usually run a job at a specific time
every day. So it would be great if there were a scheduler to help arrange
regular jobs to run.
Looks like a great feature, and it should be quite easy to implement. Feel
free to create spec for that.
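The core of such a scheduler is just computing the next firing time for a daily slot. A minimal sketch of that logic, purely for illustration (`next_run` is a hypothetical helper, not a proposed Sahara interface):

```python
from datetime import datetime, time, timedelta

def next_run(now, run_at):
    """Next datetime at wall-clock time run_at, strictly after now."""
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot has already passed
    return candidate

# e.g. a daily job at 09:30 checked at 10:00 fires tomorrow morning
nxt = next_run(datetime(2015, 3, 20, 10, 0), time(9, 30))
```

A periodic task would compare `next_run(...)` against the current time and submit the stored EDP job when the slot arrives.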

2. A more complex workflow design in Sahara EDP:
Current EDP only provides one job running on one cluster.
Yes. And the ability to run several jobs in one oozie workflow is discussed at
every summit (e.g. 'coordinated jobs' at
https://etherpad.openstack.org/p/kilo-summit-sahara-edp). But so far it has
not been a priority.

But in a real case it should be more complex: they usually use multiple
jobs to calculate the data and may use several different types of clusters to
process it.
It means that the workflow manager should be on the Sahara side. Looks like a
complicated feature, but we would be happy to help with designing and
implementing it. Please file a proposal for a design session at the upcoming
summit. Are you going to Vancouver?

Another concern is about Spark: Spark cannot use Oozie to do this, so we need
to create an abstraction layer to help implement this kind of scenario.
If the workflow is on the Sahara side, it should work automatically for all
engines.
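The engine-agnostic point can be made concrete: a Sahara-side workflow only needs a callable that runs one job on one cluster, so the same pipeline driver would serve Oozie and Spark alike. A hypothetical sketch of the Raw Data -> Job A -> Job B -> Job C example from the quoted message (all names are illustrative):

```python
def run_pipeline(steps, run_job):
    """Run (job, cluster) steps in order, feeding each output to the next."""
    data = "raw-data"
    for job, cluster in steps:
        data = run_job(job, cluster, data)  # output becomes the next input
    return data

def fake_run_job(job, cluster, data):
    # Stand-in for an EDP engine call; records lineage in the string.
    return "%s->%s@%s" % (data, job, cluster)

result = run_pipeline(
    [("jobA", "clusterA"), ("jobB", "clusterB"), ("jobC", "clusterA")],
    fake_run_job,
)
```

In a real design, `run_job` would dispatch to the per-engine EDP implementation and block on (or poll) job status before moving on.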

Thanks,
Andrew.



On Sun, Mar 8, 2015 at 3:17 AM, Chen, Weiting weiting.c...@intel.com
wrote:

  Hi all.



 We got several pieces of feedback about Sahara EDP’s future from some
 customers in China.

 Here are some ideas we would like to share with you; we need your input on
 whether we can implement them in Sahara (Liberty).



 1. Add a schedule feature to run the jobs on time:

 This request comes from customers: they usually run a job at a specific
 time every day. So it would be great if there were a scheduler to help
 arrange regular jobs to run.



 2. A more complex workflow design in Sahara EDP:

 Current EDP only provides one job running on one cluster.

 But in a real case it should be more complex: they usually use multiple
 jobs to calculate the data and may use several different types of clusters
 to process it.

 For example: Raw Data -> Job A (Cluster A) -> Job B (Cluster B) -> Job
 C (Cluster A) -> Result

 Actually, in my opinion, this kind of job could be easy to implement by
 using Oozie as a workflow engine. But current EDP doesn’t implement
 this kind of complex case.

 Another concern is about Spark: Spark cannot use Oozie to do this, so we
 need to create an abstraction layer to help implement this kind of
 scenario.



 However, any suggestion is welcome.

 Thanks.





[openstack-dev] [sahara] team meeting Feb 12 1400 UTC

2015-02-11 Thread Andrew Lazarev
Hi guys,

We'll be having the Sahara team meeting tomorrow at #openstack-meeting-3
channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20150212T14

Thanks,
Andrew


Re: [openstack-dev] [nova] [devstack] configuring https for glance client

2015-02-10 Thread Andrew Lazarev
This doesn't look flexible to me. Glance and keystone could use different
settings for SSL. I like the current way of using a session and a config
section for each separate client (like [1]).

[1] https://review.openstack.org/#/c/131098/

Thanks,
Andrew.

On Mon, Feb 9, 2015 at 6:19 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 2/9/2015 5:40 PM, Andrew Lazarev wrote:

 Hi Nova experts,

  Some time ago I figured out that devstack fails to stack with the
  USE_SSL=True option because it doesn't configure nova to work with
  secured glance [1]. Support for secured glance was added to nova in the
  Juno cycle [2], but it looks strange to me.

  The glance client takes settings from the '[ssl]' section. The same
  section is used to set up nova server SSL settings. Other clients have
  separate sections in the config file (and are switching to session use
  now), e.g. the related code for cinder - [3].

  I've created a quick fix for devstack - [4], but it would be nice to
  shed light on nova's plans around glance config before merging a
  workaround for devstack.

 So, the questions are:
 1. Is it normal that glance client reads from '[ssl]' config section?
 2. Is there a plan to move glance client to sessions use and move
 corresponding config section to '[glance]'?
  3. Are there any plans to run CI for the USE_SSL=True use case?

 [1] - https://bugs.launchpad.net/devstack/+bug/1405484
 [2] - https://review.openstack.org/#/c/72974
 [3] -
 https://github.com/openstack/nova/blob/2015.1.0b2/nova/
 volume/cinder.py#L73
 [4] - https://review.openstack.org/#/c/153737

 Thanks,
 Andrew.




 This came up in another -dev thread at one point, which prompted a series
 from Matthew Gilliard [1] to use [ssl] globally or project-specific options,
 since both glance and keystone currently get their ssl options from the
 global [ssl] group in nova.

 I've been a bad citizen and haven't gotten back to the series review yet.

 [1] https://review.openstack.org/#/q/status:open+project:
 openstack/nova+branch:master+topic:ssl-config-options,n,z

 --

 Thanks,

 Matt Riedemann




[openstack-dev] [nova] [devstack] configuring https for glance client

2015-02-09 Thread Andrew Lazarev
Hi Nova experts,

Some time ago I figured out that devstack fails to stack with the USE_SSL=True
option because it doesn't configure nova to work with secured glance [1].
Support for secured glance was added to nova in the Juno cycle [2], but it
looks strange to me.

The glance client takes settings from the '[ssl]' section. The same section is
used to set up nova server SSL settings. Other clients have separate sections
in the config file (and are switching to session use now), e.g. the related
code for cinder - [3].

I've created a quick fix for devstack - [4], but it would be nice to shed
light on nova's plans around glance config before merging a workaround for
devstack.

So, the questions are:
1. Is it normal that glance client reads from '[ssl]' config section?
2. Is there a plan to move glance client to sessions use and move
corresponding config section to '[glance]'?
3. Are there any plans to run CI for the USE_SSL=True use case?

[1] - https://bugs.launchpad.net/devstack/+bug/1405484
[2] - https://review.openstack.org/#/c/72974
[3] -
https://github.com/openstack/nova/blob/2015.1.0b2/nova/volume/cinder.py#L73
[4] - https://review.openstack.org/#/c/153737
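For context, a sketch of the two configuration styles being compared. The section and option names below are schematic illustrations of the pattern (global [ssl] vs. a per-client section, as cinder does), not verbatim nova options:

```ini
# Current behavior: the glance client reads from nova's global [ssl]
# section, which is shared with the nova server's own SSL settings.
[ssl]
ca_file = /etc/ssl/ca.pem

# Per-client style: each service gets its own section, so glance could
# use different SSL settings than keystone or the nova server itself.
[glance]
api_servers = https://glance.example.com:9292
ca_file = /etc/ssl/glance-ca.pem
```

The questions below ask whether nova plans to move the glance client toward the second style.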

Thanks,
Andrew.



Re: [openstack-dev] [Sahara] [Heat] validating properties of Sahara resources in Heat

2015-01-07 Thread Andrew Lazarev
Answers inlined and marked as [AL].

On Mon, Jan 5, 2015 at 5:17 AM, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

 Hi all,

 I would like to ask the Sahara developers' opinion on two bugs raised
 against Heat's resources - [1] and [2].
 Below I am going to repeat some of my comments from those bugs and the
 associated Gerrit reviews [3] to have the conversation condensed here in
 the ML.

 In Heat's Sahara-specific resources we have such properties as
 floating_ip_pool for OS::Sahara::NodeGroupTemplate [4]
 and neutron_management_network for both OS::Sahara::ClusterTemplate [5]
 and OS::Sahara::Cluster [6].
 My questions are about when and under which conditions those properties
 are required to successfully start a Sahara Cluster.

 floating_ip_pool:

 It was pointed out to me that Sahara could be configured to use netns/proxy
 to access the cluster VMs instead of floating IPs.

 My questions are:
 - Can that particular configuration setting (netns/proxy) be assessed via
 saharaclient?


[AL] No, these settings are configured in sahara.conf and can hardly be
checked outside of sahara.


 - What would be the result of providing floating_ip_pool when Sahara is
 indeed configured with netns/proxy?

  Is it going to function normally, having just wasted several floating IPs
 from quota?


[AL] It will assign the floating IP as requested. A floating IP can be used
not only for management by Sahara, but for other purposes too; the user could
request a floating IP assignment.


 - And more crucially, what would happen if Sahara is _not_ configured to use
 netns/proxy and not provided with a floating_ip_pool?
   Could that lead to the cluster being created (at least its VMs spawned)
 but Sahara not being able to access them for configuration?
   Would Sahara in that case kill the cluster/shut down the VMs, or hang in
 some cluster-failed state?


[AL] Sahara will return a validation error on the attempt to create the
cluster. No resources will be created.

neutron_management_network:
 I understand the point that it is redundant to use it in both resources
 (although we are stuck with a deprecation period, as those are already part
 of the Juno release).


[AL] neutron_management_network must be specified somewhere in the neutron
case. It can be in either the template OR the cluster; there is no need to
specify it in both places.



 Still, my questions are:
 - would this property passed during creation of Cluster override the one
 passed during creation of Cluster Template?


[AL] Yes, Sahara looks at the template only when no value is provided in the
cluster request.


 - what would happen if I set this property (pass it via saharaclient) when
 Nova-network is in use?


[AL] Validation error will be returned


 - what if I _do not_ pass this property and Neutron has several networks
 available?


[AL] A validation error will be returned even if only one neutron network is
available. Sahara currently doesn't support automatic network selection
(it could be a nice feature).


 The reason I'm asking is that in Heat we try to follow a fail-fast approach,
 especially for billable resources, to avoid the situation where a
 (potentially huge) stack is being created and breaks on the last or
 second-to-last resource, leaving the user with many resources spawned (even
 if only for a short time, when stack rollback is enabled) which might cost a
 hefty sum of money for nothing. That is why we are trying to validate the
 template as thoroughly as we can before starting to create any actual
 resources in the cloud.

 Thus I'm interested in finding the best possible (or least-bad) cover-it-all
 strategy for validating properties being set for these resources.
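The fail-fast idea, combined with the [AL] answers above, can be sketched as a property check like the one below. The function and argument names are hypothetical illustrations, not Heat's actual validate() code; in particular, whether Sahara manages VMs over floating IPs is a sahara.conf deployment setting that (per the thread) cannot be introspected through saharaclient, so it appears here as an explicit flag.

```python
def validate_cluster_props(props, use_neutron, sahara_uses_floating_ips):
    errors = []
    # floating_ip_pool: required unless Sahara reaches VMs via netns/proxy.
    if sahara_uses_floating_ips and not props.get("floating_ip_pool"):
        errors.append("floating_ip_pool is required for this deployment")
    # neutron_management_network: must come from the cluster or its template
    # when neutron is in use, and must not be set with nova-network.
    has_net = (props.get("neutron_management_network")
               or props.get("template_management_network"))
    if use_neutron and not has_net:
        errors.append("neutron_management_network must be set on the "
                      "cluster or its template")
    if not use_neutron and props.get("neutron_management_network"):
        errors.append("neutron_management_network is invalid with "
                      "nova-network")
    return errors
```

Running such checks at template-validation time is what lets Heat reject the stack before any billable resource is spawned.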

 [1] https://bugs.launchpad.net/heat/+bug/1399469
 [2] https://bugs.launchpad.net/heat/+bug/1402844
 [3] https://review.openstack.org/#/c/141310
 [4]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L136
 [5]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_templates.py#L274
 [6]
 https://github.com/openstack/heat/blob/master/heat/engine/resources/sahara_cluster.py#L79

 Best regards,

 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Nov 20 1800 UTC

2014-11-20 Thread Andrew Lazarev
Minutes: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.html
Logs: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-11-20-18.01.log.html

Thanks,
Andrew.


Re: [openstack-dev] [sahara] Nominate Michael McCune to sahara-core

2014-11-11 Thread Andrew Lazarev
+2

Andrew.

On Tue, Nov 11, 2014 at 9:37 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

  I'd like to propose Michael McCune for sahara-core. He has a good
 knowledge of the codebase and has implemented important features such as
 Swift auth using trusts. Mike has been consistently giving us very well
 thought out and constructive reviews for the Sahara project.

 Sahara core team members, please, vote +/- 2.

 Thanks.


 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



Re: [openstack-dev] [sahara] Nominate Sergey Reshetniak to sahara-core

2014-11-11 Thread Andrew Lazarev
+2

Andrew.

On Tue, Nov 11, 2014 at 9:35 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

  I'd like to propose Sergey for sahara-core. He has done a lot of work on
 different parts of Sahara and has a very good knowledge of the codebase,
 especially in the plugins area. Sergey has been consistently giving us very
 well thought out and constructive reviews for the Sahara project.

 Sahara core team members, please, vote +/- 2.

 Thanks.


 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



Re: [openstack-dev] [all][oslo] projects still using obsolete oslo modules

2014-10-13 Thread Andrew Lazarev
Filed https://bugs.launchpad.net/sahara/+bug/1380725 for sahara stuff.

Andrew.

On Mon, Oct 13, 2014 at 6:20 AM, Doug Hellmann d...@doughellmann.com
wrote:

 I’ve put together a little script to generate a report of the projects
 using modules that used to be in the oslo-incubator but that have moved to
 libraries [1]. These modules have been deleted, and now only exist in the
 stable/juno branch of the incubator. We do not anticipate back-porting
 fixes except for serious security concerns, so it is important to update
 all projects to use the libraries where the modules now live.

 Liaisons, please look through the list below and file bugs against your
 project for any changes needed to move to the new libraries and start
 working on the updates. We need to prioritize this work for early in Kilo
 to ensure that your projects do not fall further out of step. K-1 is the
 ideal target, with K-2 as an absolute latest date. I anticipate having
 several more libraries by the time the K-2 milestone arrives.

 Most of the porting work involves adding dependencies and updating import
 statements, but check the documentation for each library for any special
 guidance. Also, because the incubator is updated to use our released
 libraries, you may end up having to port to several libraries *and* sync a
 copy of any remaining incubator dependencies that have not graduated all in
 a single patch in order to have a working copy. I suggest giving your
 review teams a heads-up about what to expect to avoid -2 for the scope of
 the patch.
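The mechanical part of this porting work is rewriting incubator imports to the graduated libraries. As a sketch, a small helper that maps a few of the modules from the report to their new homes; the mapping is illustrative and not exhaustive, and the exact target modules should be checked against each library's docs:

```python
import re

# old incubator module name -> graduated library module (illustrative subset)
GRADUATED = {
    "jsonutils": "oslo_serialization.jsonutils",
    "timeutils": "oslo_utils.timeutils",
    "importutils": "oslo_utils.importutils",
    "strutils": "oslo_utils.strutils",
    "excutils": "oslo_utils.excutils",
    "gettextutils": "oslo_i18n",
}

def migrate_import(line):
    """Rewrite 'from <proj>.openstack.common import X' to its new home."""
    m = re.match(r"from [\w.]*openstack\.common import (\w+)$", line.strip())
    if m and m.group(1) in GRADUATED:
        new = GRADUATED[m.group(1)]
        pkg, _, mod = new.rpartition(".")
        return "from %s import %s" % (pkg, mod) if pkg else "import %s" % new
    return line  # not an incubator import we know how to move
```

Real patches also need the new libraries added to requirements.txt and, as noted above, a sync of any remaining incubator modules in the same change.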

 Doug


 [1] https://review.openstack.org/#/c/127039/


 openstack-dev/heat-cfnclient: exception
 openstack-dev/heat-cfnclient: gettextutils
 openstack-dev/heat-cfnclient: importutils
 openstack-dev/heat-cfnclient: jsonutils
 openstack-dev/heat-cfnclient: timeutils

 openstack/ceilometer: gettextutils
 openstack/ceilometer: log_handler

 openstack/python-troveclient: strutils

 openstack/melange: exception
 openstack/melange: extensions
 openstack/melange: utils
 openstack/melange: wsgi
 openstack/melange: setup

 openstack/tuskar: config.generator
 openstack/tuskar: db
 openstack/tuskar: db.sqlalchemy
 openstack/tuskar: excutils
 openstack/tuskar: gettextutils
 openstack/tuskar: importutils
 openstack/tuskar: jsonutils
 openstack/tuskar: strutils
 openstack/tuskar: timeutils

 openstack/sahara-dashboard: importutils

 openstack/barbican: gettextutils
 openstack/barbican: jsonutils
 openstack/barbican: timeutils
 openstack/barbican: importutils

 openstack/kite: db
 openstack/kite: db.sqlalchemy
 openstack/kite: jsonutils
 openstack/kite: timeutils

 openstack/python-ironicclient: gettextutils
 openstack/python-ironicclient: importutils
 openstack/python-ironicclient: strutils

 openstack/python-melangeclient: setup

 openstack/neutron: excutils
 openstack/neutron: gettextutils
 openstack/neutron: importutils
 openstack/neutron: jsonutils
 openstack/neutron: middleware.base
 openstack/neutron: middleware.catch_errors
 openstack/neutron: middleware.correlation_id
 openstack/neutron: middleware.debug
 openstack/neutron: middleware.request_id
 openstack/neutron: middleware.sizelimit
 openstack/neutron: network_utils
 openstack/neutron: strutils
 openstack/neutron: timeutils

 openstack/tempest: importlib

 openstack/manila: excutils
 openstack/manila: gettextutils
 openstack/manila: importutils
 openstack/manila: jsonutils
 openstack/manila: network_utils
 openstack/manila: strutils
 openstack/manila: timeutils

 openstack/keystone: gettextutils

 openstack/python-glanceclient: importutils
 openstack/python-glanceclient: network_utils
 openstack/python-glanceclient: strutils

 openstack/python-keystoneclient: jsonutils
 openstack/python-keystoneclient: strutils
 openstack/python-keystoneclient: timeutils

 openstack/zaqar: config.generator
 openstack/zaqar: excutils
 openstack/zaqar: gettextutils
 openstack/zaqar: importutils
 openstack/zaqar: jsonutils
 openstack/zaqar: setup
 openstack/zaqar: strutils
 openstack/zaqar: timeutils
 openstack/zaqar: version

 openstack/python-novaclient: gettextutils

 openstack/ironic: config.generator
 openstack/ironic: gettextutils

 openstack/cinder: config.generator
 openstack/cinder: excutils
 openstack/cinder: gettextutils
 openstack/cinder: importutils
 openstack/cinder: jsonutils
 openstack/cinder: log_handler
 openstack/cinder: network_utils
 openstack/cinder: strutils
 openstack/cinder: timeutils
 openstack/cinder: units

 openstack/python-manilaclient: gettextutils
 openstack/python-manilaclient: importutils
 openstack/python-manilaclient: jsonutils
 openstack/python-manilaclient: strutils
 openstack/python-manilaclient: timeutils

 openstack/trove: exception
 openstack/trove: excutils
 openstack/trove: gettextutils
 openstack/trove: importutils
 openstack/trove: iniparser
 openstack/trove: jsonutils
 openstack/trove: network_utils
 openstack/trove: notifier
 openstack/trove: pastedeploy
 openstack/trove: rpc
 openstack/trove: strutils
 

Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-07 Thread Andrew Lazarev
 I can't say I'm too deeply versed in the code,  but it's enough to make
me wonder if we want to go that direction and avoid the issues altogether?

It's in the nature of Python that methods and modules can be added at
runtime, so pylint can't do a full analysis. That's why its best use is a
limited list of checks applied to the last commit only. This is how nova
and sahara have implemented it. A non-voting gate job helps to find silly
mistakes that could hardly be found any other way.
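The "last commit only" approach boils down to comparing two pylint reports, one from the baseline tree and one with the patch applied, and flagging only the new messages; comparing message sets rather than raw counts keeps an old error from masking a new one. The sketch below is illustrative, not the actual nova/sahara tooling, and the message format is hypothetical:

```python
def pylint_regressions(baseline_report, patched_report):
    """Pylint messages present with the patch but absent in the baseline.

    Each report is an iterable of message lines, e.g.
    'E1101:ops.py:42: Instance of Cluster has no "foo" member'.
    """
    return sorted(set(patched_report) - set(baseline_report))

baseline = ['E1101:ops.py:42: Instance of Cluster has no "foo" member']
patched = baseline + ['E0602:ops.py:57: Undefined variable "ctx"']

new_errors = pylint_regressions(baseline, patched)
if new_errors:
    # a non-voting job would just report these, not block the patch
    print("New pylint errors introduced by the patch:")
    for msg in new_errors:
        print("  " + msg)
```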

Thanks,
Andrew.

On Fri, Oct 3, 2014 at 10:09 AM, Neal, Phil phil.n...@hp.com wrote:

  From: Dina Belova [mailto:dbel...@mirantis.com]
  On Friday, October 03, 2014 2:53 AM
 
  Igor,
 
  Personally this idea looks really nice to me, as this will help to avoid
  strange code being merged without being caught in the review process.
 
  Cheers,
  Dina
 
  On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov
  idegtia...@mirantis.com wrote:
  Hi folks!
 
  I try to guess: do we need to check new ceilometer patches for
  critical errors with pylint?
 
  As far as I know Nova and Sahara and others have such a check. Actually
  it does not check the whole project; it compares the number of errors
  with and without the new patch, and if the diff is more than 0 the
  patch is not taken.

 Looking a bit deeper it seems that Nova struggled with false positives and
 resorted to https://review.openstack.org/#/c/28754/ , which layers some
 historical checking of git on top of pylint's tendency to check only the
 latest commit. I can't say I'm too deeply versed in the code,  but it's
 enough to make me wonder if we want to go that direction and avoid the
 issues altogether?

 
  I have taken Sahara's solution as a pattern and proposed a patch for
  ceilometer:
  https://review.openstack.org/#/c/125906/
 
  Cheers,
  Igor Degtiarov
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Best regards,
  Dina Belova
  Software Engineer
  Mirantis Inc.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Adding pylint checking of new ceilometer patches

2014-10-07 Thread Andrew Lazarev
At first step we won't implement pylint as gate job, but will add it at
master to have a possibility to check  code with pylint locally, if it is
needed.
There is not much sense in running it locally. It has too many false
positives. The best use is to see new critical errors via a non-voting job.

Andrew.

On Mon, Oct 6, 2014 at 2:38 AM, Igor Degtiarov idegtia...@mirantis.com
wrote:

 My points are next:

 1. Pylint check will be very useful for project, and will help to
 avoid critical errors or mistakes in code.

 2. At first step we won't implement pylint as gate job, but will add
 it at master to have a possibility to check  code with pylint locally,
 if it is needed.

 3. In future it could be added as a non-voting job.
 -- Igor


 On Sat, Oct 4, 2014 at 1:56 AM, Angus Lees g...@inodes.org wrote:
  You can turn off lots of the refactor recommendation checks.  I've been
  running pylint across neutron and it's uncovered half a dozen legitimate
  bugs so far - and that's with many tests still disabled.
 
  I agree that the defaults are too noisy, but its about the only tool that
  does linting across files - pep8 for example only looks at the current
 file
  (and not even the parse tree).
 
  On 4 Oct 2014 03:22, Doug Hellmann d...@doughellmann.com wrote:
 
 
  On Oct 3, 2014, at 1:09 PM, Neal, Phil phil.n...@hp.com wrote:
 
   From: Dina Belova [mailto:dbel...@mirantis.com]
   On Friday, October 03, 2014 2:53 AM
  
   Igor,
  
   Personally this idea looks really nice to me, as this will help to
   avoid
   strange code being merged and not found via reviewing process.
  
   Cheers,
   Dina
  
   On Fri, Oct 3, 2014 at 12:40 PM, Igor Degtiarov
   idegtia...@mirantis.com wrote:
   Hi folks!
  
   I try to guess: do we need to check new ceilometer patches for
   critical errors with pylint?
  
   As far as I know Nova and Sahara and others have such a check. Actually
   it does not check the whole project; it compares the number of errors
   with and without the new patch, and if the diff is more than 0 the
   patch is not taken.
  
   Looking a bit deeper it seems that Nova struggled with false positives
   and resorted to https://review.openstack.org/#/c/28754/ , which layers
   some historical checking of git on top of pylint's tendency to check
   only the latest commit. I can't say I'm too deeply versed in the code,
   but it's enough to make me wonder if we want to go that direction and
   avoid the issues altogether?
 
  I haven’t looked at it in a while, but I’ve never been particularly
  excited by pylint. It’s extremely picky, encourages enforcing some
  questionable rules (arbitrary limits on variable name length?), and
  reports a lot of false positives. That combination tends to result in
  making writing software annoying without helping with quality in any
  real way.
 
  Doug
 
  
  
   I have taken Sahara's solution as a pattern and proposed a patch for
   ceilometer:
   https://review.openstack.org/#/c/125906/
  
   Cheers,
   Igor Degtiarov
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
   --
   Best regards,
   Dina Belova
   Software Engineer
   Mirantis Inc.
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Sept 25 1800 UTC

2014-09-25 Thread Andrew Lazarev
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.log.html

Andrew.

On Wed, Sep 24, 2014 at 2:50 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140925T18

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara][Doc] Filing a bug for modifying the image

2014-09-24 Thread Andrew Lazarev
Hi Sharan,

Yes, file bug, commit new image to review.

Thanks,
Andrew.

On Wed, Sep 24, 2014 at 10:16 AM, Sharan Kumar M 
sharan.monikan...@gmail.com wrote:

 Hi all,

 In this documentation
 http://docs.openstack.org/developer/sahara/overview.html#details, the
 missing services were added. I notice that the image is not in sync with
 the description, and I think it needs to be fixed. So should I file a new
 bug in launchpad before proceeding?

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Juno release images

2014-09-23 Thread Andrew Lazarev
No fedora?

Andrew.

On Tue, Sep 23, 2014 at 1:16 PM, Sergey Reshetnyak sreshetn...@mirantis.com
 wrote:

 Hi sahara folks,

 I've prepared all new images for Sahara (using diskimage-builder) for
 Juno release.

 Vanilla plugin:
 http://sahara-files.mirantis.com/sahara-juno-vanilla-1.2.1-centos-6.5.qcow2

 http://sahara-files.mirantis.com/sahara-juno-vanilla-1.2.1-ubuntu-14.04.qcow2
 http://sahara-files.mirantis.com/sahara-juno-vanilla-2.4.1-centos-6.5.qcow2

 http://sahara-files.mirantis.com/sahara-juno-vanilla-2.4.1-ubuntu-14.04.qcow2

 HDP plugin:
 http://sahara-files.mirantis.com/sahara-juno-hdp-1.3.2-centos-6.5.qcow2
 http://sahara-files.mirantis.com/sahara-juno-hdp-2.0.6-centos-6.5.qcow2
 http://sahara-files.mirantis.com/sahara-juno-hdp-plain-centos-6.5.qcow2

 Cloudera plugin:

 http://sahara-files.mirantis.com/sahara-juno-cloudera-5.1.2-centos-6.5.qcow2

 http://sahara-files.mirantis.com/sahara-juno-cloudera-5.1.2-ubuntu-12.04.qcow2

 Spark plugin:
 http://sahara-files.mirantis.com/sahara-juno-spark-1.0.0-ubuntu-14.04.qcow2


 Thanks
 Sergey Reshetnyak

 http://sahara-files.mirantis.com/sahara-juno-vanilla-2.4.1-ubuntu-14.04.qcow2

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Networking Service for Sahara

2014-09-17 Thread Andrew Lazarev
Hi Sharan,

Sahara works with whichever network service is installed in OpenStack. If
OpenStack uses neutron, sahara will use neutron too. If nova network is
used, Sahara supports that as well.

Thanks,
Andrew.

On Wed, Sep 17, 2014 at 1:38 PM, Sharan Kumar M sharan.monikan...@gmail.com
 wrote:

 Hi all,

 What is the default networking service for Sahara? Is it Nova Network or
 Neutron? I referred this page
 http://docs.openstack.org/developer/sahara/userdoc/features.html#neutron-and-nova-network-support
 and it says Nova Network. Is that right?

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Working on DOC bug

2014-09-12 Thread Andrew Lazarev
Hi Sharan,

Sahara uses python clients to communicate with the other components. So,
you can start by inspecting the requirements.txt file for python-*
dependencies. Next, you need to understand what is really used and how.
Also, some components can be used indirectly via the nova client (e.g.
security groups management).

For this particular bug I see that the Heat component is not described.
Sahara can provision a cluster using Heat if it is configured to use the
heat infrastructure engine. Sahara also uses neutron for floating IPs and
security groups management if the OpenStack cluster uses neutron. Probably
adding descriptions of these two services is enough to close the bug.
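The requirements.txt inspection suggested above can be sketched as below; the sample text stands in for Sahara's real requirements file, so the package pins are illustrative:

```python
import re

# Sample requirements text standing in for Sahara's real
# requirements.txt; versions are illustrative.
SAMPLE = """\
python-novaclient>=2.17.0
python-keystoneclient>=0.9.0
python-cinderclient>=1.0.6
six>=1.7.0
"""

def client_deps(requirements_text):
    """Return the python-* client package names from a requirements file."""
    return [re.split(r'[<>=!]', line)[0]
            for line in requirements_text.splitlines()
            if line.startswith('python-')]

print(client_deps(SAMPLE))
# -> ['python-novaclient', 'python-keystoneclient', 'python-cinderclient']
```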

Thanks,
Andrew.

On Fri, Sep 12, 2014 at 12:13 PM, Sharan Kumar M 
sharan.monikan...@gmail.com wrote:

 Hi,

 After digging a little into OpenStack basics, setting up devstack and
 starting with sahara, I browsed the list of bugs on Launchpad. I saw a low
 hanging fruit bug, which is a wishlist as well.
 https://bugs.launchpad.net/sahara/+bug/1350063

 But before I start, I would like to get some advice on how to get started
 with it, any resources for this bug etc. Just to let you all know, I am new
 to OpenStack and this would probably be the first patch I submit.

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] integration tests in python-saharaclient

2014-09-04 Thread Andrew Lazarev
Trevor,

by the way, what typo?
https://review.openstack.org/#/c/118903/

Andrew.


On Thu, Sep 4, 2014 at 7:58 AM, Trevor McKay tmc...@redhat.com wrote:

 by the way, what typo?

 Trev

 On Wed, 2014-09-03 at 14:58 -0700, Andrew Lazarev wrote:
  Hi team,
 
 
  Today I've realized that we have some tests called 'integration'
  in python-saharaclient. Also I've found out that Jenkins doesn't use
  them, and they can't be run starting from April because of a typo in
  tox.ini.
 
 
  Does anyone know what these tests are? Does anyone mind if I delete
  them since we don't use them anyway?
 
 
  Thanks,
  Andrew.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] integration tests in python-saharaclient

2014-09-03 Thread Andrew Lazarev
Hi team,

Today I've realized that we have some tests called 'integration'
in python-saharaclient. I've also found out that Jenkins doesn't use them,
and they can't be run starting from April because of a typo in tox.ini.

Does anyone know what these tests are? Does anyone mind if I delete them
since we don't use them anyway?

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Upgrade of Hadoop components inside released version

2014-06-24 Thread Andrew Lazarev
Hi Team,

I want to raise a topic about upgrading components in a Hadoop version
that is already supported by a released Sahara plugin. The question is
raised because of several change requests [1] and [2]. The topic was
discussed in Atlanta ([3]), but we didn't come to a decision.

All of us agreed that existing clusters must continue to work after an
OpenStack upgrade. So if a user creates a cluster with Icehouse Sahara and
then upgrades OpenStack, everything should continue working as before. The
trickiest operation is scaling, and it dictates a list of restrictions on
a new version of a component:

1. the plugin-version pair supported by the plugin must not change
2. if the component upgrade requires DIB to be involved, the plugin must
work with both versions of the image - the old one and the new one
3. a cluster with mixed nodes (created by the old code and by the new one)
must remain operational

Given that, we should choose a policy for component upgrades. Here are
several options:

1. Prohibit component upgrades in released versions of a plugin. Change
the plugin version even if the hadoop version didn't change. This solves
all the listed problems but is a little bit frustrating for users. They
will need to recreate all the clusters they have and migrate data just as
for a hadoop upgrade. They should also consider a Hadoop upgrade at the
same time, to do the migration only once.

2. Disable some operations on clusters created by the previous version. If
users don't have the option to scale a cluster, there will be no problems
with mixed nodes. For this option Sahara needs to know whether the cluster
was created by this version or not.

3. Require the change author to perform all kinds of tests and prove that
a mixed cluster works as well as a non-mixed one. In that case we need a
list of tests that is enough to cover all corner cases.

Ideas are welcome.

[1] https://review.openstack.org/#/c/98260/
[2] https://review.openstack.org/#/c/87723/
[3] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-03 Thread Andrew Lazarev
correction: https://review.openstack.org/#/c/96621/ (Added validate_edp
method to Plugin SPI doc) makes no sense without
https://review.openstack.org/#/c/87573 (Fix running EDP job on transient
cluster), where validate_edp was introduced.

Andrew.


On Mon, Jun 2, 2014 at 11:34 PM, Andrew Lazarev alaza...@mirantis.com
wrote:

 https://review.openstack.org/#/c/93564/ makes no sense without
 https://review.openstack.org/#/c/87573

 +1 on merging DOC bugs you listed and these 2 EDP bugs

 Andrew.


 On Mon, Jun 2, 2014 at 11:08 PM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 /me proposing to backport:

 Docs:

 https://review.openstack.org/#/c/87531/ Change IRC channel name to
 #openstack-sahara
 https://review.openstack.org/#/c/96621/ Added validate_edp method to
 Plugin SPI doc
 https://review.openstack.org/#/c/89647/ Updated architecture diagram in
 docs

 EDP:

 https://review.openstack.org/#/c/93564/

 On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov slukja...@mirantis.com
 wrote:
  Hey folks,
 
  this Thu, June 5 is the date for 2014.1.1 release. We already have
  some back ported patches to the stable/icehouse branch, so, the
  question is do we need some more patches to back port? Please, propose
  them here.
 
  2014.1 - stable/icehouse diff:
  https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
 
  Thanks.
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Principal Software Engineer
  Mirantis Inc.



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: backward compat

2014-05-28 Thread Andrew Lazarev
 for juno we should just have a v1 api (there can still be a v1.1 endpoint,
 but it should be deprecated), and maybe a v2 api

 +1 any semantic changes require new major version number

 +1 api should only have a major number (no 1.1 or 2.1)


In this case we will end up with a new major number each release, even if
no major changes were made.

we should only be producing images for the currently supported plugin
 versions. images for deprecated versions can be found with the releases
 where the version wasn't deprecated.


agree. We just need to store all images for previous releases somewhere.

Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] bug triage day after summit

2014-05-20 Thread Andrew Lazarev
I think May 26 was a random 'day after summit'. I'm Ok with May 27 too.

Andrew.


On Tue, May 20, 2014 at 10:16 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 I'm ok with moving it to May 27.

 On Tuesday, May 20, 2014, Michael McCune mimcc...@redhat.com wrote:

 I think in our eagerness to triage bugs we might have missed that May 26
 is a holiday in the U.S.

 I know some of us have the day off work and while that doesn't
 necessarily stop the effort, it might throw a wrench in people's holiday
 weekend plans. I'm wondering if we should re-evaluate and make the
 following day(May 27) triage day instead?

 regards,
 mike

 - Original Message -
  Hey sahara folks,
 
  let's make a Bug Triage Day after the summit.
 
  I'm proposing the May, 26 for it.
 
  Any thoughts/objections?
 
  Thanks.
 
  --
  Sincerely yours,
  Sergey Lukjanov
  Sahara Technical Lead
  (OpenStack Data Processing)
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core

2014-05-13 Thread Andrew Lazarev
+1

It will be good addition to the core team.

Thanks,
Andrew.




On Mon, May 12, 2014 at 2:31 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hey folks,

 I'd like to nominate Trevor McKay (tmckay) for sahara-core.

 He is among the top reviewers of Sahara subprojects. Trevor is working
 on Sahara full time since summer 2013 and is very familiar with
 current codebase. His code contributions and reviews have demonstrated
 a good knowledge of Sahara internals. Trevor has a valuable knowledge
 of EDP part and Hadoop itself. He's working on both bugs and new
 features implementation.

 Some links:

 http://stackalytics.com/report/contribution/sahara-group/30
 http://stackalytics.com/report/contribution/sahara-group/90
 http://stackalytics.com/report/contribution/sahara-group/180

 https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z
 https://launchpad.net/~tmckay

 Sahara cores, please, reply with +1/0/-1 votes.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dev-env] Error setting up dev environment on Mac OS X (10.9.2)

2014-03-31 Thread Andrew Lazarev
Hi Bob,

The error is caused by https://bugs.launchpad.net/sahara/+bug/1283133. A
fix is ready, but not merged to master because of FF
(https://review.openstack.org/#/c/75456/). As a workaround I can suggest
temporarily removing the DROP queries in migration script #3, or using
another SQL DB (e.g. MySQL).

Thanks,
Andrew


On Mon, Mar 31, 2014 at 9:18 AM, Robert Nettleton 
rnettle...@hortonworks.com wrote:

 Hi All,

 I'm trying to set up my dev environment on Mac OS X (10.9.2) with the
 latest Sahara code, using the following instructions:


 http://docs.openstack.org/developer/sahara/devref/development.environment.html

 When I run the Create database Schema step, I see the following error:

 
 sqlalchemy.exc.OperationalError: (OperationalError) near DROP: syntax
 error u'ALTER TABLE job_executions DROP COLUMN java_opts' ()
 

 Has anyone seen this problem?  If so, is there a workaround or setup step
 that I'm missing?

 In a separate thread, Sergey mentioned that gnu-getups was required on
 Mac OS X for the dev env.  I've installed this package, but it does not
 resolve my problem.

 thanks,
 Bob

 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity
 to which it is addressed and may contain information that is confidential,
 privileged and exempt from disclosure under applicable law. If the reader
 of this message is not the intended recipient, you are hereby notified that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you have
 received this communication in error, please contact the sender immediately
 and delete it from your system. Thank You.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] floating ip pool by name

2014-02-27 Thread Andrew Lazarev
Hi Team,

I was always using the floating_ip_pool: net04_ext construction and it
worked fine. Now it responds with the validation error Floating IP pool
net04_ext for node group 'manager' not found, because
https://bugs.launchpad.net/savanna/+bug/1282027 was merged and savanna now
expects only an ID here. Is this an intentional restriction? What is the
reasoning? Referencing by name is convenient.
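A validation that accepts either a name or an ID, as asked for above, could be sketched as follows. The helper and the dict shape ({'id', 'name'}) are illustrative, mirroring what a neutron list_networks() call returns for external networks; this is not savanna's actual code. It also shows the likely reasoning for the restriction: unlike IDs, names are not guaranteed to be unique.

```python
def resolve_floating_ip_pool(networks, name_or_id):
    """Resolve a floating_ip_pool given as either a name or an ID.

    `networks` is a list of dicts with 'id' and 'name' keys, like the
    external networks a neutron list_networks() call would return.
    """
    matches = [n['id'] for n in networks
               if name_or_id in (n['id'], n['name'])]
    if not matches:
        raise ValueError("Floating IP pool %s not found" % name_or_id)
    if len(matches) > 1:
        # names, unlike IDs, are not guaranteed to be unique
        raise ValueError("Floating IP pool %s is ambiguous" % name_or_id)
    return matches[0]

nets = [{'id': 'a1b2c3d4', 'name': 'net04_ext'}]
print(resolve_floating_ip_pool(nets, 'net04_ext'))  # -> a1b2c3d4
```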

Thanks,
Andrew.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] plugin version or hadoop version?

2014-02-17 Thread Andrew Lazarev
IDH uses the version of the IDH distro, and there is no direct mapping
between the distro version and the hadoop version. E.g. IDH 2.5.1 works
with apache hadoop 1.0.3.

I suggest calling the field just 'version' everywhere and treating this
version as a plugin-specific property.

Andrew.


On Mon, Feb 17, 2014 at 5:06 AM, Matthew Farrellee m...@redhat.com wrote:

 $ savanna plugins-list
 +-+--+---+
 | name| versions | title |
 +-+--+---+
 | vanilla | 1.2.1| Vanilla Apache Hadoop |
 | hdp | 1.3.2| Hortonworks Data Platform |
 +-+--+---+

 above is output from the /plugins endpoint - http://docs.openstack.org/
 developer/savanna/userdoc/rest_api_v1.0.html#plugins

 the question is, should the version be the version of the plugin or the
 version of hadoop the plugin installs?

 i ask because it seems like we have version == plugin version for hdp and
 version == hadoop version for vanilla.

 the documentation is somewhat vague on the subject, mostly stating
 version without qualification. however, the json passed to the service
 references hadoop_version and the arguments in the client are called
 hadoop_version

 fyi, this could be complicated by the idh and spark plugins.

 best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Mission Statement wording

2014-02-13 Thread Andrew Lazarev
The short version looks good to me.

Andrew.


On Thu, Feb 13, 2014 at 4:29 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hi folks,

 I'm working now on adding Savanna's mission statement to governance docs
 [0]. There are some comments on our current one to make it simpler and
 remove marketing like stuff.

 So, current option is:

 To provide a scalable data processing stack and associated management
 interfaces.

 (thanks for Doug for proposing it).

 So, please, share your objections (and suggestions too). Additionally I'd
 like to talk about it on todays IRC meeting.

 Thanks.

 [0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-04 Thread Andrew Lazarev
Since sqlite is not in the list of databases that would be used in
production, CI should use another DB for testing.
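For reference, the SQLite limitation behind the failing DROP COLUMN migration, and the recreate-and-copy workaround discussed in this thread, can be demonstrated with plain sqlite3. The table and column names match the failing migration; this is a sketch, not sahara's migration code:

```python
import sqlite3

# SQLite (at the time) had no ALTER TABLE ... DROP COLUMN, which is why
# the migration fails on it. The classic workaround is to recreate the
# table without the column and copy the surviving data over.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job_executions (id INTEGER PRIMARY KEY,
                                 java_opts TEXT, info TEXT);
    INSERT INTO job_executions VALUES (1, '-Xmx512m', 'ok');
""")
conn.executescript("""
    CREATE TABLE job_executions_tmp (id INTEGER PRIMARY KEY, info TEXT);
    INSERT INTO job_executions_tmp (id, info)
        SELECT id, info FROM job_executions;
    DROP TABLE job_executions;
    ALTER TABLE job_executions_tmp RENAME TO job_executions;
""")
cols = [row[1] for row in conn.execute("PRAGMA table_info(job_executions)")]
print(cols)  # -> ['id', 'info']
```

This is essentially what sqlalchemy-migrate did behind the scenes, and it is why schema reflection matters: the new table definition has to be rebuilt from somewhere.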

Andrew.


On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov aigna...@mirantis.comwrote:

 Indeed. We should create a bug around that and move our savanna-ci to
 mysql.

 Regards,
 Alexander Ignatov



 On 05 Feb 2014, at 01:01, Trevor McKay tmc...@redhat.com wrote:

  This brings up an interesting problem:
 
  In https://review.openstack.org/#/c/70420/ I've added a migration that
  uses a drop column for an upgrade.
 
  But savanna-ci is apparently using a sqlite database to run. So it can't
  possibly pass.
 
  What do we do here?  Shift savanna-ci tests to non sqlite?
 
  Trevor
 
  On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
  Hi all,
 
  My two cents.
 
  2) Extend alembic so that op.drop_column() does the right thing
  We could, but should we?
 
  The only reason alembic doesn't support these operations for SQLite
  yet is that SQLite lacks proper support of ALTER statement. For
  sqlalchemy-migrate we've been providing a work-around in the form of
  recreating of a table and copying of all existing rows (which is a
  hack, really).
 
  But to be able to recreate a table, we first must have its definition.
  And we've been relying on SQLAlchemy schema reflection facilities for
  that. Unfortunately, this approach has a few drawbacks:
 
  1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
  unique constraints, which means the recreated table won't have them;
 
  2) special care must be taken in 'edge' cases (e.g. when you want to
  drop a BOOLEAN column, you must also drop the corresponding CHECK (col
  in (0, 1)) constraint manually, or SQLite will raise an error when the
  table is recreated without the column being dropped)
 
  3) special care must be taken for 'custom' type columns (it's got
  better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
  definitions of reflected BIGINT columns manually for each
  column.drop() call)
 
  4) schema reflection can't be performed when alembic migrations are
  run in 'offline' mode (without connecting to a DB)
  ...
  (probably something else I've forgotten)
 
  So it's totally doable, but, IMO, there is no real benefit in
  supporting running of schema migrations for SQLite.
 
  ...attempts to drop schema generation based on models in favor of
 migrations
 
  As long as we have a test that checks that the DB schema obtained by
  running of migration scripts is equal to the one obtained by calling
  metadata.create_all(), it's perfectly OK to use model definitions to
  generate the initial DB schema for running of unit-tests as well as
  for new installations of OpenStack (and this is actually faster than
  running of migration scripts). ... and if we have strong objections
  against doing metadata.create_all(), we can always use migration
  scripts for both new installations and upgrades for all DB backends,
  except SQLite.
 
  Thanks,
  Roman
 
  On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
  Boris,
 
  Sorry for the offtopic.
  Is switching to model-based schema generation is something decided? I
 see
  the opposite: attempts to drop schema generation based on models in
 favor of
  migrations.
  Can you point to some discussion threads?
 
  Thanks,
  Eugene.
 
 
 
  On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic bpavlo...@mirantis.com
 
  wrote:
 
  Jay,
 
  Yep we shouldn't use migrations for sqlite at all.
 
  The major issue that we have now is that we are not able to ensure
 that the DB
  schema created by migrations & models is the same (actually they are not
 the same).
 
  So before dropping support of migrations for sqlite & switching to
 model-based schema creation, we should add tests that check that models &
  migrations are synced.
  (we are working on this)
 
 
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev 
 alaza...@mirantis.com
  wrote:
 
  Trevor,
 
  Such a check could be useful on the alembic side too. A good opportunity
  for a contribution.
 
  Andrew.
 
 
  On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay tmc...@redhat.com
 wrote:
 
  Okay,  I can accept that migrations shouldn't be supported on
 sqlite.
 
  However, if that's the case then we need to fix up
 savanna-db-manage so
  that it checks the db connection info and throws a polite error to
 the
  user for attempted migrations on unsupported platforms. For example:
 
  Database migrations are not supported for sqlite
 
  Because, as a developer, when I see a sql error trace as the result
 of
  an operation I assume it's broken :)
 
  Best,
 
  Trevor
 
  On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
  On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
  I was playing with alembic migration and discovered that
  op.drop_column() doesn't work with sqlite.  This is because sqlite
  doesn't support dropping a column (broken imho, but that's another

Re: [openstack-dev] [savanna] Specific job type for streaming mapreduce? (and someday pipes)

2014-02-03 Thread Andrew Lazarev
I see two points:
* having Savanna types mapped to Oozie action types is intuitive for hadoop
users and this is something we would like to keep
* it is hard to distinguish different kinds of one job type

Adding a 'subtype' field will solve both problems. Making it optional will
not break backward compatibility, and adding a database migration
script is also pretty straightforward.

Summarizing, my vote is on subtype field.

Thanks,
Andrew.


On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay tmc...@redhat.com wrote:


 I was trying my best to avoid adding extra job types to support
 mapreduce variants like streaming or mapreduce with pipes, but it seems
 that adding the types is the simplest solution.

 On the API side, Savanna can live without a specific job type by
 examining the data in the job record.  Presence/absence of certain
 things, or null values, etc., can provide adequate indicators of what
 kind of mapreduce it is.  Maybe a little bit subtle.

 But for the UI, it seems that explicit knowledge of what the job is
 makes things easier and better for the user.  When a user creates a
 streaming mapreduce job and the UI is aware of the type later on at job
 launch, the user can be prompted to provide the right configs (i.e., the
 streaming mapper and reducer values).

 The explicit job type also supports validation without having to add
 extra flags (which impacts the savanna client, and the JSON, etc). For
 example, a streaming mapreduce job does not require any specified
 libraries so the fact that it is meant to be a streaming job needs to be
 known at job creation time.

 So, to that end, I propose that we add a MapReduceStreaming job type,
 and probably at some point we will have MapReducePiped too. It's
 possible that we might have other job types in the future too as the
 feature set grows.

 There was an effort to make Savanna job types parallel Oozie action
 types, but in this case that's just not possible without introducing a
 subtype field in the job record, which leads to a database migration
 script and savanna client changes.

 What do you think?

 Best,

 Trevor





Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Andrew Lazarev
I like idea of edp. prefix.

Andrew.


On Wed, Jan 29, 2014 at 6:23 AM, Trevor McKay tmc...@redhat.com wrote:

 So, assuming we go forward with this, the followup question is whether
 or not to move main_class and java_opts for Java actions into
 edp.java.main_class and edp.java.java_opts configs.

 I think yes.

 Best,

 Trevor
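To make the edp. idea concrete: settings carrying the prefix would be stripped out and handled by Savanna itself, while everything else still flows into the generated oozie workflow. A sketch — the function name and sample keys are invented for illustration, not actual savanna code:

```python
def split_job_configs(configs):
    """Separate edp.* settings from the configs passed through to oozie."""
    edp_configs, oozie_configs = {}, {}
    for key, value in configs.items():
        if key.startswith("edp."):
            edp_configs[key] = value      # consumed by Savanna
        else:
            oozie_configs[key] = value    # rendered into the workflow
    return edp_configs, oozie_configs

configs = {
    "edp.java.main_class": "org.example.WordCount",
    "edp.java.java_opts": "-Xmx512m",
    "mapred.reduce.tasks": "2",
}
edp_configs, oozie_configs = split_job_configs(configs)
print(sorted(edp_configs))   # ['edp.java.java_opts', 'edp.java.main_class']
print(sorted(oozie_configs)) # ['mapred.reduce.tasks']
```

No database or REST API change is needed here, which is the main selling point of the proposal.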

 On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
  On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
   Thank you for bringing this up, Trevor.
  
   EDP gets more diverse and it's time to change its model.
   I totally agree with your proposal, but one minor comment.
    Instead of a savanna. prefix in job_configs, wouldn't it be better to
  make it
    edp.? I think savanna. is too broad a word for this.
 
  +1, brilliant. EDP is perfect.  I was worried about the scope of
  savanna. too.
 
   And one more bureaucratic thing... I see you already started
 implementing it [1],
   and it is named and goes as new EDP workflow [2]. I think new bluprint
 should be
   created for this feature to track all code changes as well as docs
 updates.
   Docs I mean public Savanna docs about EDP, rest api docs and samples.
 
  Absolutely, I can make it new blueprint.  Thanks.
 
   [1] https://review.openstack.org/#/c/69712
   [2]
 https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
  
   Regards,
   Alexander Ignatov
  
  
  
   On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
  
Hello all,
   
In our first pass at EDP, the model for job settings was very
 consistent
across all of our job types. The execution-time settings fit into
 this
(superset) structure:
   
job_configs = {'configs': {}, # config settings for oozie and hadoop
 'params': {},  # substitution values for Pig/Hive
 'args': []}# script args (Pig and Java actions)
   
But we have some things that don't fit (and probably more in the
future):
   
1) Java jobs have 'main_class' and 'java_opts' settings
  Currently these are handled as additional fields added to the
structure above.  These were the first to diverge.
   
2) Streaming MapReduce (anticipated) requires mapper and reducer
settings (different than the mapred..class settings for
non-streaming MapReduce)
   
Problems caused by adding fields

The job_configs structure above is stored in the database. Each time
 we
add a field to the structure above at the level of configs, params,
 and
args, we force a change to the database tables, a migration script
 and a
change to the JSON validation for the REST api.
   
We also cause a change for python-savannaclient and potentially other
clients.
   
This kind of change seems bad.
   
Proposal: Borrow a page from Oozie and add savanna. configs
-
I would like to fit divergent job settings into the structure we
 already
have.  One way to do this is to leverage the 'configs' dictionary.
  This
dictionary primarily contains settings for hadoop, but there are a
number of oozie.xxx settings that are passed to oozie as configs or
set by oozie for the benefit of running apps.
   
What if we allow savanna. settings to be added to configs?  If we
 do
that, any and all special configuration settings for specific job
 types
or subtypes can be handled with no database changes and no api
 changes.
   
Downside

Currently, all 'configs' are rendered in the generated oozie
 workflow.
The savanna. settings would be stripped out and processed by
 Savanna,
thereby changing that behavior a bit (maybe not a big deal)
   
We would also be mixing savanna. configs with config_hints for
 jobs,
so users would potentially see savanna. settings mixed with
 oozie
and hadoop settings.  Again, maybe not a big deal, but it might blur
 the
lines a little bit.  Personally, I'm okay with this.
   
Slightly different
--
We could also add a 'savanna-configs': {} element to job_configs to
keep the configuration spaces separate.
   
But, now we would have 'savanna-configs' (or another name),
 'configs',
'params', and 'args'.  Really? Just how many different types of
 values
can we come up with? :)
   
I lean away from this approach.
   
Related: breaking up the superset
-
   
It is also the case that not every job type has every value type.
   
Configs   ParamsArgs
HiveY YN
Pig Y YY
MapReduce   Y NN
JavaY NY
   
So do we make that explicit in the docs and enforce it in the api
 with
errors?
   
Thoughts? I'm sure there are some :)
   
Best,
   
Trevor
   
   
   
   
  

Re: [openstack-dev] [savanna] Undoing a change in the alembic migrations

2014-01-29 Thread Andrew Lazarev
+1 on new migration script. Just to be consecutive.

Andrew.


On Wed, Jan 29, 2014 at 2:17 PM, Trevor McKay tmc...@redhat.com wrote:

 Hi Sergey,

   In https://review.openstack.org/#/c/69982/1 we are moving the
 'main_class' and 'java_opts' fields for a job execution into the
 job_configs['configs'] dictionary.  This means that 'main_class' and
 'java_opts' don't need to be in the database anymore.

   These fields were just added in the initial version of the migration
 scripts.  The README says that migrations work from icehouse. Since
 this is the initial script, does that mean we can just remove references
 to those fields from the db models and the script, or do we need a new
 migration script (002) to erase them?

 Thanks,

 Trevor




Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-01-29 Thread Andrew Lazarev
Alexander,

What is the purpose of exposing this on the user side? Both engines must do
exactly the same thing, and they exist at the same time only for a transition
period until the heat engine is stabilized. I don't see any value in the
proposed option.

Andrew.


On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov aigna...@mirantis.com wrote:

 Today Savanna has two provisioning engines: heat and the old one known as
 'direct'.
 Users can choose which engine will be used by setting special parameter in
 'savanna.conf'.

 I have an idea to give users the ability to define the provisioning engine
 not only when savanna is started but also when a new cluster is launched. The
 idea is simple.
 We will just add a new field 'provisioning_engine' to the 'cluster' and
 'cluster_template'
 objects. The benefit is obvious: users can easily switch from one engine to
 another without
 restarting the savanna service. Of course, this parameter can be omitted and
 the default value
 from 'savanna.conf' will be applied.

 Is this viable? What do you think?

 Regards,
 Alexander Ignatov
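Mechanically, the per-cluster choice being proposed reduces to a lookup with a fallback to the configured default. Everything below (class and function names) is a hypothetical sketch, not savanna code:

```python
class DirectEngine(object):
    """Stand-in for the 'direct' provisioning engine."""
    name = "direct"

class HeatEngine(object):
    """Stand-in for the heat-based provisioning engine."""
    name = "heat"

ENGINES = {"direct": DirectEngine, "heat": HeatEngine}

def get_engine(cluster_engine, conf_default="direct"):
    """Pick the engine named on the cluster, else the savanna.conf default."""
    return ENGINES[cluster_engine or conf_default]()

print(get_engine(None).name)    # direct (cluster omitted the field)
print(get_engine("heat").name)  # heat
```

The counterargument in this thread still applies: since both engines must behave identically, the lookup adds a user-visible knob without changing any outcome.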






Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-24 Thread Andrew Lazarev
what about having swift:// which defaults to the configured tenant and
auth url for what we now call swift-internal, and we allow for user input
to change tenant and auth url for what would be swift-external?

I like the proposal.

Andrew.
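A sketch of what the single-scheme idea could look like on the parsing side: tenant and auth url come from configuration unless the user supplies overrides. The default values and key names here are invented for illustration:

```python
from urllib.parse import urlparse

# Stand-ins for values savanna would read from its own configuration.
CONF_DEFAULTS = {"tenant": "services",
                 "auth_url": "http://127.0.0.1:5000/v2.0"}

def resolve_swift_source(url, overrides=None):
    """Resolve a swift:// URL into container/object plus credentials."""
    parsed = urlparse(url)
    if parsed.scheme != "swift":
        raise ValueError("not a swift data source: %s" % url)
    creds = dict(CONF_DEFAULTS)
    creds.update(overrides or {})          # user-supplied tenant/auth_url
    creds["container"] = parsed.netloc
    creds["object"] = parsed.path.lstrip("/")
    return creds

internal = resolve_swift_source("swift://logs/job-output.txt")
external = resolve_swift_source("swift://logs/job-output.txt",
                                {"auth_url": "http://other:5000/v2.0"})
print(internal["auth_url"], external["auth_url"])
```

With this shape there is no separate swift-external:// scheme at all — "external" is just a swift:// source with overridden credentials.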


On Fri, Jan 24, 2014 at 4:50 AM, Matthew Farrellee m...@redhat.com wrote:

 andrew,

 what about having swift:// which defaults to the configured tenant and
 auth url for what we now call swift-internal, and we allow for user input
 to change tenant and auth url for what would be swift-external?

 in fact, we may need to add the tenant selection in icehouse. it's a
 pretty big limitation to only allow a single tenant.

 best,


 matt

 On 01/23/2014 11:15 PM, Andrew Lazarev wrote:

 Matt,

 For swift-internal we are using the same keystone (and identity protocol
 version) as for savanna. Also savanna admin tenant is used.

 Thanks,
 Andrew.


 On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee m...@redhat.com wrote:

 what makes it internal vs external?

 swift-internal needs user & pass

 swift-external needs user & pass & ?auth url?

 best,


 matt

 On 01/23/2014 08:43 PM, Andrew Lazarev wrote:

 Matt,

 I can easily imagine situation when job binaries are stored in
 external
 HDFS or external SWIFT (like data sources). Internal and
 external swifts
 are different since we need additional credentials.

 Thanks,
 Andrew.


  On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee m...@redhat.com wrote:

  trevor,

  job binaries are stored in swift or an internal savanna db,
  represented by swift-internal:// and savanna-db://
 respectively.

  why swift-internal:// and not just swift://?

  fyi, i see mention of a potential future version of savanna
 w/
  swift-external://

  best,


  matt



Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-23 Thread Andrew Lazarev
Matt,

I can easily imagine situation when job binaries are stored in external
HDFS or external SWIFT (like data sources). Internal and external swifts
are different since we need additional credentials.

Thanks,
Andrew.


On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee m...@redhat.com wrote:

 trevor,

 job binaries are stored in swift or an internal savanna db, represented by
 swift-internal:// and savanna-db:// respectively.

 why swift-internal:// and not just swift://?

 fyi, i see mention of a potential future version of savanna w/
 swift-external://

 best,


 matt
