Re: [openstack-dev] [Nova] why force_config_drive is a per compute node config

2014-02-28 Thread Jiang, Yunhong
Hi, Michael, I cooked a patch at https://review.openstack.org/#/c/77027/ and 
please have a look.

Another thing I'm not sure about: currently 'nova show' only reports whether the 
user specified 'config_drive', according to the DB; the user has no idea whether 
the config drive actually succeeded, or which format was used, etc. Do you think 
we should extend this information to make it more useful?

Also, what do you think about basing config_drive_format on an image property as 
well, instead of a compute node config option? IIRC, the vfat/cdrom choice is 
mostly driven by the image's requirements, right? Or we could let the image 
property take precedence.
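
For illustration, this is roughly what image-property precedence could look like
in the driver; the property name 'img_config_drive_format' and the helper are
purely hypothetical, not existing Nova code:

from oslo.config import cfg

CONF = cfg.CONF  # assumes Nova's existing 'config_drive_format' option is registered

def pick_config_drive_format(image_meta):
    # A per-image hint (hypothetical property) would take precedence...
    img_format = image_meta.get('properties', {}).get('img_config_drive_format')
    if img_format in ('iso9660', 'vfat'):
        return img_format
    # ...falling back to the compute node's configured default otherwise.
    return CONF.config_drive_format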

Thanks
--jyh

 -Original Message-
 From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
 Sent: Thursday, February 27, 2014 1:55 PM
 To: OpenStack Development Mailing List (not for usage questions);
 yunhong jiang
 Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
 compute node config
 
 Hi, Michael, I created a bug at
 https://bugs.launchpad.net/nova/+bug/1285880 and please have a look.
 
 Thanks
 --jyh
 
  -Original Message-
  From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
  Sent: Thursday, February 27, 2014 1:35 PM
  To: OpenStack Development Mailing List (not for usage questions);
  yunhong jiang
  Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
  compute node config
 
 
 
   -Original Message-
   From: Michael Still [mailto:mi...@stillhq.com]
   Sent: Thursday, February 27, 2014 1:04 PM
   To: yunhong jiang
   Cc: OpenStack Development Mailing List
   Subject: Re: [openstack-dev] [Nova] why force_config_drive is a per
   compute node config
  
   On Fri, Feb 28, 2014 at 6:34 AM, yunhong jiang
   yunhong.ji...@linux.intel.com wrote:
Greeting,
I have some questions on the force_config_drive
  configuration
   options
and hope get some hints.
a) Why do we want this? Per my understanding, if the user
   want to use
the config drive, they need specify it in the nova boot. Or is it
because possibly user have no idea of the cloudinit installation in the
image?
  
   It is possible for a cloud admin to have only provided images which
   work with config drive. In that case the admin would want to force
   config drive on, to ensure that instances always boot correctly.
 
  So would it make sense to keep it as image property, instead of compute
  node config?
 
  
b) even if we want to force config drive, why it's a compute
   node
config instead of cloud wise config? Compute-node config will have
   some
migration issue per my understanding.
  
   That's a fair point. It should probably have been a flag on the api
   servers. I'd file a bug for that one.
 
  Thanks, and I can cook a patch for it. Still I think it will be better if 
  we use
  image property?
 
  --jyh
 
  
   Michael
  
   --
   Rackspace Australia
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][baremetal] Deprovision of bare-metal nodes

2014-02-28 Thread Taurus Cheung
Hi,

I am working on deploying images to bare-metal machines using nova bare-metal. 
After deployment, I would like to deprovision (disconnect) bare-metal nodes 
from OpenStack controller/compute, so these bare-metal nodes can run standalone.

A typical scenario is that I have a workstation with OpenStack controller and 
nova baremetal compute installed. During bare-metal deployment, I plug the 
workstation into the network. After deployment, I disconnect it from the 
network.

Is this use case typical and feasible, and is it free of side effects?

Regards,
Taurus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] BP:Store both IPv6 LLA and GUA address on router interface port

2014-02-28 Thread Xuhan Peng
Robert,

Thanks for your comments! See my replies inline.


On Thu, Feb 27, 2014 at 9:56 PM, Robert Li (baoli) ba...@cisco.com wrote:

  Hi Xuhan,

  Thank you for your summary. see comments inline.

  --Robert

   On 2/27/14 12:49 AM, Xuhan Peng pengxu...@gmail.com wrote:

As the follow up action of IPv6 sub-team meeting [1], I created a new
 blueprint [2] to store both IPv6 LLA and GUA address on router interface
 port.

  Here is what it's about:

  Based on the two modes (ipv6-ra-mode and ipv6-address-mode) design[3],
 RA can be sent from both openstack controlled dnsmasq or existing devices.

  RA from dnsmasq: the gateway IP that dnsmasq binds to should be a link-local
 address (LLA), according to [4]. This means we need to pass the LLA of the
 created router internal port (i.e. qr-) to the dnsmasq spawned by the openstack
 dhcp agent. In the meantime, we need to assign a GUA to the created
 router port so that traffic from the external network can be routed back
 into the internal subnet, using the GUA of the router port as the next hop.
 Therefore, we will need some change to the current logic to leverage both
 LLA and GUA on the router port.


  [Robert]: in this case, a LLA address is automatically created based on
 the gateway port's MAC address (EUI64 format). If it's determined that the
 gateway port is enabled with IPv6 (due to the two modes), then an RA rule
 can be installed based on the gateway port's automatic LLA.
 if a service VM is running on the same subnet that supports IPv6 (either
 by RA or DHCPv6), then the service VM is attached to a neutron port on the
 same subnet (the gateway port). In this case, the automatic LLA on that
 port can be used to install the RA Rule. This is actually the same as in
 the dnsmasq case: use the gateway port's automatic LLA.

[Xuhan]  I agree there is no need to create another LLA for the gateway port
in this case, since one is automatically created. We can probably use the
calculation method introduced by this patch (
https://review.openstack.org/#/c/56184/) to accomplish this, and create an
RA rule based on this address.
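
For reference, a minimal sketch of deriving the EUI-64 based link-local address
from a port's MAC address (just an illustration of the idea, not the code from
the patch above; netaddr also has EUI(mac).ipv6_link_local() for this, if I
remember correctly):

import netaddr

def lla_from_mac(mac):
    # Modified EUI-64 interface id: flip the universal/local bit of the
    # first octet and insert ff:fe in the middle of the MAC.
    octets = [int(x, 16) for x in mac.split(':')]
    octets[0] ^= 0x02
    eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
    # Prepend the fe80::/64 link-local prefix.
    value = (0xfe80 << 112) | int(''.join('%02x' % o for o in eui64), 16)
    return str(netaddr.IPAddress(value, version=6))

print(lla_from_mac('fa:16:3e:11:22:33'))  # fe80::f816:3eff:fe11:2233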

As you pointed out in my code review, the ICMPv6 type filter is not
supported by current security group. We will need a new blueprint to enable
this. I will try to create one soon.


  RA from an existing device on the same link which is not controlled by
 openstack: dnsmasq will not send RA in this case. RA is sent from the
 subnet's gateway address, which should also be an LLA according to [4].
 Allowing the subnet's gateway IP to be an LLA is enough in this case. The
 current code works when force_gateway_on_subnet = False.


  [Robert]
 if it's a provider network, the gateway already exists. I believe that the
 behavior of the --gateway option in the subnet API is to indicate the
 gateway's true IP address and install default route. In the IPv6 case,
 however, due to the existence of RA, the gateway doesn't have to be
 provided. In this case, a neutron gateway port doesn't have to be created,
 either. Installing a RA rule to prevent RA from malicious source should be
 done explicitly. A couple of methods may be considered. For example, an
 option such as --allow-ra LLA can be introduced in the subnet API, or the
 security group rule can be enhanced to allow specification of message type
 so that a RA rule can be incorporated.


[Xuhan]  This is a problem that we may not be able to solve in Icehouse,
considering the time left. However, I think the gateway port is not created
until we attach the subnet to the router. Therefore, as a workaround in
Icehouse, we can allow an LLA as the gateway IP passed to subnet creation, so
RA from the provider network gateway LLA can also be allowed. The logic to
create the RA rule could look like this:

1. if gateway ip of a subnet is GUA (when dnsmasq or a service VM is
sending RA):
  calculate the gateway port's LLA based on port's MAC address,
  then allow RA from this LLA.

2. if gateway ip of a subnet is LLA (for provider network existing gateway)
  allow RA from this LLA.

In next release, we can evaluate how to allow RA from existing gateway in a
better way.

Thoughts?


  In any case, I don't believe that the gateway behavior should be
 modified. In addition, I don't think that this functionality (IPv6 RA rule)
 has to be provided right now, but can be introduced when it's completely
 sorted out.

  The above is just my two cents.

  thanks.




 RA from the router gateway port (i.e. qg-): the LLA of the gateway port
 (qg-) should be set as the gateway of the tenant subnet to get the RA from
 it. This could potentially be calculated by [5] or by other methods in
 the future, considering privacy extensions. However, this will make the
 tenant network gateway port qr- useless. Therefore, we also need a code
 change to the current router interface attach logic.
  If you have any comments on this, please let me know.

  [1]
 

Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-28 Thread Ladislav Smola

On 02/27/2014 05:02 PM, Ana Krivokapic wrote:


On 02/27/2014 04:41 PM, Tzu-Mainn Chen wrote:

Hello,

I think if we are going to use an OpenStack CLI, it has to be something like
https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just be more confusing when
people are debugging issues, so it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?
..

Call 'neutron net-create' and you just know.

Btw. who would actually hire a sysadmin who starts using a CLI with no idea
what they are doing? They need to know what each service does, how to use
each one correctly, and how to debug it when something is wrong.


For flavors, just use flavors; we call them flavors in the code too. It just
has a nicer face in the UI.

Actually, don't we call them node_profiles in the UI code?


We do: 
https://github.com/openstack/tuskar-ui/tree/master/tuskar_ui/infrastructure/node_profiles

  Personally,
I'd much prefer that we call them flavors in the code.
I agree, keeping the name flavor makes perfect sense here, IMO. The
only benefit of using node profile seems to be that it is more
descriptive. However, as already mentioned, admins are well used to
the name flavor. It seems to me that this change introduces more
confusion than it clears up. In other words, it does more harm than
good.




I see, we have introduced an API flavor wrapper:
https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/api.py#L91


Nevertheless, keeping 'flavor' makes sense.



Mainn





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Alex Xu

On 2014?02?28? 13:40, Chris Friesen wrote:

On 02/27/2014 06:00 PM, Alex Xu wrote:


Does that mean our code will look like the below?
if client_version > 2:
    ...
elif client_version > 3:
    ...
elif client_version > 4:
    ...
elif client_version > 5:
    ...
elif client_version > 6:
    ...

And we would need to test each version... That looks bad...


I don't think the code would look like that

Each part of the API could look at the version separately.  And each 
part of the API only needs to check the client version if it has made 
a backwards-incompatible change.


So a part of the API that only made one backwards-incompatible change 
at version 3 would only need one check.


if client_version >= 3
    do_newer_something()
else
    do_something()



Maybe some other part of the API made a change at v6 (assuming global 
versioning).  That part of the API would also only need one check.



if client_version >= 6
    do_newer_something()
else
    do_something()



Yes, I know. But it still looks bad :(

In the API code, it will look like the below:

def do_something(self, body):
    if client_version == 2:
        args = body['SomeArguments']
    elif client_version == 3:
        args = body['some_arguments']

    try:
        ret = self.compute_api.do_something(args)
    except exception.SomeException:
        if client_version == 2:
            raise exception.HTTPBadRequest()
        elif client_version == 4:
            raise exception.HTTPConflictRequest()

    if client_version == 2:
        return {'SomeArguments': ret}
    elif client_version == 3:
        return {'some_arguments': ret}



Chris







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-28 Thread Roman Podoliaka
Hi Clark, all,

https://review.openstack.org/#/c/76634/ has been merged, but I still
get 'command denied' errors [1].

Is there something else that must be done before we can use the new
privileges of the openstack_citest user?

Thanks,
Roman

[1] 
http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/e115a5f/console.html
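
(For context, a minimal sketch of the kind of thing the tests are trying to do,
assuming SQLAlchemy and the openstack_citest credentials provisioned on the CI
slaves; the database name prefix is just an example:)

import uuid

import sqlalchemy

engine = sqlalchemy.create_engine(
    'mysql://openstack_citest:openstack_citest@localhost')
db_name = 'test_migrations_%s' % uuid.uuid4().hex[:8]
with engine.connect() as conn:
    conn.execute('CREATE DATABASE %s' % db_name)   # works with the CREATE grant
    conn.execute('USE %s' % db_name)
    conn.execute('CREATE TABLE t (id INTEGER)')    # also works
    conn.execute('INSERT INTO t (id) VALUES (1)')  # the 'command denied' case
    conn.execute('DROP DATABASE %s' % db_name)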

On Wed, Feb 26, 2014 at 11:54 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
 Works for me.

 Thanks,
 Roman

 On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for parallel
 running of tests on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, openstack_citest user can only create tables in the
 created databases, but not to perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database, becomes the owner of
 the database (and can do everything within this database), and in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think, we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single use slaves for all unittesting so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Questions about syncing non-imported files

2014-02-28 Thread ChangBo Guo
1)
I found that the modules tracked in openstack-common.conf are not consistent
with the actual modules in the 'openstack/common' directory in some projects,
like Nova. I drafted a script to enforce this check in
https://review.openstack.org/#/c/76901/. It may need more work to improve it.
Please help review :).
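
(Roughly, the idea is something like the following simplified sketch; this is
not the actual script under review, and the paths are just examples:)

import os

def modules_in_conf(conf_path):
    # openstack-common.conf lists one 'module=foo' line per synced module.
    with open(conf_path) as f:
        return set(line.split('=', 1)[1].strip()
                   for line in f if line.strip().startswith('module='))

def modules_in_tree(common_dir):
    # Top-level .py files and packages under <project>/openstack/common.
    names = set()
    for entry in os.listdir(common_dir):
        if entry.endswith('.py') and entry != '__init__.py':
            names.add(entry[:-3])
        elif os.path.isdir(os.path.join(common_dir, entry)):
            names.add(entry)
    return names

conf = modules_in_conf('openstack-common.conf')
tree = modules_in_tree('nova/openstack/common')
print('listed but missing from the tree: %s' % sorted(conf - tree))
print('in the tree but not listed: %s' % sorted(tree - conf))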

2)
Some projects, like Nova and Cinder, include a README in the 'openstack/common'
directory which is out of date, but other projects don't include it. Should we
keep the file in 'openstack/common', move it somewhere else, or just remove it?

3) What kinds of modules can be recorded in openstack-common.conf? Only
modules in the openstack/common directory? Here is an example:
https://github.com/openstack/nova/blob/master/openstack-common.conf#L17

4) We have some useful check scripts in tools/; is there any plan or rule for
syncing them to downstream projects? I would like to volunteer for this.


-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-28 Thread Ladislav Smola

On 02/27/2014 04:30 PM, Dougal Matthews wrote:

On 27/02/14 15:08, Ladislav Smola wrote:

Hello,

I think if we are going to use an OpenStack CLI, it has to be something like
https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just be more confusing when
people are debugging issues, so it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?


I would at least expect the debug/log of the tuskar client to show what
calls it's making on other clients, so following this trail wouldn't be
too hard.



Well sure, this is part of 'being done very well'.

Though a lot of calls kick off asynchronous jobs, which can result in errors
you will just not see when you call the clients.
So you will need to know where to look, depending on what is acting weird.

What I am trying to say is that OpenStack is just complex; there is no way
around it, and sysadmins just need to understand what they are doing.

If we are going to simplify that, we would need to build something like we
have in the UI: some abstraction layer that leads the user and won't let
them break things. Though this leads to limiting the functionality to what
we are able to control.

Which I am not entirely convinced is what CLI users want.







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Thoughts on adding a '--progress' option?

2014-02-28 Thread Jay Lau
Do 'heat resource-list' and 'heat event-list' help?

[gyliu@drsserver hadoop_heat(keystone_admin)]$ heat resource-list a1
++--++--+
| logical_resource_id| resource_type|
resource_status| updated_time |
++--++--+
| CfnUser| AWS::IAM::User   |
CREATE_COMPLETE| 2014-02-28T16:50:11Z |
| HadoopM| AWS::EC2::Instance   |
CREATE_IN_PROGRESS | 2014-02-28T16:50:11Z |
| HadoopMasterWaitHandle | AWS::CloudFormation::WaitConditionHandle |
CREATE_COMPLETE| 2014-02-28T16:50:11Z |
| HadoopSlaveKeys| AWS::IAM::AccessKey  |
CREATE_IN_PROGRESS | 2014-02-28T16:50:11Z |
| HadoopMasterWaitCondition  | AWS::CloudFormation::WaitCondition   |
INIT_COMPLETE  | 2014-02-28T16:50:31Z |
| LaunchConfig   | AWS::AutoScaling::LaunchConfiguration|
INIT_COMPLETE  | 2014-02-28T16:50:31Z |
| HadoopSGroup   | AWS::AutoScaling::AutoScalingGroup   |
INIT_COMPLETE  | 2014-02-28T16:50:52Z |
| HadoopSlaveScaleDownPolicy | AWS::AutoScaling::ScalingPolicy  |
INIT_COMPLETE  | 2014-02-28T16:50:52Z |
| HadoopSlaveScaleUpPolicy   | AWS::AutoScaling::ScalingPolicy  |
INIT_COMPLETE  | 2014-02-28T16:50:52Z |
| MEMAlarmHigh   | AWS::CloudWatch::Alarm   |
INIT_COMPLETE  | 2014-02-28T16:50:52Z |
| MEMAlarmLow| AWS::CloudWatch::Alarm   |
INIT_COMPLETE  | 2014-02-28T16:50:52Z |
++--++--+

[gyliu@drsserver hadoop_heat(keystone_admin)]$ heat event-list -r
HadoopMasterWaitCondition a1
+---+---+++--+
| logical_resource_id   | id| resource_status_reason |
resource_status| event_time   |
+---+---+++--+
| HadoopMasterWaitCondition | 37389 | state changed  |
CREATE_IN_PROGRESS | 2014-02-28T16:51:07Z |
| HadoopMasterWaitCondition | 37390 | state changed  |
CREATE_COMPLETE| 2014-02-28T16:52:46Z |
+---+---+++--+
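
(If you want something closer to a live progress view, a client-side polling
loop can approximate it. A rough sketch, assuming an already-authenticated
python-heatclient Client object called 'heat':)

import time

def watch_stack(heat, stack_id, interval=5):
    # Poll until no resource is left in an IN_PROGRESS state.
    while True:
        resources = heat.resources.list(stack_id)
        for res in resources:
            print('%-30s %s' % (res.logical_resource_id, res.resource_status))
        if not any(res.resource_status.endswith('IN_PROGRESS')
                   for res in resources):
            break
        time.sleep(interval)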

Thanks,

Jay


2014-02-28 15:28 GMT+08:00 Qiming Teng teng...@linux.vnet.ibm.com:


 The creation of a stack is usually a time-costly process, considering that
 there are cases where software packages need to be installed and
 configured.

 There are also cases where a stack consists of more than one VM instance,
 with dependencies between the instances.  The instances may have to be
 created one by one.

 Are Heat people considering adding some progress updates during the
 deployment?  For example, a simple log that can be printed by heatclient
 telling the user what progress has been made:

 Refreshing known resources types
 Receiving template ...
 Validating template ...
 Creating resource my_lb [AWS::EC2:LoadBalancer]
 Creating resource lb_instance1 [AWS::EC2::Instance]
 Creating resource latency_watcher [AWS::CloudWatch::Alarm]
 
 ...


 This would be useful for users to 'debug' their templates, especially
 when the template syntax is okay but its activities are not the intended
 ones.

 Do we have to rely on heat-cfn-api to get these notifications?

 Any thoughts?

   - Qiming

 Research Staff Member
 IBM Research - China
 tengqim AT cn DOT ibm DOT com






-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6]

2014-02-28 Thread Xuhan Peng
Here is a list of related blueprint and bug patches:

Create new IPv6 attributes for Subnets
https://review.openstack.org/#/c/52983/

Ensure entries in dnsmasq belong to a subnet using DHCP
https://review.openstack.org/#/c/64578/

Calculate stateless IPv6 address
https://review.openstack.org/#/c/56184/

Add support to DHCP agent for BP ipv6-two-attributes
https://review.openstack.org/#/c/70649/

Permit ICMPv6 RAs only from known routers
https://review.openstack.org/#/c/72252/

Allow LLA as router interface of IPv6 subnet
https://review.openstack.org/#/c/76125/

Create new IPv6 attributes for Subnets by client
https://review.openstack.org/#/c/75871/

Make sure dnsmasq can distinguish IPv6 address from MAC address
https://review.openstack.org/#/c/75355/


On Thu, Feb 27, 2014 at 9:04 AM, Shixiong Shang 
sparkofwisdom.cl...@gmail.com wrote:

 Hi, Sean and the team:

 Do we have a list of code reviews and a list of BPs submitted by Neutron
 IPv6 sub-team targeting at Icehouse 3? Would appreciate everybody's help to
 compose a list so we won't overlook anything, especially the deadline is
 next Friday.

 Thanks!

 Shixiong


 *Shixiong Shang*

 *!--- Stay Hungry, Stay Foolish ---!*




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor Framework

2014-02-28 Thread Gary Duan
Hi, Eugene,

What are the parameters that will be part of the flavor definition? As I am
thinking of it now, the parameters could be performance- and capacity-related,
for example throughput, maximum session count and so on; or capability-related,
for example HA or L7 switching.

Compared to the number of CPUs and the memory size in a Nova flavor, these
parameters don't seem to have exact definitions across different
implementations. Or do you think that is not something we need to worry about,
and it's entirely the operator's decision how to rate different drivers?
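
(For what it's worth, the capability-matching idea being discussed could look
roughly like this; purely illustrative, all names made up:)

# Each driver advertises a set of capabilities; a flavor lists what it requires.
DRIVERS = {
    'haproxy': {'l7_switching'},
    'vendor_x': {'l7_switching', 'high_throughput', 'ha'},
}

def schedule(flavor_capabilities, drivers=DRIVERS):
    # Pick a driver whose advertised capabilities cover the flavor's requirements.
    candidates = [name for name, caps in drivers.items()
                  if flavor_capabilities <= caps]
    if not candidates:
        raise Exception('No driver can satisfy this flavor')
    # The chosen driver would then be bound to the resource, so that later
    # calls on the resource go to the same driver.
    return candidates[0]

print(schedule({'l7_switching', 'ha'}))  # -> 'vendor_x'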

Thanks,
Gary


On Thu, Feb 27, 2014 at 10:19 PM, Eugene Nikanorov
enikano...@mirantis.comwrote:

 Hi Jay,

 Thanks for looking into this.


 1) I'm not entirely sure that a provider attribute is even necessary to
 expose in any API. What is important is for a scheduler to know which
 drivers are capable of servicing a set of attributes that are grouped
 into a flavor.

 Well, provider becomes a read-only, admin-only attribute (just to see
 which driver actually handles the resources), so not too much API
 visibility.


 2) I would love to see the use of the term flavor banished from
 OpenStack APIs. Nova has moved from flavors to instance types, which
 clearly describes what the thing is, without the odd connotations that
 the word flavor has in different languages (not to mention the fact
 that flavor is spelled flavour in non-American English).

 How about using the term load balancer type, VPN type, and firewall
 type instead?

 Oh... I don't have a strong opinion on the name of the term.
 Flavor was used several times in our discussions and is short.
 *Instance* Type, however, also seems fine. Another option is probably a
 Service Offering.



 3) I don't believe the FlavorType (public or internal) attribute of the
 flavor is useful. We want to get away from having any vendor-specific
 attributes or objects in the APIs (yes, even if they are hidden from
 the normal user). See point #1 for more about this. A scheduler should
 be able to match a driver to a request simply by matching the set of
 required capabilities in the requested flavor (load balancer type) to
 the set of capabilities advertised by the driver.

 ServiceType you mean? If you're talking about ServiceType, then it is mostly
 there for the user to filter flavors (I'm using the short term for now) by
 service type. Say, when a user wants to create a new load balancer, Horizon
 will show only the flavors related to LB.
 That could also be solved by having different names like you suggested
 above: LB type, VPN type, etc.
 On the other hand, that would be similar objects with different names; does
 that make much sense?

 I'm not sure what you think 'vendor-specific' attributes are; I don't
 remember having any plan to expose vendor-related parameters.
 The parameters that a flavor represents are capabilities of the service, in
 terms that users care about: latency, throughput, topology, technology, etc.



 4) A minor point... I think it would be fine to group the various
 types into a single database table behind the scenes (like you have in
 the Object model section). However, I think it is useful to have the
 public API expose a /$servie-types resource endpoint for each service
 itself, instead of a generic /types (or /flavors) endpoint. So, folks
 looking to set up a load balancer would call GET /balancer-types, or
 call neutron balancer-type-list, instead of calling
 GET /types?service=load-balancer or neutron flavor-list
 --service=load-balancer

 I'm fine with this suggestion.



 5) In the section on Scheduling, you write Scheduling is a process of
 choosing provider and a backend for the resource. As mentioned above, I
 think this could be changed to something like this: Scheduling is a
 process of matching the set of requested capabilities -- the flavor
 (type) -- to the set of capabilities advertised by a driver for the
 resource. That would put Neutron more in line with how Nova handles
 this kind of thing.

 I agree; I actually meant this, and the nova example is how I think it
 should work.
 But more important is what the result of scheduling is.
 We discussed that yesterday with Mark, and I think we got to the point
 where we could not find agreement for now.
 In my opinion the result of scheduling is (at least) binding the resource
 to the driver, so further calls to the resource go to the same driver
 because of that binding.
 That's pretty much the same as how agent scheduling works.

 By the way, I'm thinking about getting rid of the 'provider' term and using
 'driver' instead. Currently 'provider' is just a user-facing representation
 of the driver. Once we introduce flavors/service types/etc., we can use the
 term 'driver' for implementation purposes.

 Thanks,
 Eugene.





Re: [openstack-dev] [Mistral] Renaming action types

2014-02-28 Thread Renat Akhmerov
Haah :) Honestly, I don’t like it. “invoke” doesn’t seem to carry any useful 
information here. And “invoke_mistral” looks completely confusing, since it’s 
not clear it’s related to HTTP.

Renat Akhmerov
@ Mirantis Inc.



On 27 Feb 2014, at 23:42, Manas Kelshikar ma...@stackstorm.com wrote:

 How about ...
 
 invoke_http  invoke_mistral to fit the verb_noun pattern. 
 
 
 On Wed, Feb 26, 2014 at 6:04 AM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 Ooh, I was wrong. Sorry. We use dash naming. We have “on-success”, “on-error” 
 and so forth.
 
 Please let us know if you see other inconsistencies.
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 26 Feb 2014, at 21:00, Renat Akhmerov rakhme...@mirantis.com wrote:
 
  Thanks Jay.
 
  Regarding underscore naming: if you meant using underscore naming for 
  “createVM” and “novaURL”, then yes, “createVM” is just a task name and it’s 
  a user preference. The same goes for “novaURL”, which will be defined by users. 
  As for keywords, we seemingly follow underscore naming.
 
  Renat Akhmerov
  @ Mirantis Inc.
 
 
 
  On 26 Feb 2014, at 17:58, Jay Pipes jaypi...@gmail.com wrote:
 
  On Wed, 2014-02-26 at 14:38 +0700, Renat Akhmerov wrote:
  Folks,
 
  I’m proposing to rename these two action types REST_API and
  MISTRAL_REST_API to HTTP and MISTRAL_HTTP. Words “REST” and “API”
  don’t look correct to me, if you look at
 
 
  Services:
  Nova:
type: REST_API
parameters:
  baseUrl: {$.novaURL}
actions:
  createVM:
parameters:
  url: /servers/{$.vm_id}
  method: POST
 
  There’s no information about “REST” or “API” here. It’s just a spec
  how to form an HTTP request.
 
  +1 on HTTP and MISTRAL_HTTP.
 
  On an unrelated note, would it be possible to use under_score_naming
  instead of camelCase naming?
 
  Best,
  -jay
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Defining term DSL

2014-02-28 Thread Renat Akhmerov
Yes. Guys, thanks for your feedback. I had a conversation with Dmitri today and 
realized you guys are right here. We should think about building basically a 
“domain model” which the system operates with, and once we have built it we 
should forget that we have some DSL, or whatever else we used to describe this 
model (it could be another language, for example). Our initial intention was 
actually different, but anyway, what you’re saying is valid. It looks like 
Nikolay agrees with me too, and he’s now reworking this commit. Coming up soon.

Renat Akhmerov
@ Mirantis Inc.



On 27 Feb 2014, at 23:36, Manas Kelshikar ma...@stackstorm.com wrote:

 I looked at the review prior to looking at the discussion, and even I was 
 confused by names like DSL*. The way I see it, the DSL is largely syntactic 
 sugar, and therefore it will be good to have a clear separation between DSL 
 and model. The fact that something is defined in a DSL is irrelevant once it 
 crosses the Mistral API border; in effect, within Mistral itself DSLTask, 
 DSLAction etc. are simply description objects, and how they were defined does 
 not matter to the Mistral implementation. 
 
 Each description object being a recipe to eventually execute a task. We 
 therefore already see these two manifestations in current code i.e. 
 DSLTask(per Nikolay's change) and Task 
 (https://github.com/stackforge/mistral/blob/master/mistral/api/controllers/v1/task.py#L30).
 
 To me it seems like we only need to agree upon names. Here are my suggestions 
 -
 
 i)
 DSLTask - Task
 Task - TaskInstance
 (Similarly for workflow, action etc.)
 
 OR
 
 ii)
 DSLTask - TaskSpec
 Task - Task
 (Similarly for workflow, action etc.)
  
 
 
 On Wed, Feb 26, 2014 at 9:31 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 26 Feb 2014, at 22:54, Dmitri Zimine d...@stackstorm.com wrote:
 
 Based on the terminology from [1], it's not part of the model, but the 
 language that describes the model in the file.
 
 Sorry, I’m having a hard time trying to understand this phrase :) What do you 
 mean by “model” here? And why should DSL be a part of the model?
 
 And theoretically this may be not the only language to express the workflow.
 
 Sure, from that perspective, for example, JVM has many “DSLs”: Java, Scala, 
 Groovy etc.
 
 Once the file is parsed, we operate on model, not on the language.
 
 How does it influence using term DSL? DSL is, in fact, a user interface. 
 Model is something we build inside a system to process DSL in a more 
 convenient way.
 
 
 I am afraid we are breaking an abstraction when begin to call things 
 DSLWorkbook or DSLWorkflow. What is the difference between Workbook and 
 DSLWorkbook, and how DSL is relevant here? 
 
 Prefix “DSL” tells that this exactly matches the structure of an object 
 declared with using DSL. But, for example, a workbook in a database may have 
 (and it has) a different structure better suitable for storing it in a 
 relational model.
 So I’m not sure what you mean by saying “we are breaking an abstraction” 
 here. What abstraction?
 
 [1] https://wiki.openstack.org/wiki/Mistral, 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] floating ip pool by name

2014-02-28 Thread Sergey Lukjanov
We can add support for network names at the client / dashboard level
as a UX enhancement. Client-side name resolving code sounds mostly useful.
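
(Something like this on the client side would cover it; a sketch, assuming
python-neutronclient:)

from neutronclient.v2_0 import client as neutron_client

# neutron = neutron_client.Client(username=..., password=...,
#                                 tenant_name=..., auth_url=...)

def resolve_network_id(neutron, name_or_id):
    # Accept either a network name or a UUID and always return the ID.
    nets = neutron.list_networks(name=name_or_id)['networks']
    if len(nets) == 1:
        return nets[0]['id']
    if not nets:
        # Assume the caller already passed an ID.
        return name_or_id
    raise Exception('Multiple networks named "%s", please use the ID'
                    % name_or_id)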

On Fri, Feb 28, 2014 at 11:43 AM, Alexander Ignatov
aigna...@mirantis.com wrote:
 Andrew,

 This change was needed for the heat engine. When the heat engine and a
 neutron env are used, Heat stacks fail with a 'Bad network UUID' error.
 This happens because the neutron client can't work with networks by name. So
 checking network IDs at the validation stage prevents stack failures and
 cluster errors. Also, savanna expects IDs for all resources used during
 cluster creation (flavors, images, etc.); now that includes networks.

 Regards,
 Alexander Ignatov



 On 28 Feb 2014, at 04:09, Andrew Lazarev alaza...@mirantis.com wrote:

 Hi Team,

 I was always using the floating_ip_pool: net04_ext construction and it
 worked fine. Now it responds with the validation error Floating IP pool
 net04_ext for node group 'manager' not found, because
 https://bugs.launchpad.net/savanna/+bug/1282027 was merged and savanna
 expects only an ID here. Is this an intentional restriction? What is the
 reasoning? Referencing by name is convenient.

 Thanks,
 Andrew.







-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Defining term DSL

2014-02-28 Thread Nikolay Makhotkin
Yes, I also think these changes refer more to the model than to the DSL.


On Fri, Feb 28, 2014 at 1:41 PM, Renat Akhmerov rakhme...@mirantis.comwrote:

 Yes. Guys, thanks for your feedback. I had a conversation with Dmitri
 today and realized you guys are right here. We should think about building
 basically a domain model which the system operates with and once we built
 it we should forget that we have some DSL or whatever we used to describe
 this model (could be other language, for example). Our initial intention
 actually was different but anyway what you're saying is valid. Looks like
 Nikolay agrees with me too and he's now reworking this commit. Coming up
 soon.

 Renat Akhmerov
 @ Mirantis Inc.



 On 27 Feb 2014, at 23:36, Manas Kelshikar ma...@stackstorm.com wrote:

 I looked at the review prior to looking at the discussion and even I was
 confused by names like DSL*. The way I see it DSL is largely syntatic sugar
 and therefore it will be good to have a clear separation between DSL and
 model. The fact that something is defined in a DSL is irrelevant once it
 crosses mistral API border in effect within mistral itself DSLTask,
 DSLAction etc are simply description objects and how they were defined does
 not matter to mistral implementation.

 Each description object being a recipe to eventually execute a task. We
 therefore already see these two manifestations in current code i.e.
 DSLTask(per Nikolay's change) and Task (
 https://github.com/stackforge/mistral/blob/master/mistral/api/controllers/v1/task.py#L30
 ).

 To me it seems like we only need to agree upon names. Here are my
 suggestions -

 i)
 DSLTask - Task
 Task - TaskInstance
 (Similarly for workflow, action etc.)

 OR

 ii)
 DSLTask - TaskSpec
 Task - Task
 (Similarly for workflow, action etc.)



 On Wed, Feb 26, 2014 at 9:31 PM, Renat Akhmerov rakhme...@mirantis.comwrote:


 On 26 Feb 2014, at 22:54, Dmitri Zimine d...@stackstorm.com wrote:

 Based on the terminology from [1], it's not part of the model, but the
 language that describes the model in the file.


 Sorry, I'm having a hard time trying to understand this phrase :) What do
 you mean by model here? And why should DSL be a part of the model?

 And theoretically this may be not the only language to express the
 workflow.


 Sure, from that perspective, for example, JVM has many DSLs: Java,
 Scala, Groovy etc.

 Once the file is parsed, we operate on model, not on the language.


 How does it influence using term DSL? DSL is, in fact, a user interface.
 Model is something we build inside a system to process DSL in a more
 convenient way.


 I am afraid we are breaking an abstraction when begin to call things
 DSLWorkbook or DSLWorkflow. What is the difference between Workbook and
 DSLWorkbook, and how DSL is relevant here?


 Prefix DSL tells that this exactly matches the structure of an object
 declared with using DSL. But, for example, a workbook in a database may
 have (and it has) a different structure better suitable for storing it in a
 relational model.
 So I'm not sure what you mean by saying we are breaking an abstraction
 here. What abstraction?

 [1] https://wiki.openstack.org/wiki/Mistral,








-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][TripleO] Neutron DB migrations best practice

2014-02-28 Thread Roman Podoliaka
Hi Robert, all,

 But what are we meant to do? Nova etc are dead easy: nova-manage db sync 
 every time the code changes, done.
I believe it's not different from Nova: run a db sync every time the
code changes. It's the only way to guarantee that the most recent DB schema
version is used.

Interestingly, Neutron worked for us in TripleO even without a
db-sync. I think that's because Neutron internally calls
metadata.create_all(), which creates the DB schema from the SQLAlchemy
model definitions (which is perfectly ok for *new installations*, as long
as you then 'stamp' the DB schema revision, but it *does not* work for
upgrades).
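
(In other words, something like this simplified sketch of the 'create from
models, then stamp' idea; the module paths are from memory and may differ:)

from alembic import command
from alembic.config import Config

from neutron.db import model_base  # Neutron's SQLAlchemy declarative base

def initialise_new_db(engine, alembic_ini):
    # Fresh install: build the schema straight from the models...
    model_base.BASEV2.metadata.create_all(engine)
    # ...and record that it corresponds to the latest migration, so that a
    # later 'upgrade head' starts from the right revision.
    command.stamp(Config(alembic_ini), 'head')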

Thanks,
Roman

On Wed, Feb 26, 2014 at 2:42 AM, Robert Collins
robe...@robertcollins.net wrote:
 So we had this bug earlier in the week;
 https://bugs.launchpad.net/tripleo/+bug/1283921

Table 'ovs_neutron.ml2_vlan_allocations' doesn't exist in 
 neutron-server.log

 We fixed this by running neutron-db-migrate upgrade head... which we
 figured out by googling and asking weird questions in
 #openstack-neutron.

 But what are we meant to do? Nova etc are dead easy: nova-manage db
 sync every time the code changes, done.

 Neutron seems to do something special and different here, and it's not
 documented from an ops perspective AFAICT - so - please help, cluebats
 needed!

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack java sdk

2014-02-28 Thread rash g
Hello,
I am working on a project which uses OpenStack. I want to
connect my Java code to OpenStack. For that I was thinking of using the
openstack-java-sdk, but I did not find any jar files for it to
import into my Java code.
Can anyone tell me where I can get jar files for openstack-java-sdk?

Thanks,
Rashmi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-02-28 Thread Alexei Kornienko

Hi,

Let me express my concerns on this topic:

With some recent changes made to Tempest compatibility with
nosetests is going away.
I think that we should not drop nosetests support from tempest or any 
other project. The problem with testrepository is that it does not 
provide any debugger support at all (and never will). It also 
has some issues with providing error traces in human-readable form, and it 
can be quite hard to find out what is actually broken.


Because of this I think we should try to avoid any kind of test 
libraries that break compatibility with conventional test runners.


Our tests should be able to run correctly with nosetests, testtools, or the 
plain old unittest runner. If for some reason test libraries (like 
testscenarios) don't provide support for this, we should fix those 
libraries or avoid using them.


Regards,
Alexei Kornienko

On 02/27/2014 06:36 PM, Frittoli, Andrea (HP Cloud) wrote:

This is another example of achieving the same result (exclusion from a
list):
https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/element
s/tempest/tests2skip.py
https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/element
s/tempest/tests2skip.txt

andrea

-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org]
Sent: 27 February 2014 15:49
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [QA] The future of nosetests with Tempest

On Tue, Feb 25, 2014 at 07:46:23PM -0600, Matt Riedemann wrote:


On 2/12/2014 1:57 PM, Matthew Treinish wrote:

On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:


On 1/17/2014 8:34 AM, Matthew Treinish wrote:

On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:

On 01/16/2014 10:56 PM, Matthew Treinish wrote:

Hi everyone,

With some recent changes made to Tempest compatibility with
nosetests is going away. We've started using newer features that
nose just doesn't support. One example of this is that we've
started using testscenarios and we're planning to do this in more

places moving forward.

So at Icehouse-3 I'm planning to push the patch out to remove
nosetests from the requirements list and all the workarounds and
references to nose will be pulled out of the tree. Tempest will
also start raising an unsupported exception when you try to run
it with nose so that there isn't any confusion on this moving
forward. We talked about doing this at summit briefly and I've
brought it up a couple of times before, but I believe it is time
to do this now. I feel for tempest to move forward we need to do this

now so that there isn't any ambiguity as we add even more features and new
types of testing.

I'm with you up to here.

Now, this will have implications for people running tempest with
python 2.6 since up until now we've set nosetests. There is a
workaround for getting tempest to run with python 2.6 and testr see:

https://review.openstack.org/#/c/59007/1/README.rst

but essentially this means that when nose is marked as
unsupported on tempest python 2.6 will also be unsupported by
Tempest. (which honestly it basically has been for while now just
we've gone without making it official)

The way we handle different runners/os can be categorized as
tested in gate, unsupported (should work, possibly some hacks
needed), and hostile. At present, both nose and py2.6 I would
say are in the unsupported category. The title of this message and
the content up to here says we are moving nose to the hostile
category. With only 2 months to feature freeze I see no
justification in moving
py2.6 to the hostile category. I don't see what new testing
features scheduled for the next two months will be enabled by
saying that tempest cannot and will not run on 2.6. It has been
agreed I think by all projects that py2.6 will be dropped in J. It
is OK that py2.6 will require some hacks to work and if in the
next few months it needs a few more then that is ok. If I am
missing another connection between the py2.6 and nose issues, please

explain.

So honestly we're already at this point in tempest. Nose really
just doesn't work with tempest, and we're adding more features to
tempest, your negative test generator being one of them, that
interfere further with nose. I've seen several

I disagree here, my team is running Tempest API, CLI and scenario
tests every day with nose on RHEL 6 with minimal issues.  I had to
workaround the negative test discovery by simply sed'ing that out of
the tests before running it, but that's acceptable to me until we
can start testing on RHEL 7.  Otherwise I'm completely OK with
saying py26 isn't really supported and isn't used in the gate, and
it's a buyer beware situation to make it work, which includes
pushing up trivial patches to make it work (which I did a few of
last week, and they were small syntax changes or usages of
testtools).

I don't understand how the core projects can be running unit tests
in the gate on py26 but our 

Re: [openstack-dev] openstack java sdk

2014-02-28 Thread Denis Makogon
Hello, Rash.

Here [1] is one you could try, but we cannot give any warranty about the
stability of this SDK.

[1] https://github.com/woorea/openstack-java-sdk


Best regards,
Denis Makogon.


On Fri, Feb 28, 2014 at 12:43 PM, rash g rashg...@gmail.com wrote:

 Hello,
 I am working on a project which uses openstack.I want to
 connect my java code to openstack.For that I was thinking of using
 openstack-java sdk.But I did not find any jar files for the same to
 import in my java code.
 Can anyone tell me where I can get jar files for openstack-java
 sdk?

 Thanks,
 Rashmi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Day, Phil
 -Original Message-
 From: Chris Behrens [mailto:cbehr...@codestud.com]
 Sent: 26 February 2014 22:05
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
 
 This thread is many messages deep now and I'm busy with a conference this
 week, but I wanted to carry over my opinion from the other v3 API in
 Icehouse thread and add a little to it.
 
 Bumping versions is painful. v2 is going to need to live for a long time to
 create the least amount of pain. I would think that at least anyone running a
 decent sized Public Cloud would agree, if not anyone just running any sort of
 decent sized cloud. I don't think there's a compelling enough reason to
 deprecate v2 and cause havoc with what we currently have in v3. I'd like us
 to spend more time on the proposed tasks changes. And I think we need
 more time to figure out if we're doing versioning in the correct way. If we've
 got it wrong, a v3 doesn't fix the problem and we'll just be causing more
 havoc with a v4.
 
 - Chris
 
Like Chris I'm struggling to keep up with this thread,  but of all the various 
messages I've read this is the one that resonates most with me.

My perception of the V3 API improvements (in order to importance to me):
i) The ability to version individual extensions
It's crazy that small improvements can't be introduced without having to create 
a new extension, when often the extension really does nothing more than indicate 
that some other part of the API code has changed.

ii) The opportunity to get the proper separation between the Compute and Network 
APIs
Being (I think) one of the few clouds that provide both the Nova and Neutron 
network APIs, this is a major source of confusion and hence support calls.

iii) The introduction of the task model
I like the idea of tasks, and think it will be a much easier way for users to 
interact with the system. I'm not convinced that it couldn't co-exist within V2, 
though, rather than having to exist in both V2 and V3.

iv) Clean-up of a whole bunch of minor irritations / inconsistencies
There are lots of things that are really messy (inconsistent error codes, 
aspects of core that are linked to just Xen, etc.). They annoy people the 
first time they hit them; then people code around them and move on. I've 
probably had more hate mail from people writing language bindings than from 
application developers (who tend to be abstracted from this by the clients).


 Phil




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] openstack java sdk

2014-02-28 Thread Murali G D
Hi,

Please check following link for maven artifacts.
http://mvnrepository.com/artifact/com.woorea/openstack-java-sdk/3.2.1
Include this in your maven project pom.xml.

However if you need latest one download and compile your-self as it
contains lot of fixes on top of above ones.

Thanks,
Murali G D

-Original Message-
From: rash g [mailto:rashg...@gmail.com]
Sent: Friday, February 28, 2014 4:13 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] openstack java sdk

Hello,
I am working on a project which uses openstack.I want to connect
my java code to openstack.For that I was thinking of using openstack-java
sdk.But I did not find any jar files for the same to import in my java
code.
Can anyone tell me where I can get jar files for openstack-java
sdk?

Thanks,
Rashmi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Day, Phil
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 24 February 2014 23:49
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
 
  Similarly with a Xen vs KVM situation I don't think its an extension
  related issue. In V2 we have features in *core* which are only
  supported by some virt backends. It perhaps comes down to not being
  willing to say either that we will force all virt backends to support
  all features in the API or they don't get in the tree. Or
  alternatively be willing to say no to any feature in the API which can
  not be currently implemented in all virt backends. The former greatly
  increases the barrier to getting a hypervisor included, the latter
  restricts Nova development to the speed of the slowest developing and
  least mature hypervisor supported.
 
 Actually, the problem is not feature parity. The problem lies where two
 drivers implement the same or similar functionality, but the public API for a
 user to call the functionality is slightly different depending on which 
 driver is
 used by the deployer.
 
 There's nothing wrong at all (IMO) in having feature disparity amongst
 drivers.

I agree with the rest of your post, Jay, but I think there are some feature 
parity issues - for example, having rescue always return a generated admin 
password, when only some (one?) hypervisors actually support setting the 
password, is an issue. 

For some calls (create, rebuild) this can be suppressed by a conf value 
(enable_instance_password), but when I tried to get that extended to rescue in 
V2 it was blocked as a would break compatibility - either add an extension or 
only do it in V3 change. So clients have to be able to cope with an optional 
attribute in the response to create/rebuild (because they can't inspect the API 
to see if the conf value is set), but apparently can't be expected to cope with 
it in the response from rescue ;-(

 Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-28 Thread Sergey Lukjanov
Slave images are auto-rebuilt daily, so it probably hasn't happened
yet for all providers.

Anyway I see the following in nodepool logs:

2014-02-28 02:24:09,255 INFO
nodepool.image.build.rax-ord.bare-precise: notice:
/Stage[main]/Jenkins::Slave/Mysql::Db[openstack_citest]/Database_grant[openstack_citest@localhost/openstack_citest]/privileges:
privileges changed '' to 'all'

On Fri, Feb 28, 2014 at 12:28 PM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi Clark, all,

 https://review.openstack.org/#/c/76634/ has been merged, but I still
 get 'command denied' errors [1].

 Is there something else, that must be done before we can use new
 privileges of openstack_citest user?

 Thanks,
 Roman

 [1] 
 http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/e115a5f/console.html

 On Wed, Feb 26, 2014 at 11:54 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
 Works for me.

 Thanks,
 Roman

 On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for parallel
 running of tests on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, openstack_citest user can only create tables in the
 created databases, but not to perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database, becomes the owner of
 the database (and can do everything within this database), and in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think, we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single use slaves for all unittesting so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-02-28 Thread Alexander Tivelkov
Hi Charles,

If you are looking for the analogues of Juju in OpenStack, you probably may
take a look at Murano Project [1]. It is an application catalog backed with
a powerful workflow execution engine, which is built on top of Heat's
orchestration, but runs at a higher level. It has borrowed lots of ideas
from Juju (or, more precisely, both took a lot from Amazon's OpsWorks
ideas).
Also, if you are looking to orchestrate on top of non-openstack clouds,
then Murano's DSL may also be an answer: Murano's workflows may be designed
to trigger any external APIs, not necessarily OpenStack-only ones, so the
technical possibility of orchestrating AWS and GCE exists in Murano's design,
though it is not present in the current roadmap.

Please feel free to ask for more details either in [Murano] ML or at
#murano channel at Freenode.

Thanks

[1] -
https://wiki.openstack.org/wiki/Murano


--
Regards,
Alexander Tivelkov


On Wed, Feb 26, 2014 at 5:47 PM, Charles Walker charles.walker...@gmail.com
 wrote:

 Hi,


 I am trying to deploy the proprietary application made in my company on
 the cloud. The prerequisite for this is to have an IaaS, which can be either
 a public cloud or a private cloud (OpenStack is an option for a private IaaS).


 The first prototype I made was based on a homemade Python orchestrator and
 Apache libcloud to interact with the IaaS (AWS, Rackspace and GCE).

 The orchestrator part is Python code reading a template file which
 contains the info needed to deploy my application. This template file
 indicates the number of VMs and the scripts associated with each VM type to
 install it.
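
(For illustration, the kind of libcloud-based deployer described above might
look roughly like this -- a sketch only; the template format, credentials and
image/size IDs are all placeholders:)

    import yaml

    from libcloud.compute.providers import get_driver
    from libcloud.compute.types import Provider

    # The template lists VM roles, counts and install scripts, as described
    # above; a YAML layout is assumed here purely for the example.
    template = yaml.safe_load(open('app_template.yaml'))

    driver = get_driver(Provider.EC2)('ACCESS_KEY', 'SECRET_KEY')
    sizes = driver.list_sizes()
    images = driver.list_images()

    for role in template['roles']:
        size = next(s for s in sizes if s.id == role['size_id'])
        image = next(i for i in images if i.id == role['image_id'])
        for n in range(role['count']):
            node = driver.create_node(name='%s-%d' % (role['name'], n),
                                      image=image, size=size)
            # role['script'] would then be run on the node, e.g. via
            # libcloud's deployment helpers or plain ssh.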


 Now I am trying to have a look at existing open source tools to do the
 orchestration part. I found JUJU (https://juju.ubuntu.com/) and HEAT (
 https://wiki.openstack.org/wiki/Heat).

 I am investigating HEAT more deeply and also had a look at
 https://wiki.openstack.org/wiki/Heat/DSL which mentions:

 *Cloud Service Provider* - A service entity offering hosted cloud
 services on OpenStack or another cloud technology. Also known as a Vendor.


 I think HEAT in its current version will not match my requirements, but I
 have the feeling that it is going to evolve and could cover my needs.


 I would like to know if it would be possible to use HEAT as a standalone
 component in the future (without Nova and other OpenStack modules). The goal
 would be to deploy an application from a template file onto multiple cloud
 services (like AWS, GCE).


 Any feedback from people working on HEAT could help me.


 Thanks, Charles.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][policy] Blueprint document

2014-02-28 Thread Carlos Gonçalves
Hi all,

As the blueprint document is write-protected, the “See revision history” option 
is greyed out for viewers only, making it difficult to keep track of changes. 
Hence, if there is no way as a viewer to see the revision history, could 
someone add me to the document please? My Google ID is carlos.ei.goncalves.

Thanks,
Carlos Goncalves


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-28 Thread Mauro S M Rodrigues

Awesome! thanks for it!

Btw I guess this will automatically work for grenade, since we use 
devstack to set up the X-1 release, am I right? (And it's not a concern for 
the upgrade part since the upgrade-component scripts already contain an 
errexit trap in the cleanup functions, right?)


--
mauro(sr)


On 02/27/2014 06:17 PM, Sergey Lukjanov wrote:

And a big +1 from me too. It's really useful.

On Fri, Feb 28, 2014 at 12:15 AM, Devananda van der Veen
devananda@gmail.com wrote:

  Thu, Feb 27, 2014 at 9:34 AM, Ben Nemec openst...@nemebean.com wrote:

On 2014-02-27 09:23, Daniel P. Berrange wrote:

On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:

This patch is coming through the gate this morning -
https://review.openstack.org/#/c/71996/

The point being to actually make devstack stop when it hits an error,
instead of only once these compound to the point where there is no
moving forward and some service call fails. This should *dramatically*
improve the experience of figuring out a failure in the gate, because
where it fails should be the issue. (It also made us figure out some
wonkiness with stdout buffering, that was making debug difficult).

This works on all the content that devstack gates against. However,
there are a ton of other paths in devstack, including vendor plugins,
which I'm sure aren't clean enough to run under -o errexit. So if all of
a sudden things start failing, this may be why. Fortunately you'll be
pointed at the exact point of the fail.


This is awesome!


+1!  Thanks Sean and everyone else who was involved with this.


Another big +1 for this! I've wished for it every time I tried to add
something to devstack and struggled with debugging it.

-Deva

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-28 Thread Roman Podoliaka
Hi all,

Just a FYI note, not whining :)

Still failing with 'command denied':
http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/877792b/console.html

Thanks,
Roman

On Fri, Feb 28, 2014 at 1:41 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Slave images are auto rebuilt daily, so, probably, it's not happens
 yet for all providers.

 Anyway I see the following in nodepool logs:

 2014-02-28 02:24:09,255 INFO
 nodepool.image.build.rax-ord.bare-precise: notice:
 /Stage[main]/Jenkins::Slave/Mysql::Db[openstack_citest]/Database_grant[openstack_citest@localhost/openstack_citest]/privileges:
 privileges changed '' to 'all'

 On Fri, Feb 28, 2014 at 12:28 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Clark, all,

 https://review.openstack.org/#/c/76634/ has been merged, but I still
 get 'command denied' errors [1].

 Is there something else, that must be done before we can use new
 privileges of openstack_citest user?

 Thanks,
 Roman

 [1] 
 http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/e115a5f/console.html

 On Wed, Feb 26, 2014 at 11:54 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
 Works for me.

 Thanks,
 Roman

 On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com 
 wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for parallel
 running of tests on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, openstack_citest user can only create tables in the
 created databases, but not to perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database, becomes the owner of
 the database (and can do everything within this database), and in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think, we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix 
 value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single use slaves for all unittesting so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Live migration, auth token lifetimes.

2014-02-28 Thread jang
There's a problem with live block migrations. They can take an arbitrarily 
long time to complete. That, in itself, isn't the matter:

https://bugs.launchpad.net/nova/+bug/1286142

At the moment, nova.compute.manager.live_migration takes a context, which 
it passes into a call to its driver's live_migration method. That'll end 
up calling back to one of 
nova.compute.manager.{_post_live_migration,_rollback_live_migration} - 
passing that credential along.

If the credential's expired, in the meantime, then the post- steps will 
fail as they attempt to finish up the migration.

There appear, fundamentally, to be three approaches to take with this. The 
first is to bake sufficient admin credentials (for the block and the 
network layers) into the nova process so that it can run the cleanup with 
appropriate rights.

The second would be to have a way for the nova process to extend proxy 
credentials until such point as they are required by the post- stages. 
I'll elide the potential security concerns over putting such an API call 
into keystone, but it should probably be considered.

I suppose the third way is to have a way for a client to continue to 
inject live tokens into a running migration process - thereby shifting the 
burden onto an external person/process/entity who's driving the live 
migration.
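
(Of the three, the first can at least be sketched with Nova's existing request
context helpers -- illustrative only, not a worked patch, and it carries
exactly the security question raised above:)

    from nova import context

    def get_cleanup_context(user_ctxt):
        """Return a context for post-live-migration cleanup that does not
        depend on the lifetime of the user's token."""
        if user_ctxt is not None:
            # Keeps the user/project for auditing while granting admin rights.
            return user_ctxt.elevated()
        return context.get_admin_context()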

This all potentially being contentious, I'm basically soliciting opinions 
on avenues for this.

With thanks in advance for your time,
jan

-- 
Jan Grant (j...@ioctl.org; jan.gr...@hp.com)
...and then three milkmaids turned up
(to the delight and delactation of the crowd).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reply: [Neutron][IPv6] tox run forever

2014-02-28 Thread Shixiong Shang
I'm starting to wonder whether passing Mr. Jenkins' tests is just wishful 
thinking….:)  What does the “TOX” say? :)

Shixiong



Begin forwarded message:

 From: Shixiong Shang sparkofwisdom.cl...@gmail.com
 Subject: Re: [openstack-dev] [Neutron] tox run forever
 Date: February 27, 2014 at 9:41:34 PM EST
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi, Clark:
 
 Thanks a lot for the prompt response! I added the OS_TEST_TIMEOUT value (300 
 sec) and was tailing the tmp file. It turned out that the tox run stopped at 
 the following point. My machine was thrashing so badly that it became 
 unresponsive and I had to hard-reboot it… I am pulling my hair out now… Is 
 it normal to see a Traceback?
 
 2014-02-27 21:33:51,212 INFO [neutron.api.extensions] Extension 'agent' 
 provides no backward compatibility map for extended attributes
 2014-02-27 21:33:51,212 INFO [neutron.api.extensions] Extension 'Allowed 
 Address Pairs' provides no backward compatibility map for extended attributes
 2014-02-27 21:33:51,212 INFO [neutron.api.extensions] Extension 'Neutron 
 Extra Route' provides no backward compatibility map for extended attributes
 2014-02-27 21:33:51,522ERROR 
 [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] No DHCP agents are 
 associated with network '397fab50-26aa-4cb7-8aa4-c4d43909a00b'. Unable to 
 send notification for 'network_create_end' with payload: {'network': 
 {'status': 'ACTIVE', 'subnets': [], 'name': 'net1', 
 'provider:physical_network': u'physnet1', 'admin_state_up': True, 
 'tenant_id': 'test-tenant', 'provider:network_type': 'vlan', 'shared': False, 
 'id': '397fab50-26aa-4cb7-8aa4-c4d43909a00b', 'provider:segmentation_id': 
 1000}}
 2014-02-27 21:33:51,567ERROR [neutron.api.v2.resource] create failed
 Traceback (most recent call last):
   File neutron/api/v2/resource.py, line 84, in resource
 result = method(request=request, **args)
   File neutron/api/v2/base.py, line 347, in create
 allow_bulk=self._allow_bulk)
   File neutron/api/v2/base.py, line 600, in prepare_request_body
 raise webob.exc.HTTPBadRequest(msg)
 HTTPBadRequest: Invalid input for cidr. Reason: '10.0.2.0' isn't a recognized 
 IP subnet cidr, '10.0.2.0/32' is recommended.
 
 
 Thanks again!
 
 Shixiong
 
 
 
 
 
 Shixiong Shang
 
 !--- Stay Hungry, Stay Foolish ---!
 
 On Feb 27, 2014, at 8:28 PM, Clark Boylan clark.boy...@gmail.com wrote:
 
 On Thu, Feb 27, 2014 at 4:43 PM, Shixiong Shang
 sparkofwisdom.cl...@gmail.com wrote:
 Hi, guys:
 
 I created a fresh local repository and pulled the most recent Neutron code. 
 Before I put in my own code, I did a tox run. However, it seems to have been 
 stuck in the following state for over an hour without going any further. 
 Yesterday, tox had been running against a fresh copy of Neutron, but 
 didn't return SUCCESS after the entire night.
 
 I assume the copy from the MASTER BRANCH should already be sanitized. 
 However, what I saw in the past 48 hours told me a different story. Did I 
 do anything wrong?
 
 
 shshang@net-ubuntu2:~/github/neutron$ tox -e py27
 py27 create: /home/shshang/github/neutron/.tox/py27
 py27 installdeps: -r/home/shshang/github/neutron/requirements.txt, 
 -r/home/shshang/github/neutron/test-requirements.txt, setuptools_git=0.4
 py27 develop-inst: /home/shshang/github/neutron
 py27 runtests: commands[0] | python -m neutron.openstack.common.lockutils 
 python setup.py testr --slowest --testr-args=
 [pbr] Excluding argparse: Python 2.6 only dependency
 running testr
 running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./neutron/tests/unit} --list
 running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpbZwLwg
 running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmp39qJYM
 running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpppXiTc
 running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
 ${PYTHON:-python} -m subunit.run discover -t ./ 
 ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpPhJZDc
 
 Thanks!
 
 Shixiong
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 I think there are two potential problems here. Either a test is
 deadlocking due to something it has done or
 neutron.openstack.common.lockutils is deadlocking. In either case
 OS_TEST_TIMEOUT is not set in .testr.conf so the test suite will not
 timeout individual tests if necessary. I would start by setting that
 in the 
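
(For reference, the per-test timeout that OS_TEST_TIMEOUT enables is usually
wired up in the test base class along these lines -- a sketch of the common
pattern, not Neutron's exact code:)

    import os

    import fixtures
    import testtools

    class BaseTestCase(testtools.TestCase):
        def setUp(self):
            super(BaseTestCase, self).setUp()
            # Kill an individual hung test instead of wedging the whole run.
            try:
                timeout = int(os.environ.get('OS_TEST_TIMEOUT', 0))
            except ValueError:
                timeout = 0
            if timeout > 0:
                self.useFixture(fixtures.Timeout(timeout, gentle=True))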

Re: [openstack-dev] [nova][baremetal] Deprovision of bare-metal nodes

2014-02-28 Thread Dickson, Mike (HP Servers)
On Fri, 2014-02-28 at 00:04 -0800, Taurus Cheung wrote:
 Hi,
 
  
 
 I am working on deploying images to bare-metal machines using nova
 bare-metal. After deployment, I would like to deprovision
 (disconnect) bare-metal nodes from OpenStack controller/compute, so
 these bare-metal nodes can run standalone.
 
  
 
 A typical scenario is that I have a workstation with OpenStack
 controller and nova baremetal compute installed. During bare-metal
 deployment, I plug the workstation into the network. After deployment,
 I disconnect it from the network.
 
  
 
 Is this use-case typical, possible and without side-effect?

I'll be curious to see what other responses you get as I am fairly new
to OpenStack and I find the current behaviour bugged.

My understanding when using the default pxe driver for deployment is
that it will always pxe boot.  The MBR and boot files aren't installed.
So once provisioning is complete the node must remain on the network for
it to continue to boot.  As I understand it this behaviour is also the
default in Ironic.

I'd love to understand why that is considered required behavior as I am
fairly certain a number of enterprise users will not find that
acceptable behavior.  Ideally the image should get customized with the
boot files and an MBR written on install.  When there is no provisioning
work to do the default behaviour IMO should be to fall through to local
booting.  If that were the case your use case would work.

Even better if the platform supports a one time boot option it could be
set to use that when provisioning steps are run.
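
(For what it's worth, the "one time boot" idea is expressible with plain
ipmitool today -- a rough sketch, with the BMC address and credentials as
placeholders:)

    import subprocess

    def set_next_boot_pxe(bmc_ip, user, password):
        # Ask the BMC to PXE-boot on the *next* boot only; without the
        # persistent option the node falls back to its normal (local disk)
        # boot order afterwards.
        subprocess.check_call([
            'ipmitool', '-I', 'lanplus', '-H', bmc_ip,
            '-U', user, '-P', password,
            'chassis', 'bootdev', 'pxe',
        ])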

In any event that's my understanding. I'd love someone else to correct or
confirm it, and help me understand why that's the default.

Mike

  
 
 Regards,
 
 Taurus
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Renat Akhmerov
Hi Joshua,

Sorry, I’ve been very busy for the last couple of days and didn’t respond 
quickly enough.

Well, first of all, it’s my bad that I’ve not been following TaskFlow progress 
for a while and, honestly, I just need to get more info on the current TaskFlow 
status. So I’ll do that and get back to you soon. As you know, there were 
reasons why we decided to go this path (without using TaskFlow) but I’ve always 
thought we will be able to align our efforts as we move forward once 
requirements and design of Mistral become more clear. I really want to use 
TaskFlow for Mistral implementation. We just need to make sure that it will 
bring more value than pain (sorry if it sounds harsh).

Thanks for your feedback and this info. We’ll get in touch with you soon.

Renat Akhmerov
@ Mirantis Inc.



On 27 Feb 2014, at 03:22, Joshua Harlow harlo...@yahoo-inc.com wrote:

 So this design is starting to look pretty familiar to a what we have in 
 taskflow.
 
 Any reason why it can't just be used instead?
 
 https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine
 
 This code is in a functional state right now, using kombu (for the moment, 
 until oslo.messaging becomes py3 compliant).
 
 The concept of an engine which puts messages on a queue for a remote executor 
 is in fact exactly what taskflow is doing (the remote executor/worker 
 will then respond when it is done and the engine will then initiate the next 
 piece of work to do) in the above listed etherpad (and which is implemented).
 
 Is it the case that in mistral the engine will be maintaining the 
 'orchestration' of the workflow during the lifetime of that workflow? In the 
 case of mistral what is an engine server? Is this a server that has engines 
 in it (where each engine is 'orchestrating' the remote/local workflows and 
 monitoring and recording the state transitions and data flow that is 
 occurring)? The details at 
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
  seem to already be what taskflow provides via its engine object; creating an 
 application which runs engines, and having those engines initiate workflows, 
 is made to be dead simple.
 
 From previous discussions with the mistral folks it seems like the overlap 
 here is getting bigger and bigger, which seems to be bad (and means something 
 is broken/wrong). In fact most of the concepts that you have blueprints for 
 have already been completed in taskflow (data-flow, engine being disconnected 
 from the rest api…), as well as ones you don't have listed (resumption, 
 reversion…). 
 
 What can we do to fix this situation?
 
 -Josh
 
 From: Nikolay Makhotkin nmakhot...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, February 25, 2014 at 11:30 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
 oslo.messaging
 
 Looks good. Thanks, Winson! 
 
 Renat, What do you think?
 
 
 On Wed, Feb 26, 2014 at 10:00 AM, W Chan m4d.co...@gmail.com wrote:
 The following link is the google doc of the proposed engine/executor 
 message flow architecture.  
 https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing
   
 
 The diagram on the right is the scalable engine where one or more engines 
 send requests over a transport to one or more executors.  The executor 
 client, transport, and executor server follow the RPC client/server design 
 pattern in oslo.messaging.
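
(For anyone following along, the client/server split being described maps onto
oslo.messaging roughly like this -- a sketch with made-up topic names, not
Mistral's actual code:)

    from oslo.config import cfg
    from oslo import messaging

    class ExecutorEndpoint(object):
        """Server-side endpoint exposing the executor's task operations."""
        def run_task(self, ctxt, task_id):
            # ... execute the task, then report the result back to the engine
            return {'task_id': task_id, 'state': 'RUNNING'}

    transport = messaging.get_transport(cfg.CONF)   # rabbit/qpid/fake per config
    target = messaging.Target(topic='mistral_executor', server='executor-1')

    # Executor server: one or more of these consume from the topic.
    server = messaging.get_rpc_server(transport, target, [ExecutorEndpoint()],
                                      executor='blocking')

    # Engine side: the executor client casts over the same topic.
    client = messaging.RPCClient(transport,
                                 messaging.Target(topic='mistral_executor'))
    # client.cast({}, 'run_task', task_id='1234')
    # server.start(); server.wait()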
 
 The diagram represents the local engine.  In reality, it's following the 
 same RPC client/server design pattern.  The only difference is that it'll 
 be configured to use a fake RPC backend driver.  The fake driver uses in 
 process queues shared between a pair of engine and executor.
 
 The following are the stepwise changes I will make.
 1) Keep the local and scalable engine structure intact.  Create the 
 Executor Client at ./mistral/engine/scalable/executor/client.py.  Create 
 the Executor Server at ./mistral/engine/scalable/executor/service.py and 
 implement the task operations under 
 ./mistral/engine/scalable/executor/executor.py.  Delete 
 ./mistral/engine/scalable/executor/executor.py.  Modify the launcher 
 ./mistral/cmd/task_executor.py.  Modify ./mistral/engine/scalable/engine.py 
 to use the Executor Client instead of sending the message directly to 
 rabbit via pika.  The sum of this is the atomic change that keeps existing 
 structure and without breaking the code.
 2) Remove the local engine. 
 https://blueprints.launchpad.net/mistral/+spec/mistral-inproc-executor
 3) Implement versioning for the engine.  
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-versioning
 4) Port abstract engine to use oslo.messaging and implement the engine 
 client, engine server, and modify the API layer to consume the engine 
 client. 
 

Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Day, Phil
The current set of reviews on this change seems relevant to this debate:  
https://review.openstack.org/#/c/43822/

In effect a fully working and tested change which makes the nova-net / neutron 
compatibility via the V2 API that little bit closer to being complete is being 
blocked because it's thought that by not having it people will be quicker to 
move to V3 instead.

Folks, this is just madness - no one is going to jump to using V3 just because 
we don't fix minor things like this in V2; they're just as likely to start 
jumping to something completely different because "that OpenStack stuff is just 
too hard to work with". Users don't think like developers, and you can't 
force them into a new API by deliberately keeping the old one bad - at least 
not if you want to keep them as users in the long term.

I can see an argument (maybe) for not adding lots of completely new features 
into V2 if V3 were already available in a stable form - but V2 already provides 
nearly complete support for nova-net features on top of Neutron.  I fail to 
see what is wrong with continuing to improve that.

Phil

 -Original Message-
 From: Day, Phil
 Sent: 28 February 2014 11:07
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
  -Original Message-
  From: Chris Behrens [mailto:cbehr...@codestud.com]
  Sent: 26 February 2014 22:05
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
 
  This thread is many messages deep now and I'm busy with a conference
  this week, but I wanted to carry over my opinion from the other v3
  API in Icehouse thread and add a little to it.
 
  Bumping versions is painful. v2 is going to need to live for a long
  time to create the least amount of pain. I would think that at least
  anyone running a decent sized Public Cloud would agree, if not anyone
  just running any sort of decent sized cloud. I don't think there's a
  compelling enough reason to deprecate v2 and cause havoc with what we
  currently have in v3. I'd like us to spend more time on the proposed
  tasks changes. And I think we need more time to figure out if we're
  doing versioning in the correct way. If we've got it wrong, a v3
  doesn't fix the problem and we'll just be causing more havoc with a v4.
 
  - Chris
 
 Like Chris I'm struggling to keep up with this thread,  but of all the various
 messages I've read this is the one that resonates most with me.
 
 My perception of the V3 API improvements (in order of importance to me):
 
 i) The ability to version individual extensions
 Crazy that small improvements can't be introduced without having to create a
 new extension, when often the extension really does nothing more than indicate
 that some other part of the API code has changed.
 
 ii) The opportunity to get the proper separation between Compute and Network APIs
 Being (I think) one of the few clouds that provide both the Nova and Neutron
 APIs, this is a major source of confusion and hence support calls.
 
 iii) The introduction of the task model
 I like the idea of tasks, and think it will be a much easier way for users to
 interact with the system.   Not convinced that it couldn't co-exist in V2
 though, rather than having to co-exist as V2 and V3.
 
 iv) Clean-up of a whole bunch of minor irritations / inconsistencies
 There are lots of things that are really messy (inconsistent error codes,
 aspects of core that are linked to just Xen, etc, etc).  They annoy people the
 first time they hit them, then they code around them and move on.  Probably
 I've had more hate mail from people writing language bindings than
 application developers (who tend to be abstracted from this by the clients).
 
 
  Phil
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Status of Docker CI

2014-02-28 Thread Eric Windisch


 The number of things that don't work with this driver is a big issue, I
 think.  However, we haven't really set rules on a baseline for what we
 expect every driver to support.  This is something I'd like to tackle in
 the Juno cycle, including another deadline.


Increased feature parity is something I'd like to see as well, but also
something that has been difficult to accomplish in tandem with the CI
requirement. Thankfully, the CI requirement will make it easier to test and
verify changes as we seek to add features in Juno.


 I would
 sprint toward getting everything passing, even if it means applying
 fixes to your env that haven't merged yet to demonstrate it working sooner.


This is precisely what I'm doing. I have been submitting patches into
code-review but have been testing and deploying off my own branch which
includes these patches (using the NOVA_REPO / NOVA_BRANCH variables in
devstack).

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heads up, set -o errexit on devstack - things will fail earlier now

2014-02-28 Thread Sean Dague
Actually grenade has always run under errexit, devstack just had enough
legacy cruft in it that it took a while to get it to run clean.

-Sean

On 02/28/2014 09:13 AM, Mauro S M Rodrigues wrote:
 Awesome! thanks for it!
 
 Btw I guess this will automatically works for grenade, since we use
 devstack to setup X-1 release, am I right? (and it's not a concern for
 the upgrade part since the upgrade-component scripts already contain
 errexit trap on the cleanup functions right?)
 
 -- 
 mauro(sr)
 
 
 On 02/27/2014 06:17 PM, Sergey Lukjanov wrote:
 And a big +1 from me too. It's really useful.

 On Fri, Feb 28, 2014 at 12:15 AM, Devananda van der Veen
 devananda@gmail.com wrote:
   Thu, Feb 27, 2014 at 9:34 AM, Ben Nemec openst...@nemebean.com
 wrote:
 On 2014-02-27 09:23, Daniel P. Berrange wrote:
 On Thu, Feb 27, 2014 at 08:38:22AM -0500, Sean Dague wrote:
 This patch is coming through the gate this morning -
 https://review.openstack.org/#/c/71996/

 The point being to actually make devstack stop when it hits an error,
 instead of only once these compound to the point where there is no
 moving forward and some service call fails. This should
 *dramatically*
 improve the experience of figuring out a failure in the gate, because
 where it fails should be the issue. (It also made us figure out some
 wonkiness with stdout buffering, that was making debug difficult).

 This works on all the content that devstack gates against. However,
 there are a ton of other paths in devstack, including vendor plugins,
 which I'm sure aren't clean enough to run under -o errexit. So if
 all of
 a sudden things start failing, this may be why. Fortunately you'll be
 pointed at the exact point of the fail.

 This is awesome!

 +1!  Thanks Sean and everyone else who was involved with this.

 Another big +1 for this! I've wished for it every time I tried to add
 something to devstack and struggled with debugging it.

 -Deva

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat use as a standalone component for Cloud Managment over multi IAAS

2014-02-28 Thread Clint Byrum
Excerpts from Alexander Tivelkov's message of 2014-02-28 03:52:52 -0800:
 Hi Charles,
 
 If you are looking for the analogues of Juju in OpenStack, you probably may
 take a look at Murano Project [1]. It is an application catalog backed with
 a powerful workflow execution engine, which is built on top of Heat's
 orchestration, but run's at a higher level. It has borrowed lots of idea
 from Juju (or, more precisely, both took a lot from Amazon's OpsWorks
 ideas).
 Also, if you are looking to orchestrate on top of non-openstack clouds

FYI, Juju existed long before OpsWorks.

http://aws.amazon.com/about-aws/whats-new/2013/02/18/announcing-aws-opsworks/

Even my last commit to Juju (the python version..) happened well before
OpsWorks existed:

http://bazaar.launchpad.net/~juju/juju/trunk/revision/599

Anyway, Heat is intended to be able to manage things at a high level
too, just with more of the guts exposed for tinkering. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-02-28 Thread Dina Belova
Thanks for taking part in our meeting :)

Meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-28-15.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-28-15.00.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-28-15.00.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] NLS support for database collators

2014-02-28 Thread Steven Kaufer


Hello,

We are trying to understand how the various GET REST APIs handle
sorting/filtering in different NLS environments. For example, when
retrieving sorted String data (ie, display name), the order of the results
should vary based on the NLS of the caller (as opposed to having everything
sorted in English).

For Nova, the instances database has a vm_state column and the value is
an English string (ie, active, error).  Is this value an NLS-key or the
actual text that would be exposed to the caller?

Is there any existing collator support for sorting based on locale?

Links to any documentation or previous discussions would be appreciated.
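
(If the sorting ends up being done in the API layer rather than by the
database, the Python standard library already exposes the locale's collator --
a minimal illustration; the locale name is an example and must be installed on
the host, and under Python 2 the strings would need encoding first:)

    import locale

    # Collate display names using the caller's locale rather than plain
    # codepoint order.
    locale.setlocale(locale.LC_COLLATE, 'de_DE.UTF-8')

    names = ['Apfel', 'Zebra', 'Ärger', 'ähnlich']
    print(sorted(names, key=locale.strxfrm))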

Thanks,

Steven Kaufer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] inconsistent naming? node vs host vs vs hypervisor_hostname vs OS-EXT-SRV-ATTR:host

2014-02-28 Thread Chris Friesen

Hi,

I've been working with OpenStack for a while now but I'm still a bit 
fuzzy on the precise meaning of some of the terminology.


It seems reasonably clear that a node is a computer running at least 
one component of an Openstack system.


However, nova service-list talks about the host that a given service 
runs on.  Shouldn't that be node?  Normally host is used to 
distinguish from guest, but that doesn't really make sense for a 
dedicated controller node.


nova show reports OS-EXT-SRV-ATTR:host and 
OS-EXT-SRV-ATTR:hypervisor_hostname for an instance.  What is the 
distinction between the two and how do they relate to OpenStack nodes 
or the host names in nova service-list?


nova hypervisor-list uses the term hypervisor hostname, but nova 
hypervisor-stats talks about compute nodes.  Is this distinction 
accurate or should they both use the hypervisor terminology?  What is 
the distinction between hypervisor/host/node?


nova host-list reports host_name, but seems to include all services. 
 Does host_name correspond to host, hypervisor_host, or node?  And 
just to make things interesting, the other nova host-* commands only 
work on compute hosts, so maybe nova host-list should only output info 
for systems running nova-compute?



Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][IPv6] Testing functionality of IPv6 modes using Horizon

2014-02-28 Thread Abishek Subramanian (absubram)
Hi,

I just wanted to find out if anyone had been able to test using Horizon?
Was everything ok?

Additionally, I wanted to confirm - the two modes can also be updated
when using neutron subnet-update, yes?


Thanks!

On 2/18/14 12:58 PM, Abishek Subramanian (absubram) absub...@cisco.com
wrote:

Hi shshang, all,

I have some preliminary Horizon diffs available and if anyone
would be kind enough to patch them and try to test the
functionality, I'd really appreciate it.
I know I'm able to create subnets successfully with
the two modes but if there's anything else you'd like
to test or have any other user experience comments,
please feel free to let me know.

The diffs are at -  https://review.openstack.org/74453

Thanks!!



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-28 Thread Brian Haley
On 02/27/2014 11:55 AM, Dean Troyer wrote:
 There is a problem in two of DevStack's exercises, floating_ips.sh and
 volume.sh, where lib/neutron is not set up properly to handle the ping_check()
 function calls.  That is what leads to what you see.
  https://review.openstack.org/#/c/76867/ fixes the problem in the exercises.

Thanks for the patch Dean, it does fix the problem I was seeing.

Of course now boot_from_volume.sh fails because it doesn't include lib/neutron,
I've just pushed a patch for that, https://review.openstack.org/#/c/77212/

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Jay Pipes
On Fri, 2014-02-28 at 11:35 +, Day, Phil wrote:
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 24 February 2014 23:49
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova] Future of the Nova API
  
  
   Similarly with a Xen vs KVM situation I don't think its an extension
   related issue. In V2 we have features in *core* which are only
   supported by some virt backends. It perhaps comes down to not being
   willing to say either that we will force all virt backends to support
   all features in the API or they don't get in the tree. Or
   alternatively be willing to say no to any feature in the API which can
   not be currently implemented in all virt backends. The former greatly
   increases the barrier to getting a hypervisor included, the latter
   restricts Nova development to the speed of the slowest developing and
   least mature hypervisor supported.
  
  Actually, the problem is not feature parity. The problem lies where two
  drivers implement the same or similar functionality, but the public API for 
  a
  user to call the functionality is slightly different depending on which 
  driver is
  used by the deployer.
  
  There's nothing wrong at all (IMO) in having feature disparity amongst
  drivers.
 
 I agree with the rest of your posy Jay,

Phew. Good to know my posy is agreeable :)

  but I  think there are some feature parity issues - for example having 
 rescue always return a generated admin password when only some (one ?) 
 Hypervisor supports actually setting the password is an issue.

No disagreement from me there! I don't see that as an issue. Having an
optional attribute in the response result is perfectly fine, as long as
it is documented. There's a difference between that and having to issue
different API calls entirely to do the same or a similar action
depending on what the underlying hypervisor or driver implementation is.
Examples of the latter include how Nova supplies user and
configuration data to instances, as well as things like migrate,
live-migrate, and evacuate all being different API calls or API
extensions, when the operation is essentially the same...

Best,
-jay 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Bug in is_*_enabled functions?

2014-02-28 Thread Dean Troyer
On Fri, Feb 28, 2014 at 10:21 AM, Brian Haley brian.ha...@hp.com wrote:

 Of course now boot_from_volume.sh fails because it doesn't include
 lib/neutron,
 I've just pushed a patch for that, https://review.openstack.org/#/c/77212/


Thanks.  Its absence from the gate left it the poor stepchild many times.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Bug Triage - Woo Hoo!!

2014-02-28 Thread Tracy Jones


Hi Folks -   We had our 1st Bug Scrub meeting and it was a great success.  We 
concentrated on tagging all of the untagged bugs with appropriate tags.  The 
work is not complete, so if you would like to help out - please take a look 
here and tag away.


This table shows the official tags we are using, along with owners, count of 
un-triaged bugs, and count of triaged bugs.  Please scan this list for your 
name and do the following

1.  are you the right owner?  If not let me know
2.  triage your New bugs - there are instructions here
3.  please do this at least weekly if not more.


If you see a NO OWNER for an area you would like to own, please let me know.   
I’m looking for volunteers - we only need 5 more people to cover everything

Once we reach FF our focus moves from BP to bugs so you’ll be hearing from me 
more and more until we release icehouse.  :-D



Tag              Owner            New  Not-New
wat              russellb          48      617
network          arosen            16       47
libvirt          kchamart          15       90
testing          NO OWNER          10       38
compute          melwitt            7       51
cells            comstud            5       12
ec2              NO OWNER           5       27
volumes          ndipanov           4       12
api              cyeoh              3       84
console          NO OWNER           3        5
db               dripton            2       46
docker           ewindisch          2       16
lxc              zul                2        3
oslo             allison            2        7
baremetal        devananda          1       38
hyper-v          alexpilotti        1       17
novaclient       alaski             1        0
conductor        dansmith           0        3
nova-manage      NO OWNER           0        8
postgresql       dripton            0        0
rootwrap         ttx                0        1
scheduler        NO OWNER           0        9
unified-objects  dansmith           0        0
vmware           hartsocks          0       64
xenserver        johnthetubaguy     0       44
Total                             127     1239



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Testing functionality of IPv6 modes using Horizon

2014-02-28 Thread Martinx - ジェームズ
I'll wait for IceHouse-3 to arrive on Ubuntu 14.04 to start testing the
whole set of IPv6 features... Lab is ready, two /48s to play with...   =)


On 28 February 2014 12:55, Abishek Subramanian (absubram) 
absub...@cisco.com wrote:

 Hi,

 I just wanted to find out if anyone had been able to test using Horizon?
 Was everything ok?

 Additionally wanted to confirm - the two modes can be updated too yes
 when using neutron subnet-update?


 Thanks!

 On 2/18/14 12:58 PM, Abishek Subramanian (absubram) absub...@cisco.com
 wrote:

 Hi shshang, all,
 
 I have some preliminary Horizon diffs available and if anyone
 would be kind enough to patch them and try to test the
 functionality, I'd really appreciate it.
 I know I'm able to create subnets successfully with
 the two modes but if there's anything else you'd like
 to test or have any other user experience comments,
 please feel free to let me know.
 
 The diffs are at -  https://review.openstack.org/74453
 
 Thanks!!
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bug Triage - Woo Hoo!!

2014-02-28 Thread Chuck Short
Hi,

I would like to own ec2 as well.

thanks
chuck


On Fri, Feb 28, 2014 at 11:52 AM, Tracy Jones tjo...@vmware.com wrote:



 Hi Folks -   We had our 1st Bug Scrub meeting and it was a great success.
  We concentrated on tagging all of the untagged bugs with appropriate tags.
  The work is not complete, so if you would like to help out - please take a
 look here (https://bugs.launchpad.net/nova/+bugs?field.tag=-*field.status:list=NEW)
  and
 tag away.


 This table shows the official tags we are using, along with owners, count
 of un-triaged bugs, and count of triaged bugs.  Please scan this list for
 your name and do the following

 1.  are you the right owner?  If not let me know
 2.  triage your New bugs - there are instructions here
 (https://wiki.openstack.org/wiki/BugTriage)
 3.  please do this at least weekly if not more.


 If you see a NO OWNER for an area you would like to own, *please let me
 know*.   I'm looking for volunteers - we only need 5 more people to cover
 everything

 Once we reach FF our focus moves from BP to bugs so you'll be hearing from
 me more and more until we release icehouse.  :-D



 Tag              Owner            New  Not-New
 wat              russellb          48      617
 network          arosen            16       47
 libvirt          kchamart          15       90
 testing          NO OWNER          10       38
 compute          melwitt            7       51
 cells            comstud            5       12
 ec2              NO OWNER           5       27
 volumes          ndipanov           4       12
 api              cyeoh              3       84
 console          NO OWNER           3        5
 db               dripton            2       46
 docker           ewindisch          2       16
 lxc              zul                2        3
 oslo             allison            2        7
 baremetal        devananda          1       38
 hyper-v          alexpilotti        1       17
 novaclient       alaski             1        0
 conductor        dansmith           0        3
 nova-manage      NO OWNER           0        8
 postgresql       dripton            0        0
 rootwrap         ttx                0        1
 scheduler        NO OWNER           0        9
 unified-objects  dansmith           0        0
 vmware           hartsocks          0       64
 xenserver        johnthetubaguy     0       44
 Total                             127     1239




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-28 Thread Joe Gordon
Let's use https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync to
keep track of things.

On Wed, Feb 26, 2014 at 5:10 PM, Joe Gordon joe.gord...@gmail.com wrote:
 GCB, Ben,

 Thanks for volunteering to help.

 GCB reminded me that we should be doing this for python-novaclient in
 addition to nova itself. That being said, as I see it here are the
 steps moving forward:

 Note: as previously mentioned in this thread, there already is a team
 working on syncing oslo.db, so we can ignore that for now (and once it's
 ready they will propose patches, so we just have to do reviews).

 1) Review all outstanding nova/python-novaclient sync patches.
   https://review.openstack.org/#/c/72596/
   https://review.openstack.org/#/c/74560/
   https://review.openstack.org/#/c/75644/
 2) Using update.sh sync all low hanging fruit in both repos all at
 once. Low hanging fruit is anything that doesn't require a change
 outside of */openstack/common. As usual when doing these syncs we
 should list all patches being synced across, as well as document which
 modules we aren't syncing across
./update.sh --base novaclient --config-file
 ../python-novaclient/openstack-common.conf --dest-dir
 ../python-novaclient/
 https://review.openstack.org/#/c/76642/
   ./update.sh --base nova --config-file ../nova/openstack-common.conf
 --dest-dir ../nova/
 3) At this point we should have a list of modules that are non-trivial
 to sync, now we can triage them and decide if they are oslo bugs or if
 nova/python-novaclient code needs updating.


 So for now we need reviews on the patches listed in 1, and someone to
 work on the low hanging fruit sync for nova. Followed by triaging of
 the non-low hanging fruit.

 Once we have the low hanging fruit out of the way lets sync up about
 how to handle the rest.

 best,
 Joe


 On Fri, Feb 21, 2014 at 6:26 PM, ChangBo Guo glongw...@gmail.com wrote:



 2014-02-22 5:09 GMT+08:00 Ben Nemec openst...@nemebean.com:

 /me finally catches up on -dev list traffic...

 On 2014-02-19 20:27, Doug Hellmann wrote:




 On Wed, Feb 19, 2014 at 8:13 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.

 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated efforts and wildly out of date code. Part of the challenges
 today are backwards incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.


 Thank you for volunteering to spear-head this, Joe.

 +1

 To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date.  I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
   * Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)

 I am looking for volunteers to help with this effort, any takers?


 I will help, especially with reviews and tracking.

 I'm happy to help as well.  I always try to help with oslo syncs any time
 I'm made aware of problems anyway.

 What is our first step here?  Get the low-hanging fruit syncs proposed all
 at once?  Do them individually (taking into consideration the module deps,
 of course)?  If we're going to try to get this done for Icehouse then we
 probably need to start ASAP.

 -Ben

  I also would like to be volunteer of the new team :)
  -gcb


 --
 ChangBo Guo(gcb)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).

 -1

 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.


 Morgan and I talked this suggestion through last night and agreed it's
 probably the best approach, and has the benefit of zero impact on other
 services, which is something we're obviously trying to avoid. I imagine it
 could be as simple as a user_id to domain_id lookup table. All we really
 care about is given a globally unique user ID, which identity backend is
 the user from?

 On the downside, it would likely become bloated with unused ephemeral user
 IDs, so we'll need enough metadata about the mapping to implement a purging
 behavior down the line.
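
(A hypothetical shape for that lookup table, just to make the discussion
concrete -- column names are illustrative, not what Keystone would actually
ship:)

    import sqlalchemy as sa

    metadata = sa.MetaData()

    # Keystone keeps handing other services a short public ID and privately
    # remembers which backend/IdP and backend-native identifier it maps to.
    id_mapping = sa.Table(
        'id_mapping', metadata,
        sa.Column('public_id', sa.String(64), primary_key=True),
        sa.Column('domain_id', sa.String(64), nullable=False),
        sa.Column('local_id', sa.String(255), nullable=False),
        sa.Column('created_at', sa.DateTime()),  # enables purging stale rows
        sa.UniqueConstraint('domain_id', 'local_id'),
    )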


Is this approach planning on reusing the existing user-id field, then? It
seems like this creates a migration problem for folks who are currently
using user-ids that are generated by their identity backends.





 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.

  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.

 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.

 Best,
 -jay

  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  --
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] inconsistent naming? node vs host vs vs hypervisor_hostname vs OS-EXT-SRV-ATTR:host

2014-02-28 Thread Jiang, Yunhong
One reason for the confusion is that in some virt drivers (maybe xenapi or 
vmwareapi), one compute service manages multiple nodes.

--jyh

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Friday, February 28, 2014 7:40 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] inconsistent naming? node vs host vs vs
 hypervisor_hostname vs OS-EXT-SRV-ATTR:host
 
 Hi,
 
 I've been working with OpenStack for a while now but I'm still a bit
 fuzzy on the precise meaning of some of the terminology.
 
 It seems reasonably clear that a node is a computer running at least
 one component of an Openstack system.
 
 However, nova service-list talks about the host that a given service
 runs on.  Shouldn't that be node?  Normally host is used to
 distinguish from guest, but that doesn't really make sense for a
 dedicated controller node.
 
 nova show reports OS-EXT-SRV-ATTR:host and
 OS-EXT-SRV-ATTR:hypervisor_hostname for an instance.  What is the
 distinction between the two and how do they relate to OpenStack nodes
 or the host names in nova service-list?
 
 nova hypervisor-list uses the term hypervisor hostname, but nova
 hypervisor-stats talks about compute nodes.  Is this distinction
 accurate or should they both use the hypervisor terminology?  What is
 the distinction between hypervisor/host/node?
 
 nova host-list reports host_name, but seems to include all services.
   Does host_name correspond to host, hypervisor_host, or node?  And
 just to make things interesting, the other nova host-* commands only
 work on compute hosts, so maybe nova host-list should only output info
 for systems running nova-compute?
 
 
 Thanks,
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] OpenDaylight devstack support questions

2014-02-28 Thread Salvatore Orlando
Hi Kyle,

I think conceptually your approach is fine.
I would have had concerns if you were trying to manage ODL life cycle
through devstack (like installing/uninstalling it or configuring the ODL
controller).
But looking at your code it seems you're just setting up the host so that
it could work with opendaylight.

I agree however that extras.d is probably not the right place, as devstack
already has hooks in places for plugin configuration.
I think they are at least:
- configure
- check
- init
- install
- start

big switch, midokura, nec, ryu, and nsx already use these hooks.
I appreciate the fact that since this is a mech driver rather than a
plugin, this solution won't work out of the box, but at first glance it
should not be too hard to adapt it.

Salvatore



On 26 February 2014 22:47, Kyle Mestery mest...@noironetworks.com wrote:

 So, I have this review [1] which attempts to add support for OpenDaylight
 to devstack. What this currently does, in Patch 7, is that it uses the
 extras functionality of devstack to program the OVS on the host so that
 OpenDaylight can control it. On teardown, it does the reverse. Simple and
 straightforward. I've received feedback this isn't the correct approach
 here,
 and that using a plugin approach in lib/neutron_plugin/opendaylight would
 be better. I need hooks for when devstack is finished running, and when
 unstack is called. Those don't appear in the plugin interface for Neutron
 in devstack.

 Another point of inconsistency I'd like to bring up is the fact that
 patches
 for Neutron in devstack which propose running an Open Source controller
 are being flagged with -1. However, the Ryu plugin is already doing this. I
 suspect it was grandfathered in, but it sets an inconsistent precedent
 here.
 I propose we either remove Ryu from devstack, or continue to let other
 Open Source SDN controllers run inside devstack. Please see Patch 6
 of the review below for the minimal work it took me to add OpenDaylight
 there.

 Feedback appreciated here, I've been sitting on this devstack patch with
 minimal changes for a month. I'm also working with the Linux Foundation
 for the 3rd party testing requirements for ODL so the ML2 MechanismDriver
 can also go in.

 Thanks,
 Kyle

 [1] https://review.openstack.org/#/c/69774/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Enterprise Ready Features

2014-02-28 Thread Brandon Logan
Thanks for your response Jay.  I'm not a big fan of the term enterprise either 
but it's the best single-word term I could come up with to describe large-scale, 
multi-tenant deployments.  I know these are things every project wants but I'm 
just gauging how important it is to accomplish these goals in this project.  

As for Atlas LB, it has been dead for a year or two now.  Unless it somehow got 
resurrected and we don't know about it.  I really liked the API and object 
models, it allowed for multiple vips and was planned to implement a form of 
flavors (or types), not exactly the same way obviously, but the idea was 
there.  I also like that it was a standalone project.  A big problem with that 
project, though, was that it was going to be written in Java.  There were also 
other political reasons for it dying but those will remain unsaid.

The fragmentation is a bit of a concern but hopefully it will end up with the 
best ideas from all the projects going into one project that the community can 
agree on.  

We, Rackspace, are hoping to use Neutron LBaaS, but it does need to get into 
a more mature state.  The Cloud Load Balancing team (the team I am on) is 
looking to start contributing to this project to help get it to where we need 
it for this to happen. Obviously, we need some ramp up time to fully understand 
the project and get more involved in the discussions.  Hopefully we can 
contribute code and also share our experiences in what we learned in our 
successes and failures.  We are all looking forward to working with the 
community on this.

Thanks,
Brandon

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, February 27, 2014 8:52 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] Enterprise Ready Features

On Wed, 2014-02-26 at 18:46 +, Brandon Logan wrote:
 TL;DR: Are enterprise needed features (HA, scalability, resource
 management, etc) on this project's roadmap.

Yes. Although, due to my disdain for the term enterprise, I'd point
out that all of those features are things that most everyone wants, not
just shops with stodgy, old, legacy IT departments -- I mean...
enterprises ;)

  If so, how much of a priority is it?

Not sure.

 I've been doing some research on Neutron LBaaS to determine the
 viability and what needs to be done to allow for it to become an
 enterprise ready solution.

Out of curiosity, since you are at Rackspace, what about Atlas LB?

  Since I am fairly new to this project please forgive me, and also
 correct me, if my understanding of some of these things is false.
 I've already spoken to Eugene about some of this, but I think it would
 be nice to get everyone's opinion.  And since the object model
 discussions are going on right now I believe this to be a good time to
 bring it up.

Ya, no worries. I'm new to the LBaaS discussions myself.

 As of its current incarnation Neutron LBaaS does not seem to be HA,
 scalable, and doesn't isolate resources for each load balancer.  I
 know there is a blueprint for HA for the agent
 (https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-agent) and HA
 for HaProxy
 (https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy).
 That is only for HaProxy, though, and sounds like it has to be
 implemented at the driver level.

Right. Different drivers will enable HA in different ways.

 Is that the intended direction for implementing these goals, to
 implement them at the driver level?

I *believe* that is the intended direction, yes. The ongoing
conversations about Neutron flavors as well as the conversation about
the future object model and API have really been about how to expose the
capabilities of different drivers -- and match those capabilities to
requested capabilities from the user -- without leaking any particular
driver's implementation specifics into the public API. I think the hope
is that in the coming few months and during the summit, the community
will coalesce around a game plan for implementing flavors (or types,
as I prefer to call them), and from that implementation, contributors
will be able to work on adding these features to drivers and expose this
functionality in a generic fashion.

  I can definitely see why that is the way to do it because some
 drivers may already implement these features, while others don't.  It
 would be nice if there was a way to give those features to drivers
 that do not have it out of the box.

Well, on the HA and scaling front, one solution is to run multiple
instances of something like HA Proxy and have some healthcheck software
that would fail over operations from one load balancer instance to
another if a failure condition occurred. In fact, that is similar to
what the Libra project does with its node pool manager [1].

 Basically, I'd like this project to have these enterprise level
 features to that it can be adopted in an enterprise cloud.  It will
 require a lot of work to achieve 

Re: [openstack-dev] inconsistent naming? node vs host vs vs hypervisor_hostname vs OS-EXT-SRV-ATTR:host

2014-02-28 Thread Chris Friesen

On 02/28/2014 11:38 AM, Jiang, Yunhong wrote:

One reason for the confusion is that in some virt drivers (maybe xenapi or
vmwareapi), one compute service manages multiple nodes.


Okay, so in the scenario above, is the nova-compute service running on a
node or a host?  (And if it's a host, then what is the compute
node?)

What is the distinction between OS-EXT-SRV-ATTR:host and
OS-EXT-SRV-ATTR:hypervisor_hostname in the above case?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Henry Nash
Hi Mark,

We would not modify any existing IDs, so no migration is required.

Henry
On 28 Feb 2014, at 17:38, Mark Washenberger mark.washenber...@markwash.net 
wrote:

 
 
 
 On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.com 
 wrote:
 
 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:
 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).
 
 -1
 
 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.
 
 Morgan and I talked this suggestion through last night and agreed it's 
 probably the best approach, and has the benefit of zero impact on other 
 services, which is something we're obviously trying to avoid. I imagine it 
 could be as simple as a user_id to domain_id lookup table. All we really care 
 about is given a globally unique user ID, which identity backend is the user 
 from?
 
 On the downside, it would likely become bloated with unused ephemeral user 
 IDs, so we'll need enough metadata about the mapping to implement a purging 
 behavior down the line.
 
 Is this approach planning on reusing the existing user-id field, then? It 
 seems like this creates a migration problem for folks who are currently using 
 user-ids that are generated by their identity backends.
  
  
 
 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.
 
  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.
 
 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.
 
 Best,
 -jay
 
  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  —
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-28 Thread Nader Lahouti
 The idea behind this when we originally implemented notifications in
 Keystone was to
 provide the resource being changed, such as 'user', 'project', 'trust' and
 the uuid of that
 resource. From there your plugin could request more information from
 Keystone by doing a
 GET on that resource. This way we could keep the payload of the
 notification sent minimal
 in case all the information on the resource wasn't required.


The issue is that the notification is sent after the project is deleted, so no
additional information can be fetched (e.g. project name).  The GET
request fails, as only the project ID is in resource_info in the
notification. In my case I at least need the name of the project.
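
For context, the listener side is roughly the sketch below (the 'notifications'
topic and the 'identity.project.deleted' event type are assumptions based on
the event notification docs; adjust for your deployment and oslo.messaging
version):

# Minimal notification listener sketch -- assumes keystone emits to the
# default 'notifications' topic on the transport configured in cfg.CONF.
from oslo.config import cfg
from oslo import messaging


class KeystoneEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata=None):
        # e.g. event_type == 'identity.project.deleted'
        #      payload    == {'resource_info': '<project uuid>'}
        if event_type == 'identity.project.deleted':
            print('project deleted: %s' % payload.get('resource_info'))


transport = messaging.get_transport(cfg.CONF)
targets = [messaging.Target(topic='notifications')]
listener = messaging.get_notification_listener(transport, targets,
                                                [KeystoneEndpoint()])
listener.start()
listener.wait()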

Thanks,
Nader.




On Mon, Feb 24, 2014 at 10:50 AM, Lance D Bragstad ldbra...@us.ibm.comwrote:

 Response below.


 Best Regards,

 Lance Bragstad
 ldbra...@us.ibm.com

 Nader Lahouti nader.laho...@gmail.com wrote on 02/24/2014 11:31:10 AM:

  From: Nader Lahouti nader.laho...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org,
  Date: 02/24/2014 11:37 AM
  Subject: Re: [openstack-dev] [keystone] Notification When Creating/
  Deleting a Tenant in openstack

 
  Hi Swann,
 
  I was able to listen to keystone notification by setting
  notifications in the keystone.conf file. I only needed the
  notification (CURD) for project and handle it in my plugin code so
  don't need ceilometer to handle them.
  The other issue is that the notification is only for limited to
  resource_id  and don't have other information such as project name.

 The idea behind this when we originally implemented notifications in
 Keystone was to
 provide the resource being changed, such as 'user', 'project', 'trust' and
 the uuid of that
  resource. From there your plugin could request more information from
  Keystone by doing a
  GET on that resource. This way we could keep the payload of the
 notification sent minimal
 in case all the information on the resource wasn't required.


The issue that I'm facing is that the GET fails as the project is already
deleted from the database, so I cannot get any info beyond what is in
resource_info in the notification.



 
  Thanks,
  Nader.
 
 

  On Mon, Feb 24, 2014 at 2:10 AM, Swann Croiset swan...@gmail.com
 wrote:
 
  Hi Nader,
 
  These notifications must be handled by Ceilometer like others [1].
  it is surprising that it does not already have identity meters indeed...
  probably nobody needs them before you.
  I guess it remains to open a BP and code them like I recently did for
 Heat [2]
 
 
  http://docs.openstack.org/developer/ceilometer/measurements.html
 
 https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
 

  2014-02-20 19:10 GMT+01:00 Nader Lahouti nader.laho...@gmail.com:
 
  Thanks Dolph for link. The document shows the format of the message
  and doesn't give any info on how to listen to the notification.
  Is there any other document showing the detail on how to listen or
  get these notifications ?
 
  Regards,
  Nader.
 
  On Feb 20, 2014, at 9:06 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

  Yes, see:
 
http://docs.openstack.org/developer/keystone/event_notifications.html
 
  On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti nader.laho...@gmail.com
   wrote:
  Hi All,
 
  I have a question regarding creating/deleting a tenant in openstack
  (using horizon or CLI). Is there any notification mechanism in place
  so that an application get informed of such an event?
 
  If not, can it be done using plugin to send create/delete
  notification to an application?
 
  Appreciate your suggestion and help.
 
  Regards,
  Nader.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Joshua Harlow
Sounds good,

Let's connect, the value of central oslo-connected projects is that shared 
libraries == shared pain. Duplicating features and functionality is always 
more pain. In the end we are a community, not silos, so it seems like before 
mistral goes down the path of duplicating more and more features (I understand 
the desire to POC mistral and learn what mistral wants to become, and all that) 
that we should start the path to working together. I personally am worried that 
mistral will start to apply for incubation and then the question will come up 
as to this (mistral was doing POC, kept on doing POC, never came back to using 
common libraries, and then gets asked why this happened).

I'd like to make us all successful, and as an old saying goes,

“A single twig breaks, but the bundle of twigs is strong”, openstack needs to 
be a cohesive bundle and not a single twig ;)

From: Renat Akhmerov rakhme...@mirantis.commailto:rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 6:31 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Hi Joshua,

Sorry, I’ve been very busy for the last couple of days and didn’t respond 
quickly enough.

Well, first of all, it’s my bad that I’ve not been following TaskFlow progress 
for a while and, honestly, I just need to get more info on the current TaskFlow 
status. So I’ll do that and get back to you soon. As you know, there were 
reasons why we decided to go this path (without using TaskFlow) but I’ve always 
thought we will be able to align our efforts as we move forward once 
requirements and design of Mistral become more clear. I really want to use 
TaskFlow for Mistral implementation. We just need to make sure that it will 
bring more value than pain (sorry if it sounds harsh).

Thanks for your feedback and this info. We’ll get in touch with you soon.

Renat Akhmerov
@ Mirantis Inc.



On 27 Feb 2014, at 03:22, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:

So this design is starting to look pretty familiar to what we have in 
taskflow.

Any reason why it can't just be used instead?

https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine

This code is in a functional state right now, using kombu (for the moment, 
until oslo.messaging becomes py3 compliant).

The concept of an engine which puts messages on a queue for a remote executor is 
in fact exactly what taskflow is doing (the remote executor/worker will 
then respond when it is done and the engine will then initiate the next piece 
of work to do) in the above listed etherpad (and which is implemented).

Is it the case that in mistral the engine will be maintaining the 
'orchestration' of the workflow during the lifetime of that workflow? In the 
case of mistral what is an engine server? Is this a server that has engines in 
it (where each engine is 'orchestrating' the remote/local workflows and 
monitoring and recording the state transitions and data flow that is 
occurring)? The details @ 
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
 seems to be already what taskflow provides via its engine object; creating an 
application which runs engines, where those engines initiate workflows, is made to 
be dead simple.

From previous discussions with the mistral folks it seems like the overlap 
here is getting more and more, which seems to be bad (and means something is 
broken/wrong). In fact most of the concepts that u have blueprints for have 
already been completed in taskflow (data-flow, engine being disconnected from 
the rest api…) and ones u don't have listed (resumption, reversion…).

What can we do to fix this situation?

-Josh

From: Nikolay Makhotkin 
nmakhot...@mirantis.commailto:nmakhot...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, February 25, 2014 at 11:30 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Looks good. Thanks, Winson!

Renat, What do you think?


On Wed, Feb 26, 2014 at 10:00 AM, W Chan 
m4d.co...@gmail.commailto:m4d.co...@gmail.com wrote:
The following link is the google doc of the proposed engine/executor message 
flow architecture.  
https://drive.google.com/file/d/0B4TqA9lkW12PZ2dJVFRsS0pGdEU/edit?usp=sharing

The diagram on the right is the scalable engine where one or more engine sends 
requests over a transport to one or more executors.  The executor client, 
transport, and 

Re: [openstack-dev] Bug Triage - Woo Hoo!!

2014-02-28 Thread Russell Bryant
On 02/28/2014 11:52 AM, Tracy Jones wrote:
 Tag   Owner      New   Not-New
 wat   russellb   48    617

'wat' was originally supposed to be 'queue up for tier 2 triage', but
was never really used.  I'm not actually sure it's worth keeping.  We
should probably just force things into proper categories and have the
right experts on each category.

Based on the numbers here though, it doesn't sound like it's actually
the number of bugs tagged with 'wat'.  Are these the bugs not in any of
the other categories?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool

2014-02-28 Thread Samuel Bercovici
Rabi,

This is correct.
The API does allow you to do so.

-Sam.

-Original Message-
From: Rabi Mishra [mailto:ramis...@redhat.com] 
Sent: Wednesday, February 26, 2014 1:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Significance of subnet_id for LBaaS Pool


- Original Message -
 From: Mark McClain mmccl...@yahoo-inc.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, February 26, 2014 3:43:59 AM
 Subject: Re: [openstack-dev] [neutron] Significance of subnet_id for 
 LBaaS Pool
 
 
 On Feb 25, 2014, at 1:06 AM, Rabi Mishra ramis...@redhat.com wrote:
 
  Hi All,
  
  'subnet_id' attribute of LBaaS Pool resource has been documented as 
  The network that pool members belong to
  
  However, with 'HAProxy' driver, it allows to add members belonging 
  to different subnets/networks to a lbaas Pool.
  
 Rabi-
 
 The documentation is a bit misleading here.  The subnet_id in the pool 
 is used to create the port that the load balancer instance uses to 
 connect with the members.

I assume then that the validation in Horizon forcing the VIP IP to be from this pool 
subnet is incorrect, i.e. the VIP address can be from a different subnet.

 
 mark
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] [smart-scenario-args]

2014-02-28 Thread Oleg Gelbukh
Sergey,

What do you think about adoption of/integration with other types of
resource definition languages used in OpenStack, for example, Heat
Orchestration Templates?

--
Best regards,
Oleg Gelbukh


On Thu, Feb 27, 2014 at 6:31 PM, Sergey Skripnick
sskripn...@mirantis.comwrote:


 Hello,

  Problem: what about deployment specific parts
 Template string in config? %imageid% or similar?
 Image name regex, rather than image name? so can work with multiple
 deployments, eg ^cirros$



 so we have a few solutions for today: function, vars, and special args.


 FUNCTION
 
 args: {image_id: {$func: img_by_reg, $args: [ubuntu.*]}}

 Flexible but configuration looks complex.

 VARS
 
 vars : {
 $image1 : {$func: img_by_reg, $args: [ubuntu.*]},
 $image2: {$func: img_by_reg, $args: [centos.*]}
 }
 args: {
image_id: $image1,
alt_image_id: $image2
 }

 This may be an addition to the first solution, but personally to me it
 looks like overkill.

 SPECIAL ARGS
 
 args: {image_re: {ubuntu.*}}

 Very simple configuration, but less flexible than others. IMO all three may
 be implemented.

 I vote for special args, and IMO functions may be implemented too.
 Please feel free to propose other solutions.

 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Need unique ID for every Network Service

2014-02-28 Thread Stephen Balukoff
Hi y'all!

The ongoing debate in the LBaaS group is whether the concept of a
'Loadbalancer' needs to exist  as an entity. If it is decided that we need
it, I'm sure it'll have a unique ID. (And please feel free to join the
discussion on this as well, eh!)

Stephen


On Thu, Feb 27, 2014 at 10:27 PM, Veera Reddy veerare...@gmail.com wrote:

 Hi,

 Good idea to have unique id for each entry of network functions.
  So that we can configure multiple network functions with different
 configuration.


 Regards,
 Veera.


 On Fri, Feb 28, 2014 at 11:23 AM, Srikanth Kumar Lingala 
 srikanth.ling...@freescale.com wrote:

  Hi-

 In the existing Neutron, we have FWaaS, LBaaS, VPNaaS …etc.

 In FWaaS, each Firewall has its own UUID.

 It is good to have a unique ID [UUID] for LBaaS also.



 Please share your comments on the above.



 Regards,

 Srikanth.

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread W Chan
All,
This is a great start.  I think the sooner we have this discussion the
better.  Any uncertainty in the direction/architecture here is going to
stall progress.  How about Convection?  What's the status of the Convection
project and where it's heading?  Should we have similar discussion with the
contributors of that project?

Joshua,
I have a few questions about TaskFlow.
1) How does it handle conditional loop and expression evaluation for
decision branching?  I've looked at the Taskflow wiki/code briefly and it's
not obvious.  I assume it would be logic that user will embed within a task?
2) How about predefined catalog of standard tasks (i.e. REST call, SOAP
call, Email task, etc.)?  Is that within the scope of Taskflow or up to
TaskFlow consumers like Mistral?
3) Does TaskFlow have its own DSL?  The examples provided are mostly code
based.

Thanks.
Winson




On Fri, Feb 28, 2014 at 10:54 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Sounds good,

  Lets connect, the value of central oslo connected projects is that
 shared libraries == share the pain. Duplicating features and functionality
 is always more pain. In the end we are a community, not silos, so it seems
 like before mistral goes down the path of duplicating more and more
 features (I understand the desire to POC mistral and learn what mistral
 wants to become, and all that) that we should start the path to working
 together. I personally am worried that mistral will start to apply for
 incubation and then the question will come up as to this (mistral was doing
 POC, kept on doing POC, never came back to using common libraries, and then
 gets asked why this happened).

  I'd like to make us all successful, and as a old saying goes,

  A single twig breaks, but the bundle of twigs is strong, openstack
 needs to be a cohesive bundle and not a single twig ;)

   From: Renat Akhmerov rakhme...@mirantis.com

 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, February 28, 2014 at 6:31 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to
 oslo.messaging

   Hi Joshua,

  Sorry, I've been very busy for the last couple of days and didn't
 respond quickly enough.

  Well, first of all, it's my bad that I've not been following TaskFlow
 progress for a while and, honestly, I just need to get more info on the
 current TaskFlow status. So I'll do that and get back to you soon. As you
 know, there were reasons why we decided to go this path (without using
 TaskFlow) but I've always thought we will be able to align our efforts as
 we move forward once requirements and design of Mistral become more clear.
 I really *want* to use TaskFlow for Mistral implementation. We just need
 to make sure that it will bring more value than pain (sorry if it sounds
 harsh).

  Thanks for your feedback and this info. We'll get in touch with you soon.

  Renat Akhmerov
 @ Mirantis Inc.



  On 27 Feb 2014, at 03:22, Joshua Harlow harlo...@yahoo-inc.com wrote:

  So this design is starting to look pretty familiar to a what we have in
 taskflow.

  Any reason why it can't just be used instead?

  https://etherpad.openstack.org/p/TaskFlowWorkerBasedEngine

  This code is in a functional state right now, using kombu (for the
 moment, until oslo.messaging becomes py3 compliant).

  The concept of a engine which puts messages on a queue for a remote
 executor is in-fact exactly the case taskflow is doing (the remote
 exeuctor/worker will then respond when it is done and the engine will then
 initiate the next piece of work to do) in the above listed etherpad (and
 which is implemented).

  Is it the case that in mistral the engine will be maintaining the
 'orchestration' of the workflow during the lifetime of that workflow? In
 the case of mistral what is an engine server? Is this a server that has
 engines in it (where each engine is 'orchestrating' the remote/local
 workflows and monitoring and recording the state transitions and data flow
 that is occurring)? The details @
 https://blueprints.launchpad.net/mistral/+spec/mistral-engine-standalone-process
  seems
 to be already what taskflow provides via its engine object, creating a
 application which runs engines and those engines initiate workflows is made
 to be dead simple.

  From previous discussions with the mistral folks it seems like the
 overlap here is getting more and more, which seems to be bad (and means
 something is broken/wrong). In fact most of the concepts that u have
 blueprints for have already been completed in taskflow (data-flow, engine
 being disconnected from the rest api...) and ones u don't have listed
 (resumption, reversion...).

  What can we do to fix this situation?

  -Josh

   From: Nikolay Makhotkin nmakhot...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 

Re: [openstack-dev] [Neutron][LBaaS] Enterprise Ready Features

2014-02-28 Thread Stephen Balukoff
Hi Brandon,

Glad to see more new blood in this project and discussion. (I've also only
recently gotten involved in the discussion here. And I've sent my ideas on
this front to this list in the last couple of weeks-- let me know if you'd
like me to send you links to this discussion.)

I think Jay covered what I understand to be the current state of affairs at
this point, but I would point out that largely the discussion so far has
been about the right way to implement Layer-7 features and SSL, and not
really about how HA and scalability play into this (though I've been trying
to foster the discussion in this direction. :) )

As such, I don't think it's fair to say that the Neutron LBaaS community
has fully considered the HA or scalability problems just yet, nor what
changes are going to need to happen to make these features available. And
given that, I don't get the impression that it's at all decided that HA and
scalability features will be solely delegated to the driver level--  that
is to say, the community may yet decide there are certain entities (like
the idea of a logical load balancer construct) that make sense to apply
to the Neutron LBaaS model which help both driver writers to implement HA
features, and help end users to write their applications without having to
navigate a minefield of optionally supported features of a given load
balancer implementation.  (I'm actually looking forward to seeing Jay's
fleshed out ideas which I understand counter this point which may make a
'loadbalancer' object unnecessary and thus allow a lot more flexibility in
implementation.)

Given we're still struggling to find consensus on the changes that will
enable L7 and SSL, it may be a bit premature to start talking about HA and
scalability (though, again, I would like it if we could be this
forward-thinking about the project. :) ) I'm really hoping we can make
significant progress on this at the Atlanta summit in May.

In any case, welcome to the discussion!

Stephen



On Fri, Feb 28, 2014 at 9:51 AM, Brandon Logan
brandon.lo...@rackspace.comwrote:

 Thanks for your response Jay.  I'm not a big fan of the term enterprise
 either but its the best single word term I could come up with to describe
 large scale, multi tenant deployments.  I know these are things every
 project wants but I'm just gauging how important it is to accomplish these
 goals in this project.

 As for Atlas LB, it has been dead for a year or two now.  Unless it
 somehow got resurrected and we don't know about it.  I really liked the API
 and object models, it allowed for multiple vips and was planned to
 implement a form of flavors (or types), not exactly the same way
 obviously, but the idea was there.  I also like that it was a standalone
 project.  A big problem with that project, though, was that it was going to
 be written in Java.  There were also other political reasons for it dying
 but those will remain unsaid.

 The fragmentation is a bit of a concern but hopefully it will end up with
 the best ideas from all the projects going into one project that the
 community can agree on.

 We, Rackspace, are hoping to use Neutron LBaaS it but it does need to get
 into a more mature state.  The Cloud Load Balancing team (the team I am on)
 is looking to start contributing to this project to help get it to where we
 need it for this to happen. Obviously, we need some ramp up time to fully
 understand the project and get more involved in the discussions.  Hopefully
 we can contribute code and also share our experiences in what we learned in
 our successes and failures.  We are all looking forward to working with the
 community on this.

 Thanks,
 Brandon
 
 From: Jay Pipes [jaypi...@gmail.com]
 Sent: Thursday, February 27, 2014 8:52 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Enterprise Ready Features

 On Wed, 2014-02-26 at 18:46 +, Brandon Logan wrote:
  TL;DR: Are enterprise needed features (HA, scalability, resource
  management, etc) on this project's roadmap.

 Yes. Although, due to my disdain for the term enterprise, I'd point
 out that all of those features are things that most everyone wants, not
 just shops with stodgy, old, legacy IT departments -- I mean...
 enterprises ;)

   If so, how much of a priority is it?

 Not sure.

  I've been doing some research on Neutron LBaaS to determine the
  viability and what needs to be done to allow for it to become an
  enterprise ready solution.

 Out of curiosity, since you are at Rackspace, what about Atlas LB?

   Since I am fairly new to this project please forgive me, and also
  correct me, if my understanding of some of these things is false.
  I've already spoken to Eugene about some of this, but I think it would
  be nice to get everyone's opinion.  And since the object model
  discussions are going on right now I believe this to be a good time to
  bring it up.

 Ya, no worries. I'm new to the 

Re: [openstack-dev] [Heat] Thoughts on adding a '--progress' option?

2014-02-28 Thread Zane Bitter

On 28/02/14 02:28, Qiming Teng wrote:


The creation of a stack is usually a time-costly process, considering that
there are cases where software packages need to be installed and
configured.

There are also cases where a stack consists of more than one VM instance
with dependencies between the instances.  The instances may have to be
created one by one.

Are Heat people considering adding some progress updates during the
deployment?  For example, a simple log that can be printed by heatclient
telling the user what progress has been made:

Refreshing known resources types
Receiving template ...
Validating template ...
Creating resource my_lb [AWS::EC2:LoadBalancer]
Creating resource lb_instance1 [AWS::EC2::Instance]
Creating resource latency_watcher [AWS::CloudWatch::Alarm]

...


This would be useful for users to 'debug' their templates, especially
when the template syntax is okay but its activities are not the intended
ones.


Yes, we need some sort of back-channel to feed information to the user - 
not only on progress but things like warnings. Right now we have to 
choose between failing a whole stack or not notifying the user at all 
when something is suspect.


The ReST model is unfortunately not conducive to this, as you generally 
don't want to keep the HTTP connection open and block until something is 
complete. One good idea floating around is to send the messages to a 
Marconi queue that the user can connect to.


It's all up for discussion in this blueprint:

https://blueprints.launchpad.net/heat/+spec/user-visible-logs
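
In the meantime, a client can approximate a progress view by polling the 
stack's events through the native API -- roughly the sketch below (auth and 
endpoint handling elided; HEAT_ENDPOINT, AUTH_TOKEN and the stack id are 
placeholders, and the exact client calls may differ slightly between 
heatclient versions):

import time

from heatclient.v1 import client as heat_client

# Placeholders -- fill these in from your keystone catalog / token.
heat = heat_client.Client(endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)

seen = set()
while True:
    for event in heat.events.list(stack_id='my_stack'):
        if event.id not in seen:
            seen.add(event.id)
            print('%s %s: %s' % (event.event_time,
                                 event.resource_name,
                                 event.resource_status))
    time.sleep(5)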


Do we have to rely on heat-cfn-api to get these notifications?


No, you can also use the native api ;)

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Questions about syncing non-imported files

2014-02-28 Thread Doug Hellmann
On Fri, Feb 28, 2014 at 3:35 AM, ChangBo Guo glongw...@gmail.com wrote:

 1)
 I found modules tracked in openstack-common.conf are not consistent with
 actual modules in directory 'openstack/common' in some projects like Nova. I
 drafted a script
 to enforce the check in https://review.openstack.org/#/c/76901/. Maybe
 need more work
 to improve it. Please help review :).


We allow automatic dependency resolution so that projects don't have to
keep up with the imports that occur between the incubated libraries.



 2)
 Some projects include README ,which is out of date in direcotry
 'openstack/common'
 like Nova, Cinder. But other projects don't include it. Should we keep the
 file in
 directory 'openstack/common'? or move to other pace or just remove it.


I like the idea of having the README, but agree we should either update or
remove it.




 3) What kind of module can be recorded in openstack-common.conf ? only
 modules in
 directory openstack/common ? This is an example:
 https://github.com/openstack/nova/blob/master/openstack-common.conf#L17
 


Some of the tools, like the config sample generator and the test running
scripts, can also be copied out of the incubator. We should leave that
ability in place until those tools graduate to their own libraries.




 4) We have some useful check scripts in tools; is there any plan and rule to
  sync them to downstream projects? I would like to be a volunteer for this.


I would rather we focus on moving tools like that into installable
packages. I'd be happy to have some help making a list of what those tools
are and what libraries they might move into. If you want to make some notes
in https://wiki.openstack.org/wiki/Oslo/GraduationStatus that would be a
big help.

Doug






 --
 ChangBo Guo(gcb)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-28 Thread Christopher Lefelhocz
+1 

If we don't recognize that v2 is going to be around for a long time, will see
some growth, and will require support, we are kidding ourselves.  Establishing a
timeline to deprecation prior to the release of v3 is not the right decision
point.  We should determine if v3 is ready for primetime, be willing to
support both on an ongoing basis, and be ready to bring information to the table to
determine the right timeline for deprecation in a future release (a
timeline for a timeline).

Christopher

On 2/28/14 8:48 AM, Day, Phil philip@hp.com wrote:

The current set of reviews on this change seems relevant to this debate:
https://review.openstack.org/#/c/43822/

In effect a fully working and tested change which makes the nova-net /
neutron compatibility via the V2 API that little bit closer to being
complete is being blocked because it's thought that by not having it
people will be quicker to move to V3 instead.

Folks this is just madness - no one is going to jump to using V3 just
because we don't fix minor things like this in V2,  they're just as
likely to start jumping to something completely different because that
Openstack stuff is just too hard to work with. Users don't think
like developers, and you can't force them into a new API by deliberately
keeping the old one bad - at least not if you want to keep them as users
in the long term.

I can see an argument (maybe) for not adding lots of completely new
features into V2 if V3 was already available in a stable form - but V2
already provides nearly complete support for nova-net features on top
of Neutron.I fail to see what is wrong with continuing to improve
that.

Phil

 -Original Message-
 From: Day, Phil
 Sent: 28 February 2014 11:07
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
  -Original Message-
  From: Chris Behrens [mailto:cbehr...@codestud.com]
  Sent: 26 February 2014 22:05
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Future of the Nova API
 
 
  This thread is many messages deep now and I'm busy with a conference
  this week, but I wanted to carry over my opinion from the other v3
  API in Icehouse thread and add a little to it.
 
  Bumping versions is painful. v2 is going to need to live for a long
  time to create the least amount of pain. I would think that at least
  anyone running a decent sized Public Cloud would agree, if not anyone
  just running any sort of decent sized cloud. I don't think there's a
  compelling enough reason to deprecate v2 and cause havoc with what we
  currently have in v3. I'd like us to spend more time on the proposed
  tasks changes. And I think we need more time to figure out if we're
  doing versioning in the correct way. If we've got it wrong, a v3
  doesn't fix the problem and we'll just be causing more havoc with a
v4.
 
  - Chris
 
  Like Chris I'm struggling to keep up with this thread, but of all the
  various messages I've read this is the one that resonates most with me.
 
  My perception of the V3 API improvements (in order of importance to me):
  i) The ability to version individual extensions.  Crazy that small
  improvements can't be introduced without having to create a new extension,
  when often the extension really does nothing more than indicate that some
  other part of the API code has changed.
 
 ii) The opportunity to get the proper separation between Compute and
  Network APIs.  Being (I think) one of the few clouds that provides both
  the Nova and Neutron API, this is a major source of confusion and hence
  support calls.
 
 iii) The introduction of the task model
  I like the idea of tasks, and think it will be a much easier way for
  users to interact with the system.  Not convinced that it couldn't co-exist
  in V2 though, rather than having to co-exist as V2 and V3
 
  iv) Clean-up of a whole bunch of minor irritations / inconsistencies
  There are lots of things that are really messy (inconsistent error codes,
  aspects of core that are linked to just Xen, etc, etc).  They annoy people
  the first time they hit them, then code around them and move on.  Probably
  I've had more hate mail from people writing language bindings than
  application developers (who tend to be abstracted from this by the clients)
 
 
  Phil
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Porting executor and engine to oslo.messaging

2014-02-28 Thread Joshua Harlow
Convection? Afaik u guys are building convection (convection was just an idea, 
I see mistral as the POC/impl) ;)

https://wiki.openstack.org/wiki/Convection#NOTICE:_Similar_project_-.3E_Mistral

So questions around taskflow:

  1.  Correct, u put it in your task (see the short sketch after this list). There were previous ideas/work done by the 
team @ https://etherpad.openstack.org/p/BrainstormFlowConditions but from 
previous people who have built said systems it was determined that there 
wasn't much need for conditionals (yet). As for expression evaluation, not sure what that means; being a library, any type of expression 
evaluation is just whatever u can imagine in python. Conditional tasks (and 
such) being managed by taskflow's engines is something we can reconsider and might even be 
possible, but this is imho dangerous territory that is being approached; 
expression evaluation and conditional branching and loops is basically a 
language specification ;)
  2.  I don't see taskflow managing a catalog (currently), that seems out of 
scope of a library that provides the execution, resumption parts (any consumer 
of taskflow should be free to define and organize their catalog as they choose).
  3.  Negative, taskflow is an execution and state-management library (not a 
full framework imho) that helps build the upper layers that services like 
mistral can use (or nova, or glance or…). I don't feel it's the right place to 
have taskflow force a DSL onto people, since the underlying primitives that can 
form an upper-level DSL are more service/app level choices (heat has its DSL, 
mistral has its own, both are fine, and both likely can take advantage of the 
same taskflow execution and state-management primitives to use in their 
services).
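
To make (1) concrete, here's a rough sketch (the task and variable names are
made up; the point is that branching is just plain python inside execute() and
taskflow runs the flow and tracks the state/data-flow):

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class CallService(task.Task):
    def execute(self, url):
        # pretend this is a REST call; any branching u need is normal python
        return {'url': url, 'ok': True}


class HandleResult(task.Task):
    def execute(self, result):
        if result['ok']:   # "conditional" logic lives inside the task
            print('handling %s' % result['url'])


flow = linear_flow.Flow('demo').add(
    CallService(provides='result'),
    HandleResult(),
)
engines.run(flow, store={'url': 'http://example.com'})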

Hope that helps :)

-Josh

From: W Chan m4d.co...@gmail.commailto:m4d.co...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 12:02 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

All,
This is a great start.  I think the sooner we have this discussion the better.  
Any uncertainty in the direction/architecture here is going to stall progress.  
How about Convection?  What's the status of the Convection project and where 
it's heading?  Should we have similar discussion with the contributors of that 
project?

Joshua,
I have a few questions about TaskFlow.
1) How does it handle conditional loop and expression evaluation for decision 
branching?  I've looked at the Taskflow wiki/code briefly and it's not obvious. 
 I assume it would be logic that user will embed within a task?
2) How about predefined catalog of standard tasks (i.e. REST call, SOAP call, 
Email task, etc.)?  Is that within the scope of Taskflow or up to TaskFlow 
consumers like Mistral?
3) Does TaskFlow have its own DSL?  The examples provided are mostly code based.

Thanks.
Winson




On Fri, Feb 28, 2014 at 10:54 AM, Joshua Harlow 
harlo...@yahoo-inc.commailto:harlo...@yahoo-inc.com wrote:
Sounds good,

Lets connect, the value of central oslo connected projects is that shared 
libraries == share the pain. Duplicating features and functionality is always 
more pain. In the end we are a community, not silos, so it seems like before 
mistral goes down the path of duplicating more and more features (I understand 
the desire to POC mistral and learn what mistral wants to become, and all that) 
that we should start the path to working together. I personally am worried that 
mistral will start to apply for incubation and then the question will come up 
as to this (mistral was doing POC, kept on doing POC, never came back to using 
common libraries, and then gets asked why this happened).

I'd like to make us all successful, and as a old saying goes,

“A single twig breaks, but the bundle of twigs is strong”, openstack needs to 
be a cohesive bundle and not a single twig ;)

From: Renat Akhmerov rakhme...@mirantis.commailto:rakhme...@mirantis.com

Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, February 28, 2014 at 6:31 AM

To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral] Porting executor and engine to 
oslo.messaging

Hi Joshua,

Sorry, I’ve been very busy for the last couple of days and didn’t respond 
quickly enough.

Well, first of all, it’s my bad that I’ve not been following TaskFlow progress 
for a while and, honestly, I just need to get more info on the current TaskFlow 
status. So I’ll do that and get back to you soon. As you know, there were 
reasons why we decided to go this path 

[openstack-dev] [Cinder] Get volumes REST API with filters and limit

2014-02-28 Thread Steven Kaufer


I am investigating some pagination enhancements in nova and cinder (see
nova blueprint
https://blueprints.launchpad.net/nova/+spec/nova-pagination).

In cinder, it appears that all filtering is done after the volumes are
retrieved from the database (see the API.get_all function in
https://github.com/openstack/cinder/blob/master/cinder/volume/api.py).
Therefore, the combination of filters and limit will only work if all
volumes matching the filters are in the page of data being retrieved from
the database.

For example, assume that all of the volumes with a name of foo would be
retrieved from the database starting at index 100 and that you query for
all volumes with a name of foo while specifying a limit of 50.  In this
case, the query would yield 0 results since the filter did not match any of
the first 50 entries retrieved from the database.
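
A toy illustration of the ordering problem (not cinder code, just to make the
failure mode concrete):

# Pretend the DB holds 200 volumes and the 'foo' volumes start at index 100.
volumes = [{'id': i, 'name': 'foo' if i >= 100 else 'bar'}
           for i in range(200)]
limit = 50

# What effectively happens today: the limit is applied in the DB query,
# then the name filter is applied in python afterwards.
page = volumes[:limit]
matches = [v for v in page if v['name'] == 'foo']
print(len(matches))   # 0 -- even though 100 matching volumes exist

# What the caller expects: the filter pushed down before the limit.
expected = [v for v in volumes if v['name'] == 'foo'][:limit]
print(len(expected))  # 50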

Is this a known problem?
Is this considered a bug?
How should this get resolved?  As a blueprint for juno?

I am new to the community and am trying to determine how this should be
addressed.

Thanks,

Steven Kaufer___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][policy] Blueprint document

2014-02-28 Thread Kyle Mestery
On Feb 28, 2014, at 8:01 AM, Carlos Gonçalves m...@cgoncalves.pt wrote:

 Hi all,
 
 As the blueprint document is write-protected, the “See revision history” 
 option is greyed out for viewers, making it difficult to keep track of 
 changes. Hence, if there is no way as a viewer to see the revision 
 history, could someone add me to the document please? My Google ID is 
 carlos.ei.goncalves.
 
I’ve added you Carlos.

Thanks,
Kyle

 Thanks,
 Carlos Goncalves
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [neutron] OpenDaylight devstack support questions

2014-02-28 Thread Kyle Mestery
Thanks Salvatore. I spent some time talking with dtroyer on IRC a few
days ago, and I have a path forward. One thing I wanted to point out is
that eventually the goal is to manage the lifecycle of ODL inside of
devstack. This is similar to what is being done with Ryu and Trema
already in devstack. But the first cut will simply allow for configuration
and de-configuration of OVS to work with ODL.

On Feb 28, 2014, at 11:46 AM, Salvatore Orlando sorla...@nicira.com wrote:

 Hi Kyle,
 
 I think conceptually your approach is fine.
 I would have had concerns if you were trying to manage ODL life cycle through 
 devstack (like installing/uninstalling it or configuring the ODL controller).
 But looking at your code it seems you're just setting up the host so that it 
 could work with opendaylight.
 
 I agree however that extras.d is probably not the right place, as devstack 
 already has hooks in places for plugin configuration.
 I think they are at least:
 - configure
 - check
 - init
 - install
 - start 
 
 big switch, midokura, nec, ryu, and nsx already use these hooks.
 I appreciate the fact that since this is a mech driver rather than a plugin, 
 this solution won't work out of the box, but at first glance it should not be 
 too hard to adapt it.
 
 Salvatore
 
 
 
 On 26 February 2014 22:47, Kyle Mestery mest...@noironetworks.com wrote:
 So, I have this review [1] which attempts to add support for OpenDaylight
 to devstack. What this currently does, in Patch 7, is that it uses the
 extras functionality of devstack to program the OVS on the host so that
 OpenDaylight can control it. On teardown, it does the reverse. Simple and
 straightforward. I've received feedback this isn't the correct approach here,
 and that using a plugin approach in lib/neutron_plugin/opendaylight would
 be better. I need hooks for when devstack is finished running, and when
 unstack is called. Those don't appear in the plugin interface for Neutron
 in devstack.
 
 Another point of inconsistency I'd like to bring up is the fact that patches
 for Neutron in devstack which propose running an Open Source controller
 are being flagged with -1. However, the Ryu plugin is already doing this. I
 suspect it was grandfathered in, but it sets an inconsistent precedent here.
 I propose we either remove Ryu from devstack, or continue to let other
 Open Source SDN controllers run inside devstack. Please see Patch 6
 of the review below for the minimal work it took me to add OpenDaylight
 there.
 
 Feedback appreciated here, I've been sitting on this devstack patch with
 minimal changes for a month. I'm also working with the Linux Foundation
 for the 3rd party testing requirements for ODL so the ML2 MechanismDriver
 can also go in.
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/69774/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

 Hi Mark,

 So we would not modify any existing IDs, so no migration required.


Okay, I just want to be painfully clear--we're not proposing changing any
of the current restrictions on the user-id field. We will not:
  - require it to be a uuid
  - encode it as binary instead of char
  - shrink its size below the current 64 characters

Any of those could require a migration for existing IDs depending on how
your identity driver functions.

If I'm just being Chicken Little, please reassure me once more and I'll be
quiet :-)




 Henry

 On 28 Feb 2014, at 17:38, Mark Washenberger 
 mark.washenber...@markwash.net wrote:




  On Wed, Feb 26, 2014 at 5:25 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Tue, Feb 25, 2014 at 2:38 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Tue, 2014-02-25 at 11:47 -0800, Morgan Fainberg wrote:
  For purposes of supporting multiple backends for Identity (multiple
  LDAP, mix of LDAP and SQL, federation, etc) Keystone is planning to
  increase the maximum size of the USER_ID field from an upper limit of
  64 to an upper limit of 255. This change would not impact any
  currently assigned USER_IDs (they would remain in the old simple UUID
  format), however, new USER_IDs would be increased to include the IDP
  identifier (e.g. USER_ID@@IDP_IDENTIFIER).

 -1

 I think a better solution would be to have a simple translation table
 only in Keystone that would store this longer identifier (for folks
 using federation and/or LDAP) along with the Keystone user UUID that is
 used in foreign key relations and other mapping tables through Keystone
 and other projects.


 Morgan and I talked this suggestion through last night and agreed it's
 probably the best approach, and has the benefit of zero impact on other
 services, which is something we're obviously trying to avoid. I imagine it
 could be as simple as a user_id to domain_id lookup table. All we really
 care about is given a globally unique user ID, which identity backend is
 the user from?

 On the downside, it would likely become bloated with unused ephemeral
 user IDs, so we'll need enough metadata about the mapping to implement a
 purging behavior down the line.


 Is this approach planning on reusing the existing user-id field, then? It
 seems like this creates a migration problem for folks who are currently
 using user-ids that are generated by their identity backends.





 The only identifiers that would ever be communicated to any non-Keystone
 OpenStack endpoint would be the UUID user and tenant IDs.

  There is the obvious concern that projects are utilizing (and storing)
  the user_id in a field that cannot accommodate the increased upper
  limit. Before this change is merged in, it is important for the
  Keystone team to understand if there are any places that would be
  overflowed by the increased size.

 I would go so far as to say the user_id and tenant_id fields should be
 *reduced* in size to a fixed 16-char BINARY or 32-char CHAR field for
 performance reasons. Lengthening commonly-used and frequently-joined
 identifier fields is not a good option, IMO.

 Best,
 -jay

  The review that would implement this change in size
  is https://review.openstack.org/#/c/74214 and is actively being worked
  on/reviewed.
 
 
  I have already spoken with the Nova team, and a single instance has
  been identified that would require a migration (that will have a fix
  proposed for the I3 timeline).
 
 
  If there are any other known locations that would have issues with an
  increased USER_ID size, or any concerns with this change to USER_ID
  format, please respond so that the issues/concerns can be addressed.
   Again, the plan is not to change current USER_IDs but that new ones
  could be up to 255 characters in length.
 
 
  Cheers,
  Morgan Fainberg
  --
  Morgan Fainberg
  Principal Software Engineer
  Core Developer, Keystone
  m...@metacloud.com
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

Re: [openstack-dev] sqlalchemy-migrate release impending

2014-02-28 Thread Matt Riedemann



On 2/26/2014 11:34 AM, Sean Dague wrote:

On 02/26/2014 11:24 AM, David Ripton wrote:

I'd like to release a new version of sqlalchemy-migrate in the next
couple of days.  The only major new feature is DB2 support.  If anyone
thinks this is a bad time, please let me know.



So it would be nice if someone could actually work through the 0.9 sqla
support, because I think it's basically just a change in quoting
behavior that's left (mostly where quoting gets called) -
https://review.openstack.org/#/c/66156/

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Looks like the 0.8.3 tag is up so it's just a matter of time before it 
shows up on pypi?


https://review.openstack.org/gitweb?p=stackforge/sqlalchemy-migrate.git;a=commit;h=21fcdad0f485437d010e5743626c63ab3acdaec5

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-28 Thread Tim Hinrichs
Hi Jay,

I think the Solver Scheduler is a better fit for your needs than Congress 
because you know what kinds of constraints and enforcement you want.  I'm not 
sure this topic deserves an entire design session--maybe just talking a bit at 
the summit would suffice (I *think* I'll be attending).

Tim

- Original Message -
| From: Jay Lau jay.lau@gmail.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Wednesday, February 26, 2014 6:30:54 PM
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage
| compute/storage resource
| 
| 
| 
| 
| 
| 
| Hi Tim,
| 
| I'm not sure if we can put resource monitor and adjust to
| solver-scheduler (Gantt), but I have proposed this to Gantt design
| [1], you can refer to [1] and search jay-lau-513.
| 
| IMHO, Congress does monitoring and also take actions, but the actions
| seems mainly for adjusting single VM network or storage. It did not
| consider migrating VM according to hypervisor load.
| 
| Not sure if this topic deserved to be a design session for the coming
| summit, but I will try to propose.
| 
| 
| 
| 
| [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
| 
| 
| 
| Thanks,
| 
| 
| Jay
| 
| 
| 
| 2014-02-27 1:48 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  :
| 
| 
| Hi Jay and Sylvain,
| 
| The solver-scheduler sounds like a good fit to me as well. It clearly
| provisions resources in accordance with policy. Does it monitor
| those resources and adjust them if the system falls out of
| compliance with the policy?
| 
| I mentioned Congress for two reasons. (i) It does monitoring. (ii)
| There was mention of compute, networking, and storage, and I
| couldn't tell if the idea was for policy that spans OS components or
| not. Congress was designed for policies spanning OS components.
| 
| 
| Tim
| 
| - Original Message -
| 
| | From: Jay Lau  jay.lau@gmail.com 
| | To: OpenStack Development Mailing List (not for usage questions)
| |  openstack-dev@lists.openstack.org 
| 
| 
| | Sent: Tuesday, February 25, 2014 10:13:14 PM
| | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
| | for OpenStack run time policy to manage
| | compute/storage resource
| | 
| | 
| | 
| | 
| | 
| | Thanks Sylvain and Tim for the great sharing.
| | 
| | @Tim, I also go through with Congress and have the same feeling
| | with
| | Sylvai, it is likely that Congress is doing something simliar with
| | Gantt providing a holistic way for deploying. What I want to do is
| | to provide some functions which is very similar with VMWare DRS
| | that
| | can do some adaptive scheduling automatically.
| | 
| | @Sylvain, can you please show more detail for what Pets vs.
| | Cattles
| | analogy means?
| | 
| | 
| | 
| | 
| | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza  sylvain.ba...@gmail.com 
| | :
| | 
| | 
| | 
| | Hi Tim,
| | 
| | 
| | As per I'm reading your design document, it sounds more likely
| | related to something like Solver Scheduler subteam is trying to
| | focus on, ie. intelligent agnostic resources placement on an
| | holistic way [1]
| | IIRC, Jay is more likely talking about adaptive scheduling
| | decisions
| | based on feedback with potential counter-measures that can be done
| | for decreasing load and preserving QoS of nodes.
| | 
| | 
| | That said, maybe I'm wrong ?
| | 
| | 
| | [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
| | 
| | 
| | 
| | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs  thinri...@vmware.com  :
| | 
| | 
| | 
| | 
| | Hi Jay,
| | 
| | The Congress project aims to handle something similar to your use
| | cases. I just sent a note to the ML with a Congress status update
| | with the tag [Congress]. It includes links to our design docs. Let
| | me know if you have trouble finding it or want to follow up.
| | 
| | Tim
| | 
| | 
| | 
| | - Original Message -
| | | From: Sylvain Bauza  sylvain.ba...@gmail.com 
| | | To: OpenStack Development Mailing List (not for usage
| | | questions)
| | |  openstack-dev@lists.openstack.org 
| | | Sent: Tuesday, February 25, 2014 3:58:07 PM
| | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A
| | | proposal
| | | for OpenStack run time policy to manage
| | | compute/storage resource
| | | 
| | | 
| | | 
| | | Hi Jay,
| | | 
| | | 
| | | Currently, the Nova scheduler only acts upon user request (either
| | | live migration or boot an instance). IMHO, that's something Gantt
| | | should scope later on (or at least there could be some space
| | | within
| | | the Scheduler) so that Scheduler would be responsible for
| | | managing
| | | resources on a dynamic way.
| | | 
| | | 
| | | I'm thinking of the Pets vs. Cattles analogy, and I definitely
| | | think
| | | that Compute resources could be treated like Pets, provided the
| | | Scheduler does a move.
| | | 
| | | 
| | | -Sylvain
| | | 
| | | 
| | | 
| | | 2014-02-26 0:40 

Re: [openstack-dev] [keystone] how to enable logging for unit tests

2014-02-28 Thread Clark Boylan
On Fri, Feb 28, 2014 at 1:30 PM, John Dennis jden...@redhat.com wrote:
 I'd like to enable debug logging while running some specific unit tests
 and I've not been able to find the right combination of levers to pull
 to get logging output on the console.

 In keystone/etc/keystone.conf.sample (which is the config file loaded for
 the unit tests) I've set debug to True and verified CONF.debug is true
 when the test executes. I've also tried setting log_file and log_dir to
 see if I could get logging written to a log file instead, but no luck.

 I have noticed that when a test fails I'll see all the debug logging emitted
 in between

 {{{
 }}}

 which I think is something testtools is doing.

 This leads me to the theory testtools is somehow consuming the logging
 output. Is that correct?

 How do I get the debug logging to show up on the console during a test run?

 --
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I was going to respond to this and say it is easy: set
OS_LOG_CAPTURE=False in your test env and rerun the tests. But it
doesn't look like keystone has made log capturing configurable [0]. I
thought we had set this variable properly in places but I have
apparently misremembered. You could add an OS_LOG_CAPTURE flag, set
it in .testr.conf, and see if it helps. The other thing you can do is
refer to the subunit log file in .testrepository/$TEST_ID after tests
have run.

[0] 
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/core.py#n338
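
For illustration, a minimal sketch of how such a flag is often wired into a
testtools base class (illustrative only, not Keystone's actual tests/core.py):

import os

import fixtures
import testtools


class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        # Only swallow log output into the test details when asked to, so
        # that OS_LOG_CAPTURE=False (in .testr.conf or the shell) lets
        # logging reach the console again.
        if os.environ.get('OS_LOG_CAPTURE', 'True').lower() in ('true', '1'):
            self.useFixture(
                fixtures.FakeLogger(format='%(levelname)s %(message)s'))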

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] New Websockify Release

2014-02-28 Thread Solly Ross
Whoops! In case anyone was wondering, I wasn't telling myself that I had a good 
idea :-P.  I just clicked reply to the wrong email.  The correct email to which 
this was a reply is below.

 Good idea -- here's the blueprint: 
 https://blueprints.launchpad.net/nova/+spec/update-to-latest-websockify

- Original Message -
From: Abhishek Kekane abhishek.kek...@nttdata.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, February 28, 2014 1:00:59 AM
Subject: Re: [openstack-dev] [nova] New Websockify Release

Hi,

Are you going to file a new blueprint or log a bug in Launchpad to track this 
change?

Thanks,

Abhishek

-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com] 
Sent: Thursday, February 27, 2014 4:17 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] New Websockify Release

On 02/27/2014 11:22 AM, Thierry Carrez wrote:
 Solly Ross wrote:
 We (the websockify/noVNC team) have released a new version of websockify 
 (0.6.0).  It contains several fixes and features relating to OpenStack (a 
 couple of bugs were fixed, and native support for the `logging` module was 
 added).  Unfortunately, to integrate it into OpenStack, a patch is needed to 
 the websocketproxy code in Nova 
 (https://gist.github.com/DirectXMan12/9233369) due to a refactoring of the 
 websockify API.  My concern is that the various distos most likely have not 
 had time to update the package in their package repositories.  What is the 
 appropriate timescale for updating Nova to work with the new version?
 
 Thanks for reaching out !
 
 I'll let the Nova devs speak, but in that specific case it might make 
 sense to patch the Nova code to support both API versions. That would 
 facilitate the migration to 0.6.0-style code.
 
 At some point in the future (when 0.6.0 is everywhere) we could bump 
 the dep to =0.6.0 and remove the compatibility code.
 

Yes - fully agreed with Thierry.

I will try to put up a patch for this, but if someone gets there before me - 
even better :).
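
For anyone picking it up, a rough sketch of the kind of dual-version shim being
discussed (illustrative only, not the actual Nova patch; it assumes the 0.6.0
refactor moved the per-connection logic into a ProxyRequestHandler class):

import websockify

if hasattr(websockify, 'ProxyRequestHandler'):
    # websockify >= 0.6.0: per-connection logic lives in a request handler.
    class NovaProxyRequestHandler(websockify.ProxyRequestHandler):
        def new_websocket_client(self):
            # ... token validation etc. would go here ...
            super(NovaProxyRequestHandler, self).new_websocket_client()

    class NovaWebSocketProxy(websockify.WebSocketProxy):
        pass
else:
    # websockify < 0.6.0: everything hangs off WebSocketProxy itself.
    class NovaWebSocketProxy(websockify.WebSocketProxy):
        def new_client(self):
            # ... token validation etc. would go here ...
            super(NovaWebSocketProxy, self).new_client()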

Thanks,

ndipanov



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Jay Pipes
On Fri, 2014-02-28 at 13:10 -0800, Mark Washenberger wrote:
 
 On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash
 hen...@linux.vnet.ibm.com wrote:
 Hi Mark,
 
 
 So we would not modify any existing IDs, so no migration
 required.
 
 
 Okay, I just want to be painfully clear--we're not proposing changing
 any of the current restrictions on the user-id field. We will not:
   - require it to be a uuid
   - encode it as binary instead of char
   - shrink its size below the current 64 characters

The first would be required for the real solution. The second and third
are performance improvements.

 Any of those could require a migration for existing IDs depending on
 how your identity driver functions.

Personally, I think to fix this issue permanently and properly,
migrations for database schemas of Glance and Nova would indeed need to
accompany a proposed patch that restricts the Keystone external user ID
to only a UUID value.

I entirely disagree with allowing non-UUID values for the user ID value
that is exposed outside of Keystone. All other solutions (including the
proposals to continue using the user_id fields with non-UUID values) are
just hacks IMO.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-02-28 Thread Jay Lau
Hi Yathiraj and Tim,

Really appreciate your comments here ;-)

I will prepare some detailed slides or documents before the summit and we can
have a review then. It would be great if OpenStack could provide DRS
features.

Thanks,

Jay



2014-03-01 6:00 GMT+08:00 Tim Hinrichs thinri...@vmware.com:

 Hi Jay,

 I think the Solver Scheduler is a better fit for your needs than Congress
 because you know what kinds of constraints and enforcement you want.  I'm
 not sure this topic deserves an entire design session--maybe just talking a
 bit at the summit would suffice (I *think* I'll be attending).

 Tim

 - Original Message -
 | From: Jay Lau jay.lau@gmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, February 26, 2014 6:30:54 PM
 | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
 OpenStack run time policy to manage
 | compute/storage resource
 |
 |
 |
 |
 |
 |
 | Hi Tim,
 |
 | I'm not sure if we can put resource monitor and adjust to
 | solver-scheduler (Gantt), but I have proposed this to Gantt design
 | [1], you can refer to [1] and search jay-lau-513.
 |
 | IMHO, Congress does monitoring and also take actions, but the actions
 | seems mainly for adjusting single VM network or storage. It did not
 | consider migrating VM according to hypervisor load.
 |
 | Not sure if this topic deserved to be a design session for the coming
 | summit, but I will try to propose.
 |
 |
 |
 |
 | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
 |
 |
 |
 | Thanks,
 |
 |
 | Jay
 |
 |
 |
 | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs  thinri...@vmware.com  :
 |
 |
 | Hi Jay and Sylvain,
 |
 | The solver-scheduler sounds like a good fit to me as well. It clearly
 | provisions resources in accordance with policy. Does it monitor
 | those resources and adjust them if the system falls out of
 | compliance with the policy?
 |
 | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
 | There was mention of compute, networking, and storage, and I
 | couldn't tell if the idea was for policy that spans OS components or
 | not. Congress was designed for policies spanning OS components.
 |
 |
 | Tim
 |
 | - Original Message -
 |
 | | From: Jay Lau  jay.lau@gmail.com 
 | | To: OpenStack Development Mailing List (not for usage questions)
 | |  openstack-dev@lists.openstack.org 
 |
 |
 | | Sent: Tuesday, February 25, 2014 10:13:14 PM
 | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
 | | for OpenStack run time policy to manage
 | | compute/storage resource
 | |
 | |
 | |
 | |
 | |
 | | Thanks Sylvain and Tim for the great sharing.
 | |
 | | @Tim, I also go through with Congress and have the same feeling
 | | with
 | | Sylvai, it is likely that Congress is doing something simliar with
 | | Gantt providing a holistic way for deploying. What I want to do is
 | | to provide some functions which is very similar with VMWare DRS
 | | that
 | | can do some adaptive scheduling automatically.
 | |
 | | @Sylvain, can you please show more detail for what Pets vs.
 | | Cattles
 | | analogy means?
 | |
 | |
 | |
 | |
 | | 2014-02-26 9:11 GMT+08:00 Sylvain Bauza  sylvain.ba...@gmail.com 
 | | :
 | |
 | |
 | |
 | | Hi Tim,
 | |
 | |
 | | As per I'm reading your design document, it sounds more likely
 | | related to something like Solver Scheduler subteam is trying to
 | | focus on, ie. intelligent agnostic resources placement on an
 | | holistic way [1]
 | | IIRC, Jay is more likely talking about adaptive scheduling
 | | decisions
 | | based on feedback with potential counter-measures that can be done
 | | for decreasing load and preserving QoS of nodes.
 | |
 | |
 | | That said, maybe I'm wrong ?
 | |
 | |
 | | [1] https://blueprints.launchpad.net/nova/+spec/solver-scheduler
 | |
 | |
 | |
 | | 2014-02-26 1:09 GMT+01:00 Tim Hinrichs  thinri...@vmware.com  :
 | |
 | |
 | |
 | |
 | | Hi Jay,
 | |
 | | The Congress project aims to handle something similar to your use
 | | cases. I just sent a note to the ML with a Congress status update
 | | with the tag [Congress]. It includes links to our design docs. Let
 | | me know if you have trouble finding it or want to follow up.
 | |
 | | Tim
 | |
 | |
 | |
 | | - Original Message -
 | | | From: Sylvain Bauza  sylvain.ba...@gmail.com 
 | | | To: OpenStack Development Mailing List (not for usage
 | | | questions)
 | | |  openstack-dev@lists.openstack.org 
 | | | Sent: Tuesday, February 25, 2014 3:58:07 PM
 | | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A
 | | | proposal
 | | | for OpenStack run time policy to manage
 | | | compute/storage resource
 | | |
 | | |
 | | |
 | | | Hi Jay,
 | | |
 | | |
 | | | Currently, the Nova scheduler only acts upon user request (either
 | | | live migration or boot an instance). IMHO, that's something Gantt
 | | | should scope later on (or at least there could be some space
 | | | within
 | | 

Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Mark Washenberger
On Fri, Feb 28, 2014 at 2:26 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-02-28 at 13:10 -0800, Mark Washenberger wrote:
 
  On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash
  hen...@linux.vnet.ibm.com wrote:
  Hi Mark,
 
 
  So we would not modify any existing IDs, so no migration
  required.
 
 
  Okay, I just want to be painfully clear--we're not proposing changing
  any of the current restrictions on the user-id field. We will not:
- require it to be a uuid
- encode it as binary instead of char
- shrink its size below the current 64 characters

 The first would be required for the real solution. The second and third
 are performance improvements.

  Any of those could require a migration for existing IDs depending on
  how your identity driver functions.

 Personally, I think to fix this issue permanently and properly,
 migrations for database schemas of Glance and Nova would indeed need to
 accompany a proposed patch that restricts the Keystone external user ID
 to only a UUID value.

 I entirely disagree with allowing non-UUID values for the user ID value
 that is exposed outside of Keystone. All other solutions (including the
 proposals to continue using the user_id fields with non-UUID values) are
 just hacks IMO.


I believe we have some agreement here. Other openstack services should be
able to use a strongly typed identifier for users. I just think if we want
to go that route, we probably need to create a new field to act as the
proper user uuid, rather than repurposing the existing field. It sounds
like many existing LDAP deployments would break if we repurpose the
existing field.


 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Live migration, auth token lifetimes.

2014-02-28 Thread Brant Knudson
On Fri, Feb 28, 2014 at 8:13 AM, j...@ioctl.org wrote:


 The second would be to have a way for the nova process to extend proxy
 credentials until such point as they are required by the post- stages.
 I'll elide the potential security concerns over putting such an API call
 into keystone, but it should probably be considered.


This facility is already implemented in Keystone, it's called trusts[1].

[1]
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-28 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-02-28 14:26:26 -0800:
 On Fri, 2014-02-28 at 13:10 -0800, Mark Washenberger wrote:
  
  On Fri, Feb 28, 2014 at 10:39 AM, Henry Nash
  hen...@linux.vnet.ibm.com wrote:
  Hi Mark,
  
  
  So we would not modify any existing IDs, so no migration
  required.
  
  
  Okay, I just want to be painfully clear--we're not proposing changing
  any of the current restrictions on the user-id field. We will not:
- require it to be a uuid
- encode it as binary instead of char
- shrink its size below the current 64 characters
 
 The first would be required for the real solution. The second and third
 are performance improvements.
 
  Any of those could require a migration for existing IDs depending on
  how your identity driver functions.
 
 Personally, I think to fix this issue permanently and properly,
 migrations for database schemas of Glance and Nova would indeed need to
 accompany a proposed patch that restricts the Keystone external user ID
 to only a UUID value.
 
 I entirely disagree with allowing non-UUID values for the user ID value
 that is exposed outside of Keystone. All other solutions (including the
 proposals to continue using the user_id fields with non-UUID values) are
 just hacks IMO.

+1. A Keystone record belongs to Keystone, and it should have a Keystone
ID. External records that are linked should be linked separately.

It may not be obvious to everyone, but MySQL uses B-trees for indexes.
B-trees cannot have variable-length keys. So varchar(64) means 64-byte
index keys. If you aren't careful and let that column be stored as utf-8,
this actually means *192* byte index keys, because MySQL uses 3-byte
utf-8 and thus a 64 character column could have 192 bytes. This does
not scale well as you are doing index scans and range lookups, not to
mention just generally raising memory and I/O pressure on the server.

What Jay is suggesting is that we actually be opinionated and store
Keystone users with 16-byte binary UUID's, and only ever use the UUID (in
the 32-byte text notation where appropriate) when returning a keystone ID.

Then only the initial authentication step where the user presents
external identification requires access to anything larger, allowing
all other Keystone operations to perform much better and keeping the
keystone database footprint smaller.
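
As a rough illustration of the storage difference being described (generic
SQLAlchemy and stdlib uuid, not a proposed Keystone schema change):

import uuid

from sqlalchemy import BINARY, Column, String

# Today: up to 64 characters, which can mean 192-byte index keys under utf-8.
user_id_char = Column('user_id', String(64), primary_key=True)

# The opinionated alternative: a fixed 16-byte binary UUID in the index,
# converted to/from the 32-character hex form only at the API boundary.
user_id_bin = Column('user_id', BINARY(16), primary_key=True)

def to_db(user_id_str):
    return uuid.UUID(user_id_str).bytes        # 16 bytes stored and indexed

def from_db(user_id_bytes):
    return uuid.UUID(bytes=user_id_bytes).hex  # 32-char form returned to callers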

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Use spice-html5proxy to connect to spice tls port

2014-02-28 Thread Meghal Gosalia
Hi folks,

Has there been any discussion in the community
for using nova-spicehtml5proxy to connect to spice tls port ?

For example, if a VM is booted with qemu and spice_tls is enabled in qemu.conf,
TLS encryption is enabled on the SPICE server for that VM.

nova-spicehtml5proxy is based on websockify, which supports SSL-wrapped sockets.
It would be great if nova-spicehtml5proxy could decide, based on a config param
in nova.conf, whether to use plain sockets or SSL-wrapped sockets to connect to
the SPICE server.
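
A minimal sketch of the idea (plain Python standard library; the spice_tls flag
below is hypothetical and is not an existing nova.conf option):

import socket
import ssl

def connect_to_spice_server(host, port, spice_tls=False, ca_certs=None):
    sock = socket.create_connection((host, port))
    if spice_tls:
        # Wrap the proxy-to-compute leg in TLS when the (hypothetical)
        # spice_tls option is set, mirroring spice_tls in qemu.conf.
        sock = ssl.wrap_socket(
            sock, ca_certs=ca_certs,
            cert_reqs=ssl.CERT_REQUIRED if ca_certs else ssl.CERT_NONE)
    return sock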

This would ensure SSL encryption from spice proxy to spice server.
Are there any concerns with this approach ?

Thanks,
Meghal



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][infra] Sync notifier module from oslo-incubator hits django DatabaseError

2014-02-28 Thread jackychen

Hi,
I have committed a patch to sync the notifier module under horizon with 
oslo-incubator, but I hit the gate-horizon-python error; all the failures 
are DatabaseError.

Code Review Link: https://review.openstack.org/#/c/76439/

The specific error: django.db.utils.DatabaseError: DatabaseWrapper 
objects created in a thread can only be used in that same thread. The 
object with alias 'default' was created in thread id 140664492599040 and 
this is thread id 56616752.


So I googled it and found that there are two ways to fix this:

1. https://code.djangoproject.com/ticket/17998
import eventlet
eventlet.monkey_patch()

2.https://bitbucket.org/akoha/django-digest/issue/10/conflict-with-global-databasewrapper
replace
cursor = self.db.connection.cursor()
with
cursor = db.connections[DEFAULT_DB_ALIAS].cursor()
everywhere it appears in storage.py, and add to the imports:
from django import db
from django.db.utils import DEFAULT_DB_ALIAS
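
In other words, a small sketch of that second workaround (as described in the
linked issue; this is not actual Horizon code):

from django import db
from django.db.utils import DEFAULT_DB_ALIAS

def _get_cursor():
    # Before: cursor = self.db.connection.cursor()
    # After: always go through the per-thread connection handler, so the
    # DatabaseWrapper is created in the same thread that uses it.
    return db.connections[DEFAULT_DB_ALIAS].cursor()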

However, neither of these two solutions can really be handled within my code 
commit.

So, do you have any point of view on how to make this work?
Thanks for all your help.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-02-28 Thread James Slagle
On Wed, Jan 22, 2014 at 6:46 PM, Thierry Carrez thie...@openstack.org
wrote:
 James Slagle wrote:
 I read through that wiki page. I did have a couple of questions:

 Who usually runs through the steps there? You? or a project member?

 Me for integrated projects (and most incubated ones). A project member
 for everything else.

 When repo_tarball_diff.sh is run, are there any acceptable missing
 files? I'm seeing an AUTHORS and ChangeLog file showing up in the
 output from our repos, those are automatically generated, so I assume
 those are ok. There are also some egg_info files showing up, which I
 also think can be safely ignored. (I submitted a patch that updates
 the grep command used in the script:
 https://review.openstack.org/#/c/68471/ )

 Yes, there is a number of normal things appearing there, like the
 autogenerated AUTHORS, Changelog, ignored files and egg_info stuff. The
 goal of the script is to spot any unusual thing.


Hi Thierry,

I'd like to ask that the following repositories for TripleO be included in
next week's cutting of icehouse-3:

http://git.openstack.org/openstack/tripleo-incubator
http://git.openstack.org/openstack/tripleo-image-elements
http://git.openstack.org/openstack/tripleo-heat-templates
http://git.openstack.org/openstack/diskimage-builder
http://git.openstack.org/openstack/os-collect-config
http://git.openstack.org/openstack/os-refresh-config
http://git.openstack.org/openstack/os-apply-config

Are you willing to run through the steps on the How_To_Release wiki for
these repos, or should I do it next week? Just let me know how or what to
coordinate. Thanks.

-- 
-- James Slagle
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to configure DevStack to use Ceilometer?

2014-02-28 Thread Mike Spreitzer
So far I have found three different sources, and they all say different 
things.

http://techs.enovance.com/5991/autoscaling-with-heat-and-ceilometer
http://devstack.org/lib/ceilometer.html
http://docs.openstack.org/developer/ceilometer/install/development.html

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Some changes on config drive

2014-02-28 Thread Jiang, Yunhong
Hi, Michael and all,

I created some changes to config_drive, and hope to get some feedback. 
The patches are at 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:config_drive_cleanup,n,z
 

The basic ideas of the changes are:
1) Instead of using a host-based config option to decide config_drive and 
config_drive_format, fetch such information from image properties. According to 
Michael, it is the image that decides whether it needs a config drive, and I 
think it is also the image that decides which config drive format is supported 
(for example, cloud-init version 1.0 does not support the iso9660 format, see 
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#version-1).

2) I noticed some virt drivers like VMware/Hyper-V support only the iso9660 
format, thus we select the host based on the image property; for example, if a 
host can't support vfat, don't try to schedule a server that requires 'vfat' to 
that host.

The implementation detais are:

1) Image can provide two properties, 'config_drive' and 
'config_drive_format'.

2) There is a cloud wise force_config_drive option (in the api service) 
to decide if the config_drive will be forced applied.

3) There is a host specific config_drive_format to set the default 
config_drive format if not specified in the image property.

4) In the image property filter, we will select the host that support 
the config_drive_format in image property
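
Very roughly, the scheduler-side check could look like this (an illustrative 
sketch only, not the actual patch; the attribute names are placeholders):

class ConfigDriveFormatFilter(object):
    def host_passes(self, host_state, filter_properties):
        spec = filter_properties.get('request_spec', {})
        image_props = spec.get('image', {}).get('properties', {})
        wanted = image_props.get('config_drive_format')
        if not wanted:
            return True  # the image expresses no preference
        # Hypothetical capability advertised by the virt driver, e.g.
        # {'iso9660'} for VMware/Hyper-V, {'iso9660', 'vfat'} for libvirt.
        supported = getattr(host_state, 'supported_config_drive_formats', set())
        return wanted in supported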

Any feedback is welcome to these changes.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to configure DevStack to use Ceilometer?

2014-02-28 Thread Swapnil Kulkarni
I am able to configure devstack with ceilometer adding following to localrc

enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector
ceilometer-api
enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator

Best Regards,
Swapnil Kulkarni
irc : coolsvap



On Sat, Mar 1, 2014 at 10:48 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 So far I have found three different sources, and they all say different
 things.

 http://techs.enovance.com/5991/autoscaling-with-heat-and-ceilometer
 http://devstack.org/lib/ceilometer.html
 http://docs.openstack.org/developer/ceilometer/install/development.html

 Thanks,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6]

2014-02-28 Thread Shixiong Shang
Hi, guys:

What should I do to fix these “tempest” failures? Any suggestions or pointers 
are highly appreciated!

Thanks!

Shixiong


Jenkins 12:36 AM
Patch Set 13: Doesn't seem to work
Build failed. For information on how to proceed, see 
https://wiki.openstack.org/wiki/GerritJenkinsGit#Test_Failures
gate-neutron-pep8 SUCCESS in 1m 59s
gate-neutron-docs SUCCESS in 2m 27s
gate-neutron-python26 SUCCESS in 19m 13s
gate-neutron-python27 SUCCESS in 13m 04s
check-tempest-dsvm-neutron FAILURE in 10m 46s
check-tempest-dsvm-neutron-full FAILURE in 11m 20s (non-voting)
check-tempest-dsvm-neutron-pg FAILURE in 12m 58s
check-tempest-dsvm-neutron-isolated FAILURE in 9m 27s
check-tempest-dsvm-neutron-pg-isolated FAILURE in 10m 01s
gate-tempest-dsvm-neutron-large-ops FAILURE in 25m 45s
check-grenade-dsvm-neutron FAILURE in 25m 49s (non-voting)
check-devstack-dsvm-neutron FAILURE in 11m 38s (non-voting)




Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] inconsistent naming? node vs host vs vs hypervisor_hostname vs OS-EXT-SRV-ATTR:host

2014-02-28 Thread Jiang, Yunhong

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Friday, February 28, 2014 10:07 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] inconsistent naming? node vs host vs vs
 hypervisor_hostname vs OS-EXT-SRV-ATTR:host
 
 On 02/28/2014 11:38 AM, Jiang, Yunhong wrote:
  One reason of the confusion is, in some virt driver (maybe xenapi or
  vmwareapi), one compute service manages multiple node.
 
 Okay, so in the scenario above, is the nova-compute service running on a

I think the nova compute service runs on a host, as you can see from 
compute/manager.py and manager.py. 

 node or a host?  (And if it's a host, then what is the compute
 node?)

Check the update_available_resource() at compute/manager.py for the node idea.
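
Roughly (paraphrasing from memory, not an exact copy of the Nova source), the 
loop looks something like the sketch below, which is where the one-service, 
many-nodes split shows up:

def update_available_resource(self, context):
    nodenames = set(self.driver.get_available_nodes())
    for nodename in nodenames:
        rt = self._get_resource_tracker(nodename)  # one tracker per node
        rt.update_available_resource(context)      # one service, many nodes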

 
 What is the distinction between OS-EXT-SRV-ATTR:host and
 OS-EXT-SRV-ATTR:hypervisor_hostname in the above case?

According to _extend_server() at 
./api/openstack/compute/contrib/extended_server_attributes.py, the 
OS-EXT-SRV-ATTR:hypervisor_hostname is the node and the  
OS-EXT-SRV-ATTR:host is the host.

I agree this is a bit confusing, especially since the documentation is not 
clear. I'd rather call OS-EXT-SRV-ATTR:hypervisor_hostname something like 
OS-EXT-SRV-ATTR:hypervisor_nodename, which would make more sense and be 
clearer. Per my understanding of xenapi, there is a hypervisor on each compute 
node, and XenAPI (or whatever name that software layer has) manages multiple 
(or, in the extreme case, one) nodes; that XenAPI software layer is what 
interacts with the nova service and looks like a host from nova's point of view.

Dan had some interesting discussion at the Nova meetup on this and on cells 
(so-called cloud NUMA, IIRC).

Thanks
--jyh

 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Some changes on config drive

2014-02-28 Thread Jiang, Yunhong
Sorry forgot nova prefix in subject.

--jyh

 -Original Message-
 From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
 Sent: Friday, February 28, 2014 9:32 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Some changes on config drive
 
 Hi, Michael and all,
 
   I created some changes to config_drive, and hope to get some
 feedback. The patches are at
 https://review.openstack.org/#/q/status:open+project:openstack/nova+b
 ranch:master+topic:config_drive_cleanup,n,z
 
   The basically ideas of the changes are:
   1) Instead of using host based config option to decide config_drive
 and config_drive_format, fetch such information from image property.
 Accordingly to Michael, its image that decide if it need config drive, and I
 think it's image that decide what's the config drive format supported. (like
 cloudinit verion 1.0 does not support iso9660 format.
 (http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#versi
 on-1)
 
   2) I noticed some virt drivers like VMWare/hyperv support only
 iso9660 format, , thus select the host based on image property, for
 example, if a host can't support vfat, don't try to schedule a server
 requires 'vfat' to that host.
 
   The implementation detais are:
 
   1) Image can provide two properties, 'config_drive' and
 'config_drive_format'.
 
   2) There is a cloud wise force_config_drive option (in the api service)
 to decide if the config_drive will be forced applied.
 
   3) There is a host specific config_drive_format to set the default
 config_drive format if not specified in the image property.
 
   4) In the image property filter, we will select the host that support 
 the
 config_drive_format in image property
 
   Any feedback is welcome to these changes.
 
 Thanks
 --jyh
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Testing functionality of IPv6 modes using Horizon

2014-02-28 Thread Shixiong Shang
Hi, Abishek:

I tried your code and shot you an email with some questions… Would you please 
let me know the dependency between your code and Sean's?

W.r.t. your second question, I cannot recall off the top of my head (with an 
exhausted brain) right now which event a subnet update will trigger. I need to 
read the code tomorrow to confirm. If the event can in turn be consumed by the 
dhcp_agent and invoke a dnsmasq reload, then we should be good to go. Otherwise, 
I suggest we postpone it to the next major release. For Icehouse, users would 
have to delete the subnet and recreate it if they want to make any change. We 
can list it as a caveat/limitation. 

Thanks!

Shixiong




On Feb 28, 2014, at 10:55 AM, Abishek Subramanian (absubram) 
absub...@cisco.com wrote:

 Hi,
 
 I just wanted to find out if anyone had been able to test using Horizon?
 Was everything ok?
 
 Additionally wanted to confirm - the two modes can be updated too yes
 when using neutron subnet-update?
 
 
 Thanks!
 
 On 2/18/14 12:58 PM, Abishek Subramanian (absubram) absub...@cisco.com
 wrote:
 
 Hi shshang, all,
 
 I have some preliminary Horizon diffs available and if anyone
 would be kind enough to patch them and try to test the
 functionality, I'd really appreciate it.
 I know I'm able to create subnets successfully with
 the two modes but if there's anything else you'd like
 to test or have any other user experience comments,
 please feel free to let me know.
 
 The diffs are at -  https://review.openstack.org/74453
 
 Thanks!!
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to configure DevStack to use Ceilometer?

2014-02-28 Thread Mike Spreitzer
Swapnil Kulkarni swapnilkulkarni2...@gmail.com wrote on 03/01/2014 
12:36:49 AM:


 I am able to configure devstack with ceilometer adding following to 
localrc
 
 enable_service ceilometer-acompute ceilometer-acentral ceilometer-
 collector ceilometer-api
 enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator

No ceilometer-anotification service?  No settings for notification 
drivers?  From where did you get your configuration information?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6]

2014-02-28 Thread Henry Gessau
On Sat, Mar 01, at 0:46 am, Shixiong Shang  wrote:

 What should I do to fix these “tempest” failures? Any suggestions or
 pointers are highly appreciated!

Your patch depends on review 52983, which needs to rebase and update its
migration script with the latest down revision. Then you need to update your
dependency.

 
 Thanks!
 
 Shixiong
 
 
 Jenkins   12:36 AM
 
 Patch Set 13: Doesn't seem to work
 Build failed. For information on how to proceed,
 see https://wiki.openstack.org/wiki/GerritJenkinsGit#Test_Failures
 
   * gate-neutron-pep8 SUCCESS in 1m 59s
     http://logs.openstack.org/49/70649/13/check/gate-neutron-pep8/33e21e1
   * gate-neutron-docs SUCCESS in 2m 27s
     http://docs-draft.openstack.org/49/70649/13/check/gate-neutron-docs/e72ba83/doc/build/html/
   * gate-neutron-python26 SUCCESS in 19m 13s
     http://logs.openstack.org/49/70649/13/check/gate-neutron-python26/4e90064
   * gate-neutron-python27 SUCCESS in 13m 04s
     http://logs.openstack.org/49/70649/13/check/gate-neutron-python27/e234487
   * check-tempest-dsvm-neutron FAILURE in 10m 46s
     http://logs.openstack.org/49/70649/13/check/check-tempest-dsvm-neutron/b71b75b
   * check-tempest-dsvm-neutron-full FAILURE in 11m 20s (non-voting)
     http://logs.openstack.org/49/70649/13/check/check-tempest-dsvm-neutron-full/2e09b13
   * check-tempest-dsvm-neutron-pg FAILURE in 12m 58s
     http://logs.openstack.org/49/70649/13/check/check-tempest-dsvm-neutron-pg/baa5c6e
   * check-tempest-dsvm-neutron-isolated FAILURE in 9m 27s
     http://logs.openstack.org/49/70649/13/check/check-tempest-dsvm-neutron-isolated/80c7169
   * check-tempest-dsvm-neutron-pg-isolated FAILURE in 10m 01s
     http://logs.openstack.org/49/70649/13/check/check-tempest-dsvm-neutron-pg-isolated/892585b
   * gate-tempest-dsvm-neutron-large-ops FAILURE in 25m 45s
     http://logs.openstack.org/49/70649/13/check/gate-tempest-dsvm-neutron-large-ops/50a2c00
   * check-grenade-dsvm-neutron FAILURE in 25m 49s (non-voting)
     http://logs.openstack.org/49/70649/13/check/check-grenade-dsvm-neutron/a95c732
   * check-devstack-dsvm-neutron FAILURE in 11m 38s (non-voting)
     http://logs.openstack.org/49/70649/13/check/check-devstack-dsvm-neutron/3bf333a
 
 
 
 Shixiong Shang

 !--- Stay Hungry, Stay Foolish ---!
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev