Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Steve Gordon
- Original Message -
 From: henry hly henry4...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith d...@danplanet.com
 wrote:
  [joehuang] Could you please clarify the deployment mode of cells when
  used for globally distributed DCs with a single API? Do you mean that
  cinder/neutron/glance/ceilometer will be shared by all cells, that RPC
  will be used for inter-DC communication, and that only one vendor's
  OpenStack distribution is supported? How would cross-data-center
  integration and troubleshooting work over RPC if the
  driver/agent/backend (storage/network/server) come from different
  vendors?
 
  Correct, cells only applies to single-vendor distributed
  deployments. In
  both its current and future forms, it uses private APIs for
  communication between the components, and thus isn't suited for a
  multi-vendor environment.
 
  Just MHO, but building functionality into existing or new
  components to
  allow deployments from multiple vendors to appear as a single API
  endpoint isn't something I have much interest in.
 
  --Dan
 
 
 Even with the same distribution, cells still face many challenges
 across multiple DCs connected over a WAN. Considering OAM, it's easier
 to manage autonomous systems connected through an external northbound
 interface across remote sites than a single monolithic system connected
 by internal RPC messages.

The key question here is whether this is primarily the role of OpenStack or of an
external cloud management platform, and I don't profess to know the answer. What
do people use (workaround or otherwise) for these use cases *today*? Another
question I have is: one of the stated use cases is managing OpenStack clouds from
multiple vendors - is the implication here that some of these have additional,
divergent API extensions, or is the concern solely the incompatibilities inherent
in communicating using the RPC mechanisms? If there are divergent API extensions,
how is that handled from a proxying point of view if not all underlying OpenStack
clouds support them? (I guess the same applies when using distributions without
additional extensions but of different versions - e.g. Icehouse vs. Juno - which I
believe was also a targeted use case?)

 Although Cells did some separation and modularization (not to mention it
 is still internal RPC across the WAN), it leaves out cinder, neutron,
 and ceilometer. Shall we wait for all these projects to refactor with a
 Cells-like hierarchical structure, or adopt a more loosely coupled way
 to distribute them into autonomous units at the level of the whole
 OpenStack (except Keystone, which can handle multiple regions
 naturally)?

Similarly though, is the intent with Cascading that each new project would have
to also implement and provide a proxy for use in these deployments? One of the
challenges with maintaining/supporting the existing Cells implementation has
been that it's effectively its own thing, and as a result it is often not
considered when adding new functionality.

 As we can see, compared with Cells, much less work is needed to build a
 Cascading solution. No patch is needed except in Neutron (waiting on
 some upcoming features that did not land in Juno); nearly all the work
 lies in the proxy, which is in fact another kind of driver/agent.

Right, but the proxies still appear to be a not insignificant amount of code - 
is the intent not that the proxies would eventually reside within the relevant 
projects? I've been assuming yes but I am wondering if this was an incorrect 
assumption on my part based on your comment.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-12 Thread Deepak Shetty
On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno ante...@anteaya.info wrote:

 On 12/11/2014 09:36 AM, Jon Bernard wrote:
  Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
  was marked as skipped, only the revert_resize test was failing.  I have
  submitted a patch to nova for this [1], and that yields an all green
  ceph ci run [2].  So at the moment, and with my revert patch, we're in
  good shape.
 
  I will fix up that patch today so that it can be properly reviewed and
  hopefully merged.  From there I'll submit a patch to infra to move the
  job to the check queue as non-voting, and we can go from there.
 
  [1] https://review.openstack.org/#/c/139693/
  [2]
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html
 
  Cheers,
 
 Please add the name of your CI account to this table:
 https://wiki.openstack.org/wiki/ThirdPartySystems

 As outlined in the third party CI requirements:
 http://ci.openstack.org/third_party.html#requirements

 Please post system status updates to your individual CI wikipage that is
 linked to this table.


How is posting status there different from posting it here:
https://wiki.openstack.org/wiki/Cinder/third-party-ci-status

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] CI report: 2014-12-5 - 2014-12-11

2014-12-12 Thread James Polley
In the week since the last email we've had no major CI failures. This makes
it very easy for me to write my first CI report.

There was a brief period where all the Ubuntu tests failed while an update
was rolling out to various mirrors. DerekH worked around this quickly by
dropping in a DNS hack, which remains in place. A long term fix for this
problem probably involves setting up our own apt mirrors.

check-tripleo-ironic-overcloud-precise-ha remains flaky, and hence
non-voting.

As always more details can be found here (although this week there's
nothing to see)
https://etherpad.openstack.org/p/tripleo-ci-breakages
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation

2014-12-12 Thread Rao Dingyuan
Hi Eoghan and folks,

I'm thinking of adding an API to create multiple alarms in a batch.

I think adding an API to create multiple alarms in a batch is a good option to
solve the problem that once an *alarm target* (a VM or a new group of VMs) is
created, multiple requests are fired because multiple alarms have to be created.

In our current project, this requirement is especially urgent since our alarm
target is a single VM, and 6 alarms have to be created whenever one VM is created.

What do you guys think?
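
For illustration only, here is a rough sketch of how such a batch call might be
used from a client. The /v2/alarms/batch endpoint and the helper below are
hypothetical (today each alarm needs its own POST /v2/alarms request):

  import json
  import requests

  def create_alarms_batch(ceilometer_url, token, vm_id):
      # Build the 3 CPU alarm levels for one VM and send them in one request.
      levels = [("notice", 50.0), ("warning", 70.0), ("fatal", 85.0)]
      alarms = [
          {"name": "cpu_%s_%s" % (level, vm_id),
           "type": "threshold",
           "threshold_rule": {"meter_name": "cpu_util",
                              "comparison_operator": "gt",
                              "threshold": threshold,
                              "query": [{"field": "resource_id",
                                         "op": "eq",
                                         "value": vm_id}]}}
          for level, threshold in levels]
      resp = requests.post("%s/v2/alarms/batch" % ceilometer_url,
                           data=json.dumps(alarms),
                           headers={"X-Auth-Token": token,
                                    "Content-Type": "application/json"})
      resp.raise_for_status()
      return resp.json()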


Best Regards,
Kurt Rao



- Original Message -
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: 2014-12-03 20:34
To: Rao Dingyuan
Cc: openst...@lists.openstack.org
Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - please help



 Hi folks,
 
 
 
 I wonder if anyone could share some best practices regarding the usage
 of ceilometer alarms. We are using ceilometer's alarm
 evaluation/notification and we are not happy with the way we use it.
 Below is our problem:
 
 
 
 
 
 Scenario:
 
 When cpu usage or memory usage is above a certain threshold, alerts
 should be displayed on the admin's web page. There should be 3 alert
 levels according to the meter value, namely notice, warning, and fatal.
 Notice means the meter value is between 50% and 70%, warning means
 between 70% and 85%, and fatal means above 85%.
 
 For example:
 
 * when one vm’s cpu usage is 72%, an alert message should be displayed 
 saying
 “Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%”.
 
 * when one vm’s memory usage is 90%, another alert message should be 
 created saying “Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] memory 
 usage is above 85%”
 
 
 
 Our current Solution:
 
 We used ceilometer alarm evaluation/notification to implement this. To
 distinguish which VM and which meter is above what value, we've created
 one alarm per VM per condition. So, to monitor 1 VM, 6 alarms are
 created because there are 2 meters and for each meter there are 3
 levels. That means, if there are 100 VMs to be monitored, 600 alarms
 will be created.
 
 
 
 Problems:
 
 * The first problem is, when the number of meters increases, the 
 number of alarms will be multiplied. For example, customer may want 
 alerts on disk and network IO rates, and if we do that, there will be 
 4*3=12 alarms for each VM.
 
 * The second problem is, when one VM is created, multiple alarms will 
 be created, meaning multiple http requests will be fired. In the case 
 above, 6 HTTP requests will be needed once a VM is created. And this 
 number also increases as the number of meters goes up.

One way of reducing both the number of alarms and the volume of notifications 
would be to group related VMs, if such a concept exists in your use-case.

This is effectively how Heat autoscaling uses ceilometer, alarming on the 
average of some statistic over a set of instances (as opposed to triggering on 
individual instances).

The VMs could be grouped by setting user-metadata of form:

  nova boot ... --meta metering.my_server_group=foobar

Any user-metadata prefixed with 'metering.' will be preserved by ceilometer in
the resource_metadata.user_metadata stored for each sample, so that it can be
used to select the statistics on which the alarm is based, e.g.

  ceilometer alarm-threshold-create --name cpu_high_foobar \
--description 'warning: foobar instance group running hot' \
--meter-name cpu_util --threshold 70.0 \
--comparison-operator gt --statistic avg \
...
--query metadata.user_metadata.my_server_group=foobar

This approach is of course predicated on there being some natural grouping
relation between instances in your environment.

Cheers,
Eoghan


 =
 
 
 
 Do anyone have any suggestions?
 
 
 
 
 
 
 
 Best Regards!
 
 Kurt Rao
 
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-12 Thread Steve Gordon
- Original Message -
 From: Pasquale Porreca pasquale.porr...@dektech.com.au
 To: openstack-dev@lists.openstack.org
 
 Well, one of the main reasons to choose an open source product is to
 avoid vendor lock-in. I think it is not advisable to embed in the
 software running in an instance a call to OpenStack-specific services.

Possibly a stupid question, but even if PXE boot was supported would the SC not 
still have to trigger the creation of the PL instance(s) via a call to Nova 
anyway (albeit with boot media coming from PXE instead of Glance)?

-Steve

 On 12/10/14 00:20, Joe Gordon wrote:
 On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca 
 pasquale.porr...@dektech.com.au  wrote:
 
 
 The use case we were thinking about is a Network Function (e.g. IMS
 Nodes) implementation in which the high availability is based on
 OpenSAF. In this scenario there is an Active/Standby cluster of 2
 System Controllers (SC) plus several Payloads (PL) that boot from
 network, controlled by the SC. The logic of which service to deploy
 on each payload is inside the SC.
 
 In OpenStack both SCs and PLs will be instances running in the cloud;
 however, the PLs should still boot from the network under the control of
 the SC. In fact, to use Glance to store the image for the PLs while
 keeping control of the PLs in the SC, the SC would have to trigger the
 boot of the PLs with requests to Nova/Glance, but an application
 running inside an instance should not directly interact with a cloud
 infrastructure service like Glance or Nova.
 
 Why not? This is a fairly common practice.
 --
 Pasquale Porreca
 
 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)
 
 Mobile +39 3394823805
 Skype paskporr
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread henry hly
On Fri, Dec 12, 2014 at 4:10 PM, Steve Gordon sgor...@redhat.com wrote:
 - Original Message -
 From: henry hly henry4...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org

 On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith d...@danplanet.com
 wrote:
  [joehuang] Could you please clarify the deployment mode of cells when
  used for globally distributed DCs with a single API? Do you mean that
  cinder/neutron/glance/ceilometer will be shared by all cells, that RPC
  will be used for inter-DC communication, and that only one vendor's
  OpenStack distribution is supported? How would cross-data-center
  integration and troubleshooting work over RPC if the
  driver/agent/backend (storage/network/server) come from different
  vendors?
 
  Correct, cells only applies to single-vendor distributed
  deployments. In
  both its current and future forms, it uses private APIs for
  communication between the components, and thus isn't suited for a
  multi-vendor environment.
 
  Just MHO, but building functionality into existing or new
  components to
  allow deployments from multiple vendors to appear as a single API
  endpoint isn't something I have much interest in.
 
  --Dan
 

 Even with the same distribution, cells still face many challenges
 across multiple DCs connected over a WAN. Considering OAM, it's easier
 to manage autonomous systems connected through an external northbound
 interface across remote sites than a single monolithic system connected
 by internal RPC messages.

 The key question here is whether this is primarily the role of OpenStack or of
 an external cloud management platform, and I don't profess to know the answer.
 What do people use (workaround or otherwise) for these use cases *today*?
 Another question I have is: one of the stated use cases is managing OpenStack
 clouds from multiple vendors - is the implication here that some of these have
 additional, divergent API extensions, or is the concern solely the
 incompatibilities inherent in communicating using the RPC mechanisms? If there
 are divergent API extensions, how is that handled from a proxying point of view
 if not all underlying OpenStack clouds support them? (I guess the same applies
 when using distributions without additional extensions but of different
 versions - e.g. Icehouse vs. Juno - which I believe was also a targeted use
 case?)

It's not about divergent northbound API extensions. Services between
OpenStack projects are SOA-based; that is a vertical split. So when
building a large, distributed system (whatever it is) with horizontal
splitting, shouldn't we prefer clear and stable RESTful interfaces
between these building blocks?


 Although Cells did some separation and modularization (not to mention it
 is still internal RPC across the WAN), it leaves out cinder, neutron,
 and ceilometer. Shall we wait for all these projects to refactor with a
 Cells-like hierarchical structure, or adopt a more loosely coupled way
 to distribute them into autonomous units at the level of the whole
 OpenStack (except Keystone, which can handle multiple regions
 naturally)?

 Similarly though, is the intent with Cascading that each new project would
 have to also implement and provide a proxy for use in these deployments? One
 of the challenges with maintaining/supporting the existing Cells
 implementation has been that it's effectively its own thing, and as a result
 it is often not considered when adding new functionality.

Yes, we need a new proxy, but the nova proxy is just a new type of virt
driver, the neutron proxy a new type of agent, the cinder proxy a new type
of volume store... They just utilize the existing standard driver/agent
mechanisms, with no influence on other in-tree code.


 As we can see, compared with Cells, much less work is needed to build a
 Cascading solution. No patch is needed except in Neutron (waiting on
 some upcoming features that did not land in Juno); nearly all the work
 lies in the proxy, which is in fact another kind of driver/agent.

 Right, but the proxies still appear to be a not insignificant amount of code 
 - is the intent not that the proxies would eventually reside within the 
 relevant projects? I've been assuming yes but I am wondering if this was an 
 incorrect assumption on my part based on your comment.

 Thanks,

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Cory Benfield
On Thu, Dec 11, 2014 at 23:05:01, Mathieu Gagné wrote:
 When no security group is provided, Nova will default to the default
 security group. However due to the fact 2 security groups had the same
 name, nova-compute got confused, put the instance in ERROR state and
 logged this traceback [1]:
 
NoUniqueMatch: Multiple security groups found matching 'default'.
 Use
 an ID to be more specific.
 

We've hit this in our automated testing in the past as well. Similarly, we have 
no idea how we managed to achieve this, but it's clearly something that the 
APIs allow you to do. That feels unwise.
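
One way to rule this out at the data layer, per the subject of this thread, is a
composite unique constraint. A minimal SQLAlchemy sketch, with illustrative table
and column names rather than Neutron's actual model:

  from sqlalchemy import Column, MetaData, String, Table, UniqueConstraint

  metadata = MetaData()

  security_groups = Table(
      "securitygroups", metadata,
      Column("id", String(36), primary_key=True),
      Column("tenant_id", String(255)),
      Column("name", String(255)),
      # A second 'default' group for the same tenant would violate this
      # constraint at insert time, closing the race at the DB level.
      UniqueConstraint("tenant_id", "name",
                       name="uniq_securitygroups0tenant_id0name"),
  )

Whether uniqueness should apply to every name or only to 'default' is of course
part of the debate.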

 - the instance request should be blocked before it ends up on a compute
 node with nova-compute. It shouldn't be the job of nova-compute to
 find
 out issues about duplicated names. It should be the job of nova-api.
 Don't waste your time scheduling and spawning an instance that will
 never spawn with success.
 
 - From an end user perspective, this means nova boot returns no error
 and it's only later that the user is informed of the confusion with
 security group names.
 
 - Why does it have to crash with a traceback? IMO, traceback means we
 didn't think about this use case, here is more information on how to
 find the source. As an operator, I don't care about the traceback if
 it's a known limitation of Nova/Neutron. Don't pollute my logs with
 normal exceptions. (Log rationalization anyone?)

+1 to all of this.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot be null)

2014-12-12 Thread joejiang
Hi folks,
when I launch an instance using the cirros image in a new OpenStack
environment (Juno release & CentOS 7 OS base), the following error log comes
from the compute node. Has anybody met the same error?









2014-12-12 17:16:52.481 12966 ERROR nova.compute.manager [-] [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] Failed to allocate network(s)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] Traceback (most recent call last):
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2190, in 
_build_resources
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, security_groups)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1683, in 
_build_networks_for_instance
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] requested_networks, macs, 
security_groups, dhcp_options)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1717, in 
_allocate_network
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] 
instance.save(expected_task_state=[None])
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/nova/objects/base.py, line 189, in wrapper
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] ctxt, self, fn.__name__, args, kwargs)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py, line 351, in 
object_action
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] objmethod=objmethod, args=args, 
kwargs=kwargs)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py, line 152, in 
call
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] retry=self.retry)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/oslo/messaging/transport.py, line 90, in 
_send
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] timeout=timeout, retry=retry)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
408, in send
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] retry=retry)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d]   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
399, in _send
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] raise result
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] RemoteError: Remote error: 
OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot be 
null) 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE 
instance_extra.id = %s' (datetime.datetime(2014, 12, 12, 9, 16, 52, 434376), 
None, 5L)
2014-12-12 17:16:52.481 12966 TRACE nova.compute.manager [instance: 
67e215e0-2193-439d-89c4-be8c378df78d] [u'Traceback (most recent call last):\n', 
u'  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 
400, in _object_dispatch\nreturn getattr(target, method)(context, *args, 
**kwargs)\n', u'  File /usr/lib/python2.7/site-packages/nova/objects/base.py, 
line 204, in wrapper\nreturn fn(self, ctxt, *args, **kwargs)\n', u'  File 
/usr/lib/python2.7/site-packages/nova/objects/instance.py, line 500, in 
save\ncolumns_to_join=_expected_cols(expected_attrs))\n', u'  File 
/usr/lib/python2.7/site-packages/nova/db/api.py, line 746, in 
instance_update_and_get_original\ncolumns_to_join=columns_to_join)\n', u'  
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 143, in 
wrapper\nreturn f(*args, **kwargs)\n', u'  File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2289, in 

Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-12 Thread Daniel P. Berrange
On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote:
 On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange berra...@redhat.com
 wrote:
 
 
  Yes, I really think this is a key point. When we introduced the VIF type
  mechanism we never intended for there to be soo many different VIF types
  created. There is a very small, finite number of possible ways to configure
  the libvirt guest XML and it was intended that the VIF types pretty much
  mirror that. This would have given us about 8 distinct VIF type maximum.
 
  I think the reason for the larger than expected number of VIF types is
  that the drivers are being written to require some arbitrary tools to
  be invoked in the plug & unplug methods. It would really be better if
  those could be accomplished in the Neutron code rather than the Nova
  code, via a host agent run & provided by the Neutron mechanism. This
  would let us have a very small number of VIF types and so avoid the
  entire problem that this thread is bringing up.
 
  Failing that though, I could see a way to accomplish a similar thing
  without a Neutron-launched agent. If one of the VIF type binding
  parameters were the name of a script, we could run that script on
  plug & unplug. So we'd have a finite number of VIF types, and each
  new Neutron mechanism would merely have to provide a script to invoke
 
  e.g. consider the existing midonet & iovisor VIF types as an example.
  Both of them use the libvirt ethernet config, but have different
  things running in their plug methods. If we had a mechanism for
  associating a plug script with a VIF type, we could use a single
  VIF type for both.
 
  eg iovisor port binding info would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
  while midonet would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-midonet-vif-plug
 
 
  And so you see implementing a new Neutron mechanism in this way would
  not require *any* changes in Nova whatsoever. The work would be entirely
  self-contained within the scope of Neutron. It is simply a packaging
  task to get the vif script installed on the compute hosts, so that Nova
  can execute it.
 
  This is essentially providing a flexible VIF plugin system for Nova,
  without having to have it plug directly into the Nova codebase with
  the API & RPC stability constraints that implies.
 
 
 +1
 
 Port binding mechanism could vary among different networking technologies,
 which is not nova's concern, so this proposal makes sense.  Note that some
 vendors already provide port binding scripts that are currently executed
 directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor
 are two such examples), and this proposal makes it unnecessary to have
 these hard-coded in nova.  The only question I have is, how would nova
 figure out the arguments for these scripts?  Should nova dictate what they
 are?

We could define some standard set of arguments & environment variables
to pass the information from the VIF to the script in a standard way.
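
As a rough illustration of what that could look like on the Nova side (the
environment variable names and the script invocation below are assumptions,
not an agreed interface):

  import os
  import subprocess

  def run_vif_plug_script(script_path, vif):
      # 'vif' is assumed to be a dict-like port binding; only common fields
      # are exported here, vif-type specific fields would be added the same way.
      env = dict(os.environ)
      env.update({
          "VIF_ID": vif.get("id", ""),
          "VIF_MAC_ADDRESS": vif.get("address", ""),
          "VIF_DEVNAME": vif.get("devname", ""),
          "VIF_INSTANCE_UUID": vif.get("instance_uuid", ""),
      })
      # e.g. script_path = "/usr/bin/neutron-midonet-vif-plug"
      subprocess.check_call([script_path, "plug"], env=env)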

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-12 Thread Pasquale Porreca
From my point of view it is not advisable to base functionality of the
instances on direct calls to the OpenStack API, for 2 main reasons. The
first one: if the OpenStack code changes (and we know OpenStack code does
change), the code of the software running in the instance will have to
change too. The second one: if in the future one wants to move to another
cloud infrastructure, it will be more difficult to achieve.

On 12/12/14 01:20, Joe Gordon wrote:
 On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca 
 pasquale.porr...@dektech.com.au wrote:
 
   Well, one of the main reasons to choose an open source product is to avoid
  vendor lock-in. I think it is not advisable to embed in the software running
  in an instance a call to OpenStack-specific services.
 
 I'm sorry I don't follow the logic here, can you elaborate.
 
 

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-12 Thread Pasquale Porreca
It is possible to decide in advance how many PLs will be necessary for a
service, so their creation can be triggered externally to the SC. However,
the role that each PL should assume, and therefore the image to install on
each PL, should be decided by the SC.

On 12/12/14 09:54, Steve Gordon wrote:
 - Original Message -
  From: Pasquale Porreca pasquale.porr...@dektech.com.au
  To: openstack-dev@lists.openstack.org
  
  Well, one of the main reasons to choose an open source product is to
  avoid vendor lock-in. I think it is not advisable to embed in the
  software running in an instance a call to OpenStack-specific services.
 Possibly a stupid question, but even if PXE boot was supported would the SC 
 not still have to trigger the creation of the PL instance(s) via a call to 
 Nova anyway (albeit with boot media coming from PXE instead of Glance)?

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-12 Thread Murugan, Visnusaran


 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Friday, December 12, 2014 6:37 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown
 
 On 11/12/14 08:26, Murugan, Visnusaran wrote:
  [Murugan, Visnusaran]
  In case of rollback where we have to cleanup earlier version of
  resources,
  we could get the order from old template. We'd prefer not to have a
  graph table.
 
  In theory you could get it by keeping old templates around. But that
  means keeping a lot of templates, and it will be hard to keep track
  of when you want to delete them. It also means that when starting an
  update you'll need to load every existing previous version of the
  template in order to calculate the dependencies. It also leaves the
  dependencies in an ambiguous state when a resource fails, and
  although that can be worked around it will be a giant pain to implement.
 
 
  Agree that looking to all templates for a delete is not good. But
  baring Complexity, we feel we could achieve it by way of having an
  update and a delete stream for a stack update operation. I will
  elaborate in detail in the etherpad sometime tomorrow :)
 
  I agree that I'd prefer not to have a graph table. After trying a
  couple of different things I decided to store the dependencies in the
  Resource table, where we can read or write them virtually for free
  because it turns out that we are always reading or updating the
  Resource itself at exactly the same time anyway.
 
 
  Not sure how this will work in an update scenario when a resource does
  not change and its dependencies do.
 
 We'll always update the requirements, even when the properties don't
 change.
 

Can you elaborate a bit on rollback? We had an approach with depends_on
and needed_by columns in the Resource table, but dropped it when we figured
out we had too many DB operations for update.

  Also taking care of deleting resources in order will be an issue.
 
 It works fine.
 
  This implies that there will be different versions of a resource which
  will even complicate further.
 
 No it doesn't, other than the different versions we already have due to
 UpdateReplace.
 
  This approach reduces DB queries by waiting for completion
  notification
  on a topic. The drawback I see is that delete stack stream will be
  huge as it will have the entire graph. We can always dump such data
  in ResourceLock.data Json and pass a simple flag
  load_stream_from_db to converge RPC call as a workaround for delete
 operation.
 
  This seems to be essentially equivalent to my 'SyncPoint'
  proposal[1], with
  the key difference that the data is stored in-memory in a Heat engine
  rather than the database.
 
  I suspect it's probably a mistake to move it in-memory for similar
  reasons to the argument Clint made against synchronising the marking
  off
  of dependencies in-memory. The database can handle that and the
  problem of making the DB robust against failures of a single machine
  has already been solved by someone else. If we do it in-memory we are
  just creating a single point of failure for not much gain. (I guess
  you could argue it doesn't matter, since if any Heat engine dies
  during the traversal then we'll have to kick off another one anyway,
  but it does limit our options if that changes in the
  future.) [Murugan, Visnusaran] Resource completes, removes itself
  from resource_lock and notifies engine. Engine will acquire parent
  lock and initiate parent only if all its children are satisfied (no child 
  entry in
 resource_lock).
  This will come in place of Aggregator.
 
  Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I
 did.
  The three differences I can see are:
 
  1) I think you are proposing to create all of the sync points at the
  start of the traversal, rather than on an as-needed basis. This is
  probably a good idea. I didn't consider it because of the way my
  prototype evolved, but there's now no reason I can see not to do this.
  If we could move the data to the Resource table itself then we could
  even get it for free from an efficiency point of view.
 
  +1. But we will need engine_id to be stored somewhere for recovery
 purpose (easy to be queried format).
 
 Yeah, so I'm starting to think you're right, maybe the/a Lock table is the 
 right
 thing to use there. We could probably do it within the resource table using
 the same select-for-update to set the engine_id, but I agree that we might
 be starting to jam too much into that one table.
 

Yeah, unrelated values in the resource table. Upon resource completion we have
to unset engine_id, as opposed to dropping a row from resource_lock. Both are
workable, but having engine_id in the resource table will cut the DB operations
in half. We should go with just the resource table along with engine_id.
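
A rough sketch of the select-for-update claim on the resource row, assuming an
engine_id column on the resource table (table and column names are illustrative):

  from sqlalchemy import create_engine
  from sqlalchemy.sql import text

  def claim_resource(db_url, resource_id, engine_id):
      # Atomically claim the resource row for this engine; returns True if
      # the claim succeeded, False if another engine already holds it.
      engine = create_engine(db_url)
      with engine.begin() as conn:
          row = conn.execute(
              text("SELECT engine_id FROM resource "
                   "WHERE id = :rid FOR UPDATE"),
              rid=resource_id).fetchone()
          if row is None or row[0] is not None:
              return False
          conn.execute(
              text("UPDATE resource SET engine_id = :eid WHERE id = :rid"),
              eid=engine_id, rid=resource_id)
          return True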

  Sync points are created as-needed. Single resource is enough to restart
 that entire stream.
  I 

Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Ihar Hrachyshka

On 12/12/14 00:05, Mathieu Gagné wrote:
 We recently had an issue in production where a user had 2
 default security groups (for reasons we have yet to identify).

This is probably the result of the race condition that is discussed in
the thread: https://bugs.launchpad.net/neutron/+bug/1194579
/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-12 Thread Murugan, Visnusaran
Hi zaneb,

Etherpad updated. 

https://etherpad.openstack.org/p/execution-stream-and-aggregator-based-convergence

 -Original Message-
 From: Murugan, Visnusaran
 Sent: Friday, December 12, 2014 4:00 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown
 
 
 
  -Original Message-
  From: Zane Bitter [mailto:zbit...@redhat.com]
  Sent: Friday, December 12, 2014 6:37 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
  showdown
 
  On 11/12/14 08:26, Murugan, Visnusaran wrote:
   [Murugan, Visnusaran]
   In case of rollback where we have to cleanup earlier version of
   resources,
   we could get the order from old template. We'd prefer not to have a
   graph table.
  
   In theory you could get it by keeping old templates around. But
   that means keeping a lot of templates, and it will be hard to keep
   track of when you want to delete them. It also means that when
   starting an update you'll need to load every existing previous
   version of the template in order to calculate the dependencies. It
   also leaves the dependencies in an ambiguous state when a resource
   fails, and although that can be worked around it will be a giant pain to
 implement.
  
  
   Agree that looking to all templates for a delete is not good. But
   baring Complexity, we feel we could achieve it by way of having an
   update and a delete stream for a stack update operation. I will
   elaborate in detail in the etherpad sometime tomorrow :)
  
   I agree that I'd prefer not to have a graph table. After trying a
   couple of different things I decided to store the dependencies in
   the Resource table, where we can read or write them virtually for
   free because it turns out that we are always reading or updating
   the Resource itself at exactly the same time anyway.
  
  
   Not sure how this will work in an update scenario when a resource
   does not change and its dependencies do.
 
  We'll always update the requirements, even when the properties don't
  change.
 
 
 Can you elaborate a bit on rollback.  We had an approach with depends_on
 and needed_by columns in ResourceTable. But dropped it when we figured
 out we had too many DB operations for Update.
 
   Also taking care of deleting resources in order will be an issue.
 
  It works fine.
 
   This implies that there will be different versions of a resource
   which will even complicate further.
 
  No it doesn't, other than the different versions we already have due
  to UpdateReplace.
 
   This approach reduces DB queries by waiting for completion
   notification
   on a topic. The drawback I see is that delete stack stream will be
   huge as it will have the entire graph. We can always dump such data
   in ResourceLock.data Json and pass a simple flag
   load_stream_from_db to converge RPC call as a workaround for
   delete
  operation.
  
   This seems to be essentially equivalent to my 'SyncPoint'
   proposal[1], with
   the key difference that the data is stored in-memory in a Heat
   engine rather than the database.
  
   I suspect it's probably a mistake to move it in-memory for similar
   reasons to the argument Clint made against synchronising the
   marking off
   of dependencies in-memory. The database can handle that and the
   problem of making the DB robust against failures of a single
   machine has already been solved by someone else. If we do it
   in-memory we are just creating a single point of failure for not
   much gain. (I guess you could argue it doesn't matter, since if any
   Heat engine dies during the traversal then we'll have to kick off
   another one anyway, but it does limit our options if that changes
   in the
   future.) [Murugan, Visnusaran] Resource completes, removes itself
   from resource_lock and notifies engine. Engine will acquire parent
   lock and initiate parent only if all its children are satisfied (no
   child entry in
  resource_lock).
   This will come in place of Aggregator.
  
   Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly
   what I
  did.
   The three differences I can see are:
  
   1) I think you are proposing to create all of the sync points at
   the start of the traversal, rather than on an as-needed basis. This
   is probably a good idea. I didn't consider it because of the way my
   prototype evolved, but there's now no reason I can see not to do this.
   If we could move the data to the Resource table itself then we
   could even get it for free from an efficiency point of view.
  
   +1. But we will need engine_id to be stored somewhere for recovery
  purpose (easy to be queried format).
 
  Yeah, so I'm starting to think you're right, maybe the/a Lock table is
  the right thing to use there. We could probably do it within the
  resource table using the same select-for-update to set the engine_id,
  but I agree that 

Re: [openstack-dev] [Fuel] Hard Code Freeze for 6.0

2014-12-12 Thread Tomasz Napierala
Hi,

As with 5.1.x, please inform the list if you are raising the priority to
critical in any bugs targeted to 6.0.

Regards,

 On 09 Dec 2014, at 23:43, Mike Scherbakov mscherba...@mirantis.com wrote:
 
 Hi all,
 I'm glad to announce that we've reached Hard Code Freeze (HCF) [1] criteria 
 for 6.0 milestone. 
 
 stable/6.0 branches for our repos were created.
 
 Bug reporters, please do not forget to target both 6.1 (master) and 6.0 
 (stable/6.0) milestones since now. If the fix is merged to master, it has to 
 be backported to stable/6.0 to make it available in 6.0. Please ensure that 
 you do NOT merge changes to stable branch first. It always has to be a 
 backport with the same Change-ID. Please see more on this at [2].
 
 I hope Fuel DevOps team can quickly update nightly builds [3] to reflect 
 changes.
 
 [1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
 [2] 
 https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series
 [3] https://fuel-jenkins.mirantis.com/view/ISO/
 
 Thanks,
 -- 
 Mike Scherbakov
 #mihgen
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-12 Thread Tihomir Trifonov

 Here's an example: Admin user Joe has an Domain open and stares at it for
 15 minutes while he updates the description. Admin user Bob is asked to go
 ahead and enable it. He opens the record, edits it, and then saves it. Joe
 finished perfecting the description and saves it. Doing this action would
 mean that the Domain is enabled and the description gets updated. Last man
 in still wins if he updates the same fields, but if they update different
 fields then both of their changes will take affect without them stomping on
 each other. Whether that is good or bad may depend on the situation…



That's a great example. I believe that all of the OpenStack APIs support
PATCH updates of arbitrary fields. This way the frontend (AngularJS) can
detect which fields have been modified and submit only those fields for
update. If we instead use a form with POST, then although we load the
object before updating it, the middleware cannot tell which fields were
actually modified and will update them all, which is really what PUT
should do. Having full control in the frontend, we can submit only the
changed fields. If a service API doesn't support PATCH, that is actually a
problem in the API and not in the client...
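
A tiny sketch of the idea: compute the PATCH body from the originally loaded
object and the edited values, so only the touched fields are sent (purely
illustrative):

  def build_patch_body(original, edited):
      # Only the fields the user actually changed end up in the request, so
      # Joe's description edit and Bob's enable toggle don't stomp on each
      # other.
      return {key: value for key, value in edited.items()
              if original.get(key) != value}

  original = {"description": "old text", "enabled": False}
  edited = {"description": "perfected text", "enabled": False}
  print(build_patch_body(original, edited))  # {'description': 'perfected text'}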



The service API documentation almost always lags (although, helped by specs
 now) and the service team takes on the burden of exposing a programmatic
 way to access the API.  This is tested and easily consumable via the
 python clients, which removes some guesswork from using the service.


True. But what if the service team modifies a method signature from, let's
say:

    def add_something(self, request, field1, field2):

to

    def add_something(self, request, field1, field2, field3):

while in the middleware we still have the old signature:

    def add_something(self, request, field1, field2):

Then we still need to modify the middleware to add the new field. If however
the middleware is transparent and just passes **kwargs, it will pass through
whatever the frontend sends. So we just need to update the frontend, which
can be done using custom views, and not necessarily by going through an
upstream change. My point is: why do we need to hide some features of the
backend service API behind a firewall, which is what the middleware in fact
is?
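
For illustration, a fully transparent pass-through view might look roughly like
this; get_service_client and the client wrapper are hypothetical stand-ins:

  class ServiceClient(object):
      # Stand-in for a python-*client wrapper; purely illustrative.
      def add_something(self, **kwargs):
          return kwargs

  def get_service_client(request):
      return ServiceClient()

  def add_something_view(request, **kwargs):
      # The middleware does not enumerate field1/field2/field3: whatever the
      # frontend sends is forwarded unchanged, so adding field3 only needs a
      # frontend change, not an upstream middleware change.
      client = get_service_client(request)
      return client.add_something(**kwargs)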







On Fri, Dec 12, 2014 at 8:08 AM, Tripp, Travis S travis.tr...@hp.com
wrote:

 I just re-read and I apologize for the hastily written email I previously
 sent. I’ll try to salvage it with a bit of a revision below (please ignore
 the previous email).

 On 12/11/14, 7:02 PM, Tripp, Travis S travis.tr...@hp.com wrote
 (REVISED):

 Tihomir,
 
 Your comments in the patch were very helpful for me to understand your
 concerns about the ease of customizing without requiring upstream
 changes. It also reminded me that I’ve also previously questioned the
 python middleman.
 
 However, here are a couple of bullet points for Devil’s Advocate
 consideration.
 
 
   *   Will we take on auto-discovery of API extensions in two spots
 (python for legacy and JS for new)?
   *   The Horizon team will have to keep an even closer eye on every
 single project and be ready to react if there are changes to the API that
 break things. Right now in Glance, for example, they are working on some
 fixes to the v2 API (soon to become v2.3) that will allow them to
 deprecate v1 somewhat transparently to users of the client library.
   *   The service API documentation almost always lags (although, helped
 by specs now) and the service team takes on the burden of exposing a
 programmatic way to access the API.  This is tested and easily consumable
 via the python clients, which removes some guesswork from using the
 service.
   *   This is going to be an incremental approach with legacy support
 requirements anyway.  So, incorporating python side changes won’t just go
 away.
   *   Which approach would be better if we introduce a server side
 caching mechanism or a new source of data such as elastic search to
 improve performance? Would the client side code have to be changed
 dramatically to take advantage of those improvements or could it be done
 transparently on the server side if we own the exposed API?
 
 I’m not sure I fully understood your example about Cinder.  Was it the
 cinder client that held up delivery of horizon support, the cinder API or
 both?  If the API isn’t in, then it would hold up delivery of the feature
 in any case. There still would be timing pressures to react and build a
 new view that supports it. For customization, with Richard’s approach new
 views could be supported by just dropping in a new REST API decorated
 module with the APIs you want, including direct pass through support if
 desired to new APIs. Downstream customizations / Upstream changes to
 views seem a bit like a bit of a related, but different issue to me as
 long as their is an easy way to drop in new API support.
 
 Finally, regarding the client making two calls to do an update:
 
 ​Do we really 

Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-12 Thread Flavio Percoco

On 10/12/14 17:23 -0500, Sean Dague wrote:

On 12/10/2014 04:00 PM, Doug Hellmann wrote:


On Dec 10, 2014, at 3:26 PM, Joshua Harlow harlo...@outlook.com wrote:


Hi oslo folks (and others),

I've recently put up a review for some common deprecation patterns:

https://review.openstack.org/#/c/140119/

In summary, this is a common set of patterns that can be used by oslo libraries and
other libraries... This is different from the versionutils one (which is more of a
developer-operator deprecation interaction) and is more focused on the
developer-to-developer deprecation interaction (developers, say, using oslo libraries).

Doug had the question of why not just put this out there on PyPI with a useful
name not so strongly connected to oslo, since that review is more of a common set
of patterns that can be used by libraries outside openstack/oslo as well. There
weren't many (any?) similar libraries that I found (zope.deprecation is probably
the closest), and twisted has something built in that is similar. So in order to
avoid creating our own version of zope.deprecation in that review, we might as
well create a neat name that can be useful for oslo/openstack/elsewhere...

Some ideas that were thrown around on IRC (check 
'https://pypi.python.org/pypi/%s' % name for 404 to see if likely not 
registered):

* debtcollector


+1

I suspect we’ll want a minimal spec for the new lib, but let’s wait and hear 
what some of the other cores think.


Not a core, but as someone that will be using it, that seems reasonable.

The biggest issue with the deprecation patterns in projects is that
aggressive cleaning tended to clean out all the deprecations at the
beginning of a cycle... and then all the deprecation-assist code too, as it
was unused. Sad panda.

Having it in a common lib as a bunch of decorators would be great.
Especially if we can work out things like *not* spamming deprecation
load warnings on every worker start.
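
Something along those lines might look like this (a rough sketch, not any
eventual library API), including a guard so each deprecation only warns once
per process:

  import functools
  import warnings

  _already_warned = set()

  def deprecated(message):
      """Mark a function as deprecated for other developers."""
      def decorator(func):
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              # Warn only the first time so workers don't spam their logs
              # with the same deprecation on startup or on every call.
              key = (func.__module__, func.__name__)
              if key not in _already_warned:
                  _already_warned.add(key)
                  warnings.warn("%s is deprecated: %s"
                                % (func.__name__, message),
                                DeprecationWarning, stacklevel=2)
              return func(*args, **kwargs)
          return wrapper
      return decorator

  @deprecated("use new_helper() instead")
  def old_helper():
      return 42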


Agreed with the above.

(also, debtcollector would be my choice too)

Flavio



-Sean



Doug


* bagman
* deprecate
* deprecation
* baggage

Any other neat names people can think about?

Or in general any other comments/ideas about providing such a deprecation 
pattern library?

-Josh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-12 Thread Maxime Leroy
On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote:
 On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange berra...@redhat.com
 wrote:

[..]
 Port binding mechanism could vary among different networking technologies,
 which is not nova's concern, so this proposal makes sense.  Note that some
 vendors already provide port binding scripts that are currently executed
 directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor
 are two such examples), and this proposal makes it unnecessary to have
 these hard-coded in nova.  The only question I have is, how would nova
 figure out the arguments for these scripts?  Should nova dictate what they
 are?

 We could define some standard set of arguments & environment variables
 to pass the information from the VIF to the script in a standard way.


A lot of information is used by the plug/unplug methods: vif_id,
vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type,
instance_uuid...

I'm not sure we can define a set of standard arguments.

Maybe instead of using a script we should load the plug/unplug
functions from a Python module with importlib. So a vif_plug_module
option instead of a vif_plug_script?

There are several other problems to solve if we are going to use this
vif_plug_script:

- How do we get the authorization to run this script (i.e. rootwrap)?

- How do we test the plug/unplug functions from these scripts?
  Right now we have unit tests in nova/tests/unit/virt/libvirt/test_vif.py
  for the plug/unplug methods.

- How will this script be installed?
   - Should it be included in the L2 agent package? Some L2 switches
     don't have an L2 agent.

Regards,

Maxime

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-12 Thread Daniel P. Berrange
On Fri, Dec 12, 2014 at 03:05:28PM +0100, Maxime Leroy wrote:
 On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote:
  On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange berra...@redhat.com
  wrote:
 
 [..]
  Port binding mechanism could vary among different networking technologies,
  which is not nova's concern, so this proposal makes sense.  Note that some
  vendors already provide port binding scripts that are currently executed
  directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor
  are two such examples), and this proposal makes it unnecessary to have
  these hard-coded in nova.  The only question I have is, how would nova
  figure out the arguments for these scripts?  Should nova dictate what they
  are?
 
  We could define some standard set of arguments & environment variables
  to pass the information from the VIF to the script in a standard way.
 
 
 Many information are used by the plug/unplug method: vif_id,
 vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type,
 instance_uuid...
 
 Not sure we can define a set of standard arguments.

That's really not a problem. There will be some set of common info
needed for all. Then, for any particular VIF type, we know what extra
specific fields are defined in the port binding metadata. We'll just
set env variables for each of those.

 Maybe instead to use a script we should load some plug/unplug
 functions from a python module with importlib. So a vif_plug_module
 option instead to have a vif_plug_script ?

No, we explicitly do *not* want any usage of the Nova python modules.
That is all private internal Nova implementation detail that nothing
is permitted to rely on - this is why the VIF plugin feature was
removed in the first place.

 There are several other problems to solve if we are going to use this
 vif_plug_script:
 
 - How to have the authorization to run this script (i.e. rootwrap)?

Yes, rootwrap.

 - How to test plug/unplug function from these scripts?
   Now, we have unity tests in nova/tests/unit/virt/libvirt/test_vif.py
 for plug/unplug method.

Integration and/or functional tests run for the VIF impl would
exercise this code still.

 - How this script will be installed?
- should it be including in the L2 agent package? Some L2 switch
 doesn't have a L2 agent.

That's just a normal downstream packaging task which is easily
handled by the people doing that work. If there's no L2 agent package
they can trivially create a new package for the script(s) that need
to be installed on the compute node. They would have to be doing
exactly this anyway if you had the VIF plugin as a python module
instead.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers

2014-12-12 Thread Ihar Hrachyshka

Reading the latest comments at
https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the
issue is not to be solved in drivers themselves but instead in
libraries that arrange connections (sqlalchemy/oslo.db), correct?

Will the proposed connection reopening help?

/Ihar

On 05/12/14 23:43, Mike Bayer wrote:
 Hey list -
 
 I’m posting this here just to get some ideas on what might be
 happening here, as it may or may not have some impact on Openstack
 if and when we move to MySQL drivers that are async-patchable, like
 MySQL-connector or PyMySQL.  I had a user post this issue a few
 days ago which I’ve since distilled into test cases for PyMySQL and
 MySQL-connector separately.   It uses gevent, not eventlet, so I’m
 not really sure if this applies.  But there’s plenty of very smart
 people here so if anyone can shed some light on what is actually
 happening here, that would help.
 
 The program essentially illustrates code that performs several
 steps upon a connection, however if the greenlet is suddenly
 killed, the state from the connection, while damaged, is still
 being allowed to continue on in some way, and what’s
 super-catastrophic here is that you see a transaction actually
 being committed *without* all the statements proceeding on it.
 
 In my work with MySQL drivers, I’ve noted for years that they are
 all very, very bad at dealing with concurrency-related issues.  The
 whole “MySQL has gone away” and “commands out of sync” errors are
 ones that we’ve all just drowned in, and so often these are due to
 the driver getting mixed up due to concurrent use of a connection.
 However this one seems more insidious.   Though at the same time,
 the script has some complexity happening (like a simplistic
 connection pool) and I’m not really sure where the core of the
 issue lies.
 
 The script is at
 https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below.
 If you run it for a few seconds, go over to your MySQL command line
 and run this query:
 
 SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a)
 ORDER BY a_id DESC;
 
 and what you’ll see is tons of rows in table_b where the “a_id” is
 zero (because cursor.lastrowid fails), but the *rows are
 committed*.   If you read the segment of code that does this, it
 should be impossible:
 
 connection = pool.get()
 rowid = execute_sql(
     connection,
     "INSERT INTO table_a (data) VALUES (%s)", ("a",)
 )
 
 gevent.sleep(random.random() * 0.2)
 try:
     execute_sql(
         connection,
         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",)
     )
     connection.commit()
     pool.return_conn(connection)
 except Exception:
     connection.rollback()
     pool.return_conn(connection)
 
 so if the gevent.sleep() throws a timeout error, somehow we are
 getting thrown back in there, with the connection in an invalid
 state, but not invalid enough to commit.
 
 If a simple check for “SELECT connection_id()” is added, this query
 fails and the whole issue is prevented.  Additionally, if you put a
 foreign key constraint on that b_table.a_id, then the issue is
 prevented, and you see that the constraint violation is happening
 all over the place within the commit() call.   The connection is
 being used such that its state just started after the
 gevent.sleep() call.
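 
 (A minimal sketch of that "SELECT connection_id()" guard -- the helper name
 and where it would be wired into the pool are assumptions, not part of the
 original script:)
 
 def check_connection_id(conn, expected_id):
     # expected_id would be recorded via "SELECT connection_id()" when the
     # connection is first created / checked out of the pool.
     cursor = conn.cursor()
     cursor.execute("SELECT connection_id()")
     current_id = cursor.fetchone()[0]
     cursor.close()
     if current_id != expected_id:
         # the connection is not in the state we think it is in; refuse to
         # run anything further (in particular a commit) on it
         raise Exception("connection id mismatch: %r != %r"
                         % (current_id, expected_id))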
 
 Now, there’s also a very rudimental connection pool here.   That is
 also part of what’s going on.  If i try to run without the pool,
 the whole script just runs out of connections, fast, which suggests
 that this gevent timeout cleans itself up very, very badly.
 However, SQLAlchemy’s pool works a lot like this one, so if folks
 here can tell me if the connection pool is doing something bad,
 then that’s key, because I need to make a comparable change in
 SQLAlchemy’s pool.   Otherwise I worry our eventlet use could have
 big problems under high load.
 
 
 
 
 
 # -*- coding: utf-8 -*-
 import gevent.monkey
 gevent.monkey.patch_all()
 
 import collections
 import threading
 import time
 import random
 import sys
 
 import logging
 logging.basicConfig()
 log = logging.getLogger('foo')
 log.setLevel(logging.DEBUG)
 
 #import pymysql as dbapi
 from mysql import connector as dbapi
 
 
 class SimplePool(object):
     def __init__(self):
         self.checkedin = collections.deque([
             self._connect() for i in range(50)
         ])
         self.checkout_lock = threading.Lock()
         self.checkin_lock = threading.Lock()
 
     def _connect(self):
         return dbapi.connect(
             user="scott", passwd="tiger",
             host="localhost", db="test")
 
     def get(self):
         with self.checkout_lock:
             while not self.checkedin:
                 time.sleep(.1)
             return self.checkedin.pop()
 
     def return_conn(self, conn):
         try:
             conn.rollback()
         except:
             log.error("Exception during rollback", exc_info=True)
         try:
             conn.close()
         except:
             log.error("Exception during close", exc_info=True)
 
         # recycle to a new connection
         conn = self._connect()
         with self.checkin_lock:
             self.checkedin.append(conn)
 
 
 def verify_connection_id(conn):
     cursor = conn.cursor()
     try:
 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Russell Bryant
On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a "if you can make it
work, good for you" case, or something we should aim to support more generally
over time?  Personally, I see the globally distributed
OpenStack under a single API case as much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

-- 
Russell Bryant



[openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-12 Thread Dmitry Pyzhov
Guys,

we've done a good job in 6.0. Most of the features were merged before
feature freeze. Our QA were involved in testing even earlier. It was much
better than before.

We had a discussion with Anastasia. There were several bug reports for
features yesterday, far beyond HCF. So we still have a long way to go to be
perfect. We should add one rule: we need to have automated tests before HCF.

Actually, we should have the results of these tests just after FF. It is quite
challenging because we have a short development cycle. So my proposal is
to require a full deployment and a run of the automated tests for each feature
before soft code freeze. And it needs to be tracked in checklists and on
feature syncups.

Your opinion?


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-12 Thread Evgeniy L
Hi,

I don't agree with many of your statements, but I would like to
continue the discussion about a really important topic, i.e. the UI flow. My
suggestion was to add groups: in metadata.yaml the plugin
developer can describe the groups which the plugin belongs to:

groups:
  - id: storage
    subgroup:
      - id: cinder

With this information we can show a new option on the UI (wizard);
if the option is selected, it means that the plugin is enabled. If a plugin belongs
to several groups, we can use an OR statement.

The main point is that for environment creation we must specify the
ids of plugins. Yet another reason for that is plugin multiversioning:
we must know exactly which plugin, with which version,
is used for an environment, and I don't see how conditions can help
us with that.

Thanks,




On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh vkramsk...@mirantis.com
wrote:



 2014-12-10 19:31 GMT+03:00 Evgeniy L e...@mirantis.com:



 On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:



 2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

 First let me describe our plans for the nearest release. We want
 to deliver 'role as a simple plugin', which means that a plugin developer can
 define his own role with yaml, and it should also work fine with our current
 approach where the user can define several fields on the settings tab.

 Also, I would like to mention another thing which we should probably
 discuss in a separate thread: how plugins should be implemented. We have two
 types of plugins, simple and complicated. The definition of simple is 'I can
 do everything I need with yaml'; the definition of complicated is 'I probably
 have to write some python code'. It doesn't mean that this python code should
 be able to do absolutely everything it wants, but it means we should implement
 a stable, documented interface where the plugin is connected to the core.

 Now let's talk about the UI flow. Our current problem is how to get the
 information on whether a plugin is used in the environment or not. This
 information is required for the backend which generates appropriate tasks for
 the task executor; this information can also be used in the future if we
 decide to implement a plugin deletion mechanism.

 I didn't come up with a new solution; as before, we have two options to
 solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look
 like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use
 in task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add to metadata.yaml a condition which defines whether the plugin
 is enabled

 is_enabled: settings:lbaas.metadata.enabled == true

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

 It is less complicated than python code


  I'm not sure why you are talking about python code here; my point
  is that we should not force the developer to use these conditions in any language.

 But that's how current plugin-like stuff works. There are various tasks
 which are run only if some checkboxes are set, so stuff like Ceph and
 vCenter will need conditions to describe tasks.

 Anyway, I don't agree with the statement that there are more people who know
 python than the fuel ui conditional language.


 b. It doesn't cover the case when the user installs a 3rd party plugin
 which doesn't have any conditions (because of # a) and the
 user doesn't have a way to disable it for an environment if it
 breaks his configuration.

 If plugin doesn't have conditions for tasks, then it has invalid
 metadata.


 Yep, and it's a problem of the platform, which provides a bad interface.

 Why is it bad? If the plugin writer doesn't provide a plugin name or version,
 then the metadata is also invalid. It is the plugin writer's fault that he didn't
 write the metadata properly.




 # 2

  As we discussed from the very beginning, after the user selects a release he
  can choose a set of plugins which he wants to be enabled for the environment.
  After that we can say that the plugin is enabled for the environment and we
  send the tasks related to this plugin to the task executor.

  My approach also allows us to eliminate the 'enabledness' of plugins, which
 would otherwise cause UX issues and issues like the ones you described above. vCenter and Ceph
 also don't have an enabled state. vCenter has hypervisor and storage; Ceph
 provides backends for Cinder and Glance, which can be used simultaneously or
 only one of them can be used.

 Both of the described plugins have an enabled/disabled state: vCenter is
 enabled when vCenter is selected as the hypervisor, and Ceph is enabled when
 it's selected as a backend for Cinder or Glance.

 Nope, Ceph for Volumes can be used without Ceph for Images. Both of
 these plugins can also have some granular tasks which are enabled by
 various checkboxes (like 

[openstack-dev] tempest - object_storage

2014-12-12 Thread Anton Massoud
Hi,

We are running tempest on icehouse. In our tempest runs, we found that the 
below test cases fail in random iterations, but not always.

tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_delete_matadata_key[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_list_container_contents_with_path[gate,smoke]
tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_after_expiry_time[gate]
tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_at_expiry_time[gate]
tempest.api.object_storage.test_object_services.ObjectTest.test_object_upload_in_segments[gate]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_create_and_remove_metadata[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_remove_metadata
tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_create_and_delete_metadata[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_create_container[gate,smoke]
tempest.api.object_storage.test_container_services.ContainerTest.test_delete_container[gate,smoke]
tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_2d_way[gate,smoke]


Has anyone else experienced the same? Any idea what this may be caused by?

/Anton


Re: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation

2014-12-12 Thread Ryan Brown
On 12/12/2014 03:37 AM, Rao Dingyuan wrote:
 Hi Eoghan and folks,
 
 I'm thinking of adding an API to create multiple alarms in a batch.
 
 I think adding an API to create multiple alarms is a good option to solve the 
 problem that once an *alarm target* (a vm or a new group of vms) is created, 
 multiple requests will be fired because multiple alarms are to be created.
 
 In our current project, this requirement is especially urgent since our 
 alarm target is one VM, and 6 alarms are to be created when one VM is created.
 
 What do you guys think?
 
 
 Best Regards,
 Kurt Rao

Allowing batch operations is definitely a good idea, though it may not
be a solution to all of the problems you outlined.

One way to batch object creations would be to give clients the option to
POST a collection of alarms instead of a single alarm. Currently your
API looks like[1]:

POST /v2/alarms

BODY:
{
  "alarm_actions": ...,
  ...
}

For batches you could modify your API to accept a body like:

{
  "alarms": [
    {"alarm_actions": ...},
    {"alarm_actions": ...},
    {"alarm_actions": ...},
    {"alarm_actions": ...}
  ]
}

It could (pretty easily) be a backwards-compatible change since the
schemata don't conflict, and you can even add some kind of a
"batch": true flag to make it explicit that the user wants to create a
collection. The API-WG has a spec[2] out right now explaining the
rationale behind collection representations.


[1]:
http://docs.openstack.org/developer/ceilometer/webapi/v2.html#post--v2-alarms
[2]:
https://review.openstack.org/#/c/133660/11/guidelines/representation_structure.rst,unified
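
For illustration, a client-side call against such a batch endpoint might look
roughly like this (the payload shape is just the collection form proposed
above, not an existing Ceilometer API; the endpoint and token are
placeholders):

import requests

alarms = [
    {"name": "cpu_warning_vm1", "alarm_actions": ["http://alarm-sink/warn"]},
    {"name": "cpu_fatal_vm1", "alarm_actions": ["http://alarm-sink/fatal"]},
]

resp = requests.post(
    "http://ceilometer.example.com:8777/v2/alarms",
    json={"alarms": alarms, "batch": True},  # collection body + explicit flag
    headers={"X-Auth-Token": "..."},
)
resp.raise_for_status()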
 
 
 
 - Original Message -
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: December 3, 2014, 20:34
 To: Rao Dingyuan
 Cc: openst...@lists.openstack.org
 Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - please help
 
 
 
 Hi folks,



 I wonder if anyone could share some best practices regarding the 
 usage of ceilometer alarms. We are using the alarm 
 evaluation/notification of ceilometer and we don't feel very good about 
 the way we use it. Below is our
 problem:



 

 Scenario:

 When cpu usage or memory usage is above a certain threshold, alerts 
 should be displayed on the admin's web page. There should be 3 levels of 
 alerts according to the meter value, namely notice, warning, and fatal. Notice 
 means the meter value is between 50% ~ 70%, warning means between 70% 
 ~ 85%, and fatal means above 85%.

 For example:

 * when one vm’s cpu usage is 72%, an alert message should be displayed 
 saying
 “Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%”.

 * when one vm’s memory usage is 90%, another alert message should be 
 created saying “Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] memory 
 usage is above 85%”



 Our current Solution:

 We used ceilometer alarm evaluation/notification to implement this. To 
 distinguish which VM and which meter is above what value, we’ve 
 created one alarm for each VM by each condition. So, to monitor 1 VM, 
 6 alarms will be created because there are 2 meters and for each meter there 
 are 3 levels.
 That means, if there are 100 VMs to be monitored, 600 alarms will be 
 created.



 Problems:

 * The first problem is, when the number of meters increases, the 
 number of alarms will be multiplied. For example, customer may want 
 alerts on disk and network IO rates, and if we do that, there will be 
 4*3=12 alarms for each VM.

 * The second problem is, when one VM is created, multiple alarms will 
 be created, meaning multiple http requests will be fired. In the case 
 above, 6 HTTP requests will be needed once a VM is created. And this 
 number also increases as the number of meters goes up.
 
 One way of reducing both the number of alarms and the volume of notifications 
 would be to group related VMs, if such a concept exists in your use-case.
 
 This is effectively how Heat autoscaling uses ceilometer, alarming on the 
 average of some statistic over a set of instances (as opposed to triggering 
 on individual instances).
 
 The VMs could be grouped by setting user-metadata of form:
 
   nova boot ... --meta metering.my_server_group=foobar
 
 Any user-metadata prefixed with 'metering.' will be preserved by ceilometer 
 in the resource_metadata.user_metadata stored for each sample, so that it can 
 be used to select the statistics on which the alarm is based, e.g.
 
   ceilometer alarm-threshold-create --name cpu_high_foobar \
 --description 'warning: foobar instance group running hot' \
 --meter-name cpu_util --threshold 70.0 \
 --comparison-operator gt --statistic avg \
 ...
 --query metadata.user_metadata.my_server_group=foobar
 
 This approach is of course predicated on there being some natural 
 grouping relation between instances in your environment.
 
 Cheers,
 Eoghan
 
 
 =



 Does anyone have any suggestions?







 Best Regards!

 Kurt Rao


 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Andrew Laski


On 12/12/2014 09:50 AM, Russell Bryant wrote:

On 12/11/2014 12:55 PM, Andrew Laski wrote:

Cells can handle a single API on top of globally distributed DCs.  I
have spoken with a group that is doing exactly that.  But it requires
that the API is a trusted part of the OpenStack deployments in those
distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a if you can make it
work, good for you, or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.


I do consider this to be out of scope for cells, for at least the medium 
term as you've said.  There is additional complexity in making that a 
supported configuration that is not being addressed in the cells 
effort.  I am just making the statement that this is something cells 
could address if desired, and therefore doesn't need an additional solution.



For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.






Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Anna Kamyshnikova
Thanks everyone for sharing your opinions! I will create a separate change
with the other option that was suggested.

Yes, I'm currently working on this bug
https://bugs.launchpad.net/neutron/+bug/1194579.



On Fri, Dec 12, 2014 at 2:41 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 12/12/14 00:05, Mathieu Gagné wrote:
  We recently had an issue in production where a user had 2
  default security groups (for reasons we have yet to identify).

 This is probably the result of the race condition that is discussed in
 the thread: https://bugs.launchpad.net/neutron/+bug/1194579
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

 iQEcBAEBCgAGBQJUitR/AAoJEC5aWaUY1u57VggIALzdTLHnO7Fr8gKlWPS7Uu+o
 Su9KV41td8Epzs3pNsGYkH2Kz4T5obAneCORUiZl7boBpAJcnMm3Jt9K8YnTCVUy
 t4AbfIxSrTD7drHf3HoMoNEDrSntdnpTHoGpG+idNpFjc0kjBjm81W3y14Gab0k5
 5Mw/jV8mdnB6aRs5Zhari50/04X8SZeDpQNgBHL5kY40CZ+sUtS4C8OKfj7OEAuW
 LNmkHgDAtwewbNdluntbSdLGVjyl/F9s+21HoajqBcGNhH8ZHpAr4hphMbZv8lBY
 iAD2tztxvkacYaGduBFh6bewxVNGaUJBWmmc2xqHAXXbDP3d9aOk5q0wHK3SPQY=
 =TDwc
 -END PGP SIGNATURE-



Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-12 Thread David Lyle
Not entirely sure why they both exist either.

So by move, you meant override (nuance). That's different and I have no
issue with that.

I'm also fine with attempting to consolidate _conf and _scripts.

David

On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran tqt...@us.ibm.com wrote:


 It would not create a circular dependency: dashboard would depend on
 horizon, not the other way around.
 Scripts that are library specific will live in horizon while scripts that
 are panel specific will live in dashboard.
 Let me draw a more concrete example.

 In Horizon
 We know that _script and _conf are included in the base.html
 We create a _script and _conf placeholder file for project overrides
 (similar to _stylesheets and _header)
 In Dashboard
 We create a _script and _conf file with today's content
 It overrides the _script and _conf file in horizon
 Now we can include panel specific scripts without causing circular
 dependency.

 In fact, I would like to go further and suggest that _script and _conf be
 combine into a single file.
 Not sure why we need two places to include scripts.


 -David Lyle dkly...@gmail.com wrote: -
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 From: David Lyle dkly...@gmail.com
 Date: 12/11/2014 09:23AM
 Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to
 dashboard


 I'm probably not understanding the nuance of the question but moving the
 _scripts.html file to openstack_dashboard creates some circular
 dependencies, does it not? templates/base.html in the horizon side of the
 repo includes _scripts.html and insures that the javascript needed by the
 existing horizon framework is present.

 _conf.html seems like a better candidate for moving as it's more closely
 tied to the application code.

 David


 On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran tqt...@us.ibm.com wrote:

 Sorry for duplicate mail, forgot the subject.

 -Thai Q Tran/Silicon Valley/IBM wrote: -
 To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 From: Thai Q Tran/Silicon Valley/IBM
 Date: 12/10/2014 03:37PM
 Subject: Moving _conf and _scripts to dashboard

 The way we are structuring our javascripts today is complicated. All of
 our static javascripts reside in /horizon/static and are imported through
 _conf.html and _scripts.html. Notice that there are already some panel
 specific javascripts like: horizon.images.js, horizon.instances.js,
 horizon.users.js. They do not belong in horizon. They belong in
 openstack_dashboard because they are specific to a panel.

 Why am I raising this issue now? In Angular, we need controllers written
 in javascript for each panel. As we angularize more and more panels, we
 need to store them in a way that makes sense. To me, it makes sense for us to
 move _conf and _scripts to openstack_dashboard. Or if this is not possible,
 then provide a mechanism to override them in openstack_dashboard.

 Thoughts?
 Thai





Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-12 Thread Doug Wiegley
Hi all,

Neutron grenade jobs have been failing since late afternoon Thursday, due to 
split fallout.  Armando has a fix, and it's working its way through the gate:

https://review.openstack.org/#/c/141256/

Get your rechecks ready!

Thanks,
Doug


From: Douglas Wiegley do...@a10networks.com
Date: Wednesday, December 10, 2014 at 10:29 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Services are now split out and neutron 
is open for commits!

Hi all,

I’d like to echo the thanks to all involved, and thanks for the patience during 
this period of transition.

And a logistical note: if you have any outstanding reviews against the now 
missing files/directories (db/{loadbalancer,firewall,vpn}, services/, or 
tests/unit/services), you must re-submit your review against the new repos.  
Existing neutron reviews for service code will be summarily abandoned in the 
near future.

Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll have that 
branch merged in the morning, and ping in channel when it’s ready for 
submissions.

Finally, if any tempest lovers want to take a crack at splitting the tempest 
runs into four, perhaps using salv’s reviews of splitting them in two as a 
guide, and then creating jenkins jobs, we need some help getting those going.  
Please ping me directly (IRC: dougwig).

Thanks,
doug


From: Kyle Mestery mest...@mestery.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, December 10, 2014 at 4:10 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] Services are now split out and neutron is 
open for commits!

Folks, just a heads up that we have completed splitting out the services 
(FWaaS, LBaaS, and VPNaaS) into separate repositories. [1] [2] [3]. This was all 
done in accordance with the spec approved here [4]. Thanks to all involved, but 
a special thanks to Doug and Anita, as well as infra. Without all of their work 
and help, this wouldn't have been possible!

Neutron and the services repositories are now open for merges again. We're 
going to be landing some major L3 agent refactoring across the 4 repositories 
in the next four days, look for Carl to be leading that work with the L3 team.

In the meantime, please report any issues you have in launchpad [5] as bugs, 
and find people in #openstack-neutron or send an email. We've verified things 
come up and all the tempest and API tests for basic neutron work fine.

In the coming week, we'll be getting all the tests working for the services 
repositories. Medium term, we need to also move all the advanced services 
tempest tests out of tempest and into the respective repositories. We also need 
to beef these tests up considerably, so if you want to help out on a critical 
project for Neutron, please let me know.

Thanks!
Kyle

[1] http://git.openstack.org/cgit/openstack/neutron-fwaas
[2] http://git.openstack.org/cgit/openstack/neutron-lbaas
[3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
[4] 
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
[5] https://bugs.launchpad.net/neutron


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-12 Thread Kyle Mestery
This has merged now, FYI.

On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley do...@a10networks.com
wrote:

  Hi all,

  Neutron grenade jobs have been failing since late afternoon Thursday,
 due to split fallout.  Armando has a fix, and it’s working it’s way through
 the gate:

  https://review.openstack.org/#/c/141256/

  Get your rechecks ready!

  Thanks,
 Doug


   From: Douglas Wiegley do...@a10networks.com
 Date: Wednesday, December 10, 2014 at 10:29 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] Services are now split out and
 neutron is open for commits!

   Hi all,

  I’d like to echo the thanks to all involved, and thanks for the patience
 during this period of transition.

  And a logistical note: if you have any outstanding reviews against the
 now missing files/directories (db/{loadbalancer,firewall,vpn}, services/,
 or tests/unit/services), you must re-submit your review against the new
 repos.  Existing neutron reviews for service code will be summarily
 abandoned in the near future.

  Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll
 have that branch merged in the morning, and ping in channel when it’s ready
 for submissions.

  Finally, if any tempest lovers want to take a crack at splitting the
 tempest runs into four, perhaps using salv’s reviews of splitting them in
 two as a guide, and then creating jenkins jobs, we need some help getting
 those going.  Please ping me directly (IRC: dougwig).

  Thanks,
 doug


   From: Kyle Mestery mest...@mestery.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, December 10, 2014 at 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] Services are now split out and neutron
 is open for commits!

   Folks, just a heads up that we have completed splitting out the
 services (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3].
 This was all done in accordance with the spec approved here [4]. Thanks to
 all involved, but a special thanks to Doug and Anita, as well as infra.
 Without all of their work and help, this wouldn't have been possible!

 Neutron and the services repositories are now open for merges again. We're
 going to be landing some major L3 agent refactoring across the 4
 repositories in the next four days, look for Carl to be leading that work
 with the L3 team.

  In the meantime, please report any issues you have in launchpad [5] as
 bugs, and find people in #openstack-neutron or send an email. We've
 verified things come up and all the tempest and API tests for basic neutron
 work fine.

 In the coming week, we'll be getting all the tests working for the
 services repositories. Medium term, we need to also move all the advanced
 services tempest tests out of tempest and into the respective repositories.
 We also need to beef these tests up considerably, so if you want to help
 out on a critical project for Neutron, please let me know.

 Thanks!
 Kyle

 [1] http://git.openstack.org/cgit/openstack/neutron-fwaas
 [2] http://git.openstack.org/cgit/openstack/neutron-lbaas
 [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
 [5] https://bugs.launchpad.net/neutron



[openstack-dev] Questions regarding Functional Testing (Paris Summit)

2014-12-12 Thread Sean Toner
Hi everyone,

I have been reading the etherpad from the Paris summit wrt moving the
functional tests into their respective projects 
(https://etherpad.openstack.org/p/kilo-crossproject-move-func-tests-to-projects).
I am mostly interested in this from the nova project 
perspective. However, I still have a lot of questions.

For example, is it permissible (or a good idea) to use the python-
*clients as a library for the tasks?  I know these were not allowed in 
Tempest, but I don't see why they couldn't be used here (especially 
since, AFAIK, there is no testing done on the SDK clients themselves).

Another question is also about a difference between Tempest and these 
new functional tests.  In nova's case, it would be very useful to 
actually utilize the libvirt library in order to touch the hypervisor 
itself.  In Tempest, it's not allowed to do that.  Would it make sense 
to be able to make calls to libvirt within a nova functional test?
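
Purely as a sketch of the kind of test being discussed (the credentials,
image/flavor ids and the final assertion are illustrative placeholders, not an
agreed pattern):

import libvirt
from novaclient import client as nova_client


def test_booted_instance_has_a_libvirt_domain():
    nova = nova_client.Client("2", "admin", "secret", "demo",
                              "http://controller:5000/v2.0")
    server = nova.servers.create(name="func-test-vm",
                                 image="<image-id>", flavor="<flavor-id>")
    # ... wait for the server to reach ACTIVE here ...

    # Tempest stops at the public API; a functional test could be allowed
    # to look under the hood at the hypervisor itself.
    conn = libvirt.open("qemu:///system")
    domain_uuids = [dom.UUIDString() for dom in conn.listAllDomains()]
    assert server.id in domain_uuids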

Basically, since Tempest was a public only library, there needs to be 
a different set of rules as to what can and can't be done.  Even the 
definition of what exactly a functional test is should be more clearly 
stated.  

For example, I have been working on a project for some nova tests that 
also use the glance and keystone clients (since I am using the python 
SDK clients).  I saw this quote from the etherpad:

Many api tests in Tempest require more than one service (eg, nova 
api tests require glance)

Is this an API test or an integration test or a functional test? 
sounds to me like cross project integration tests +1+1

I would disagree that a functional test should belong to only one 
project.  IMHO, a functional test is essentially a black box test that 
might span one or more projects, though the projects should be related.  
For example, I have worked on one of the new features where the config 
drive image property is set in the glance image itself, rather than 
specified during the nova boot call.  

I believe that's how a functional test can be defined: a black box test 
which may require looking under the hood in a way that Tempest does not allow.

Has there been any other work or thoughts on how functional testing 
should be done?

Thanks,
Sean Toner



Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Maru Newby

On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:

 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
 On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:
 
 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
 I'm generally in favor of making name attributes opaque, utf-8
 strings that
 are entirely user-defined and have no constraints on them. I
 consider the
 name to be just a tag that the user places on some resource. It
 is the
 resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some*
 resources,
 including the security group name.
 
 End of the day, it's probably just a personal preference whether
 names
 should be unique to a tenant/user or not.
 
 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if
 Neutron
 needed to ensure that there was one and only one default security
 group for
 a tenant, that a way to accomplish such a thing in a race-free
 way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the
 pastebin on
 the review above.
 
 
 I agree with Jay.  We should not care about how a user names the
 resource.
 There other ways to prevent this race and Jay’s suggestion is a
 good one.
 
 However we should open a bug against Horizon because the user
 experience there
 is terrible with duplicate security group names.
 
 The reason security group names are unique is that the ec2 api
 supports source
 rule specifications by tenant_id (user_id in amazon) and name, so
 not enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.
 
 So we should couple our API evolution to EC2 API then?
 
 -jay
 
 No I was just pointing out the historical reason for uniqueness, and
 hopefully
 encouraging someone to find the best behavior for the ec2 api if we
 are going
 to keep the incompatibility there. Also I personally feel the ux is
 better
 with unique names, but it is only a slight preference.
 
 Sorry for snapping, you made a fair point.
 
 Yeh, honestly, I agree with Vish. I do feel that the UX of that
 constraint is useful. Otherwise you get into having to show people UUIDs
 in a lot more places. While those are good for consistency, they are
 kind of terrible to show to people.

While there is a good case for the UX of unique names - it also makes 
orchestration via tools like puppet a heck of a lot simpler - the fact is that 
most OpenStack resources do not require unique names.  That being the case, why 
would we want security groups to deviate from this convention?


Maru





Re: [openstack-dev] [horizon] REST and Django

2014-12-12 Thread Thai Q Tran
In your previous example, you are posting to a certain URL (i.e.
/keystone/{ver:=x.0}/{method:=update}).

client: POST /keystone/{ver:=x.0}/{method:=update} = middleware: just forward
to clients[ver].getattr("method")(**kwargs) = keystone: update

Correct me if I'm wrong, but it looks like you have a unique URL for each
/service/version/method. I fail to see how that is different than what we have
today? Is there a view for each service? Each version?

Let's say for argument's sake that you have a single view that takes care of
all URL routing. All requests pass through this view and contain a JSON that
contains instructions on which API to invoke and what parameters to pass. And
let's also say that you wrote some code that uses reflection to map the JSON
to an action. What you end up with is a client-centric application, where all
of the logic resides client-side. If there are things we want to accomplish
server-side, it will be extremely hard to pull off: things like caching,
websockets, aggregation, batch actions, translation, etc. What you end up with
is a client with no help from the server.

Obviously the other extreme is what we have today, where we do everything
server-side and only use the client side for binding events. I personally
prefer a more balanced approach where we can leverage both the server and the
client. There are things that the client can do well, and there are things
that the server can do well. Going the RPC way restricts us to just client
technologies and may hamper any additional future functionality we want to
bring server-side. In other words, using REST over RPC gives us the
opportunity to use server-side technologies to help solve problems should the
need for it arise.

I would also argue that the REST approach is NOT what we have today. What we
have today is a static webpage that is generated server-side, where the API is
hidden from the client. What we end up with using the REST approach is a
dynamic webpage generated client-side, two very different things. We have
essentially stripped out the rendering logic from Django templating and
replaced it with Angular.

-Tihomir Trifonov t.trifo...@gmail.com wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 
openstack-dev@lists.openstack.org
From: Tihomir Trifonov t.trifo...@gmail.com
Date: 12/12/2014 04:53AM
Subject: Re: [openstack-dev] [horizon] REST and Django

 Here's an example: Admin user Joe has a Domain open and stares at it for 15
 minutes while he updates the description. Admin user Bob is asked to go
 ahead and enable it. He opens the record, edits it, and then saves it. Joe
 finishes perfecting the description and saves it. Doing this action would
 mean that the Domain is enabled and the description gets updated. Last man
 in still wins if he updates the same fields, but if they update different
 fields then both of their changes will take effect without them stomping on
 each other. Whether that is good or bad may depend on the situation.

That's a great example. I believe that all of the Openstack APIs support PATCH
updates of arbitrary fields. This way the frontend (AngularJS) can detect
which fields are being modified, and submit only these fields for update. If
we however use a form with POST, although we will load the object before
updating it, the middleware cannot find which fields are actually modified,
and will update them all, which is more like what PUT should do. Thus, having
full control in the frontend part, we can submit only changed fields.

If however a service API doesn't support PATCH, it is actually a problem in
the API and not in the client...

 The service API documentation almost always lags (although, helped by specs
 now) and the service team takes on the burden of exposing a programmatic way
 to access the API. This is tested and easily consumable via the python
 clients, which removes some guesswork from using the service.

True. But what if the service team modifies a method signature from, let's say:

def add_something(self, request, field1, field2):

to

def add_something(self, request, field1, field2, field3):

and in the middleware we have the old signature:

def add_something(self, request, field1, field2):

we still need to modify the middleware to add the new field. If however the
middleware is transparent and just passes **kwargs, it will pass through
whatever the frontend sends. So we just need to update the frontend, which can
be done using custom views, and not necessarily by going through an upstream
change. My point is: why do we need to hide some features of the backend
service API behind a "firewall", which is what the middleware in fact is?

On Fri, Dec 12, 2014 at 8:08 AM, Tripp, Travis S travis.tr...@hp.com wrote:

I just re-read and I apologize for the hastily written email I previously
sent. I'll try to salvage it with a bit of a revision below (please ignore
the previous email).

On 12/11/14, 7:02 PM, "Tripp, Travis S" travis.tr...@hp.com wrote
(REVISED):

Tihomir,

Your comments in the patch were very helpful for me to 
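
As a rough illustration of the "transparent pass-through" idea debated in this
thread (the URL scheme, the get_client() factory and the update() call below
are hypothetical -- a sketch, not actual Horizon code):

import json

from django.http import JsonResponse
from django.views.generic import View


def get_client(service, version, request):
    # Hypothetical factory: would return the appropriate python-*client for
    # the requested service/version, authenticated from the request.
    raise NotImplementedError


class ServiceProxyView(View):
    # Forward whatever fields the frontend PATCHes straight to the client.
    def patch(self, request, service, version, obj_id):
        client = get_client(service, version, request)
        changes = json.loads(request.body)  # only the fields that changed
        # The middleware does not need to know the full signature of the
        # underlying call; it just passes the changed fields through.
        updated = client.update(obj_id, **changes)
        return JsonResponse(updated)  # assuming the client returns a dict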

Re: [openstack-dev] FW: [horizon] [ux] Changing how the modals are closed in Horizon

2014-12-12 Thread Timur Sufiev
It seems to me that the consensus on keeping the simpler approach -- to
make Bootstrap data-backdrop=static as the default behavior -- has been
reached. Am I right?

On Thu, Dec 4, 2014 at 10:59 PM, Kruithof, Piet pieter.c.kruithof...@hp.com
 wrote:

 My preference would be “change the default behavior to 'static’” for the
 following reasons:

 - There are plenty of ways to close the modal, so there’s not really a
 need for this feature.
 - There are no visual cues, such as an “X” or a Cancel button, that
 selecting outside of the modal closes it.
 - Downside is losing all of your data.

 My two cents…

 Begin forwarded message:

 From: Rob Cresswell (rcresswe) rcres...@cisco.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: December 3, 2014 at 5:21:51 AM PST
 Subject: Re: [openstack-dev] [horizon] [ux] Changing how the modals are
 closed in Horizon
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 

 +1 to changing the behaviour to ‘static'. Modal inside a modal is
 potentially slightly more useful, but looks messy and inconsistent, which I
 think outweighs the functionality.

 Rob


 On 2 Dec 2014, at 12:21, Timur Sufiev tsuf...@mirantis.com wrote:

 Hello, Horizoneers and UX-ers!

 The default behavior of modals in Horizon (defined in turn by Bootstrap
 defaults) regarding their closing is to simply close the modal once user
 clicks somewhere outside of it (on the backdrop element below and around
 the modal). This is not very convenient for the modal forms containing a
 lot of input - when it is closed without a warning all the data the user
 has already provided is lost. Keeping this in mind, I've made a patch [1]
 changing default Bootstrap 'modal_backdrop' parameter to 'static', which
 means that forms are not closed once the user clicks on a backdrop, while
 it's still possible to close them by pressing 'Esc' or clicking on the 'X'
 link at the top right border of the form. Also the patch [1] allows to
 customize this behavior (between 'true'-current one/'false' - no backdrop
 element/'static') on a per-form basis.

 What I didn't know at the moment I was uploading my patch is that David
 Lyle had been working on a similar solution [2] some time ago. It's a bit
 more elaborate than mine: if the user has already filled some some inputs
 in the form, then a confirmation dialog is shown, otherwise the form is
 silently dismissed as it happens now.

 The whole point of writing about this in the ML is to gather opinions
 which approach is better:
 * stick to the current behavior;
 * change the default behavior to 'static';
 * use the David's solution with confirmation dialog (once it'll be rebased
 to the current codebase).

 What do you think?

 [1] https://review.openstack.org/#/c/113206/
 [2] https://review.openstack.org/#/c/23037/

 P.S. I remember that I promised to write this email a week ago, but better
 late than never :).

 --
 Timur Sufiev



-- 
Timur Sufiev


Re: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers

2014-12-12 Thread Mike Bayer

 On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Reading the latest comments at
 https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the
 issue is not to be solved in drivers themselves but instead in
 libraries that arrange connections (sqlalchemy/oslo.db), correct?
 
 Will the proposed connection reopening help?

disagree, this is absolutely a driver bug.  I’ve re-read that last comment and 
now I see that the developer is suggesting that this condition not be flagged 
in any way, so I’ve responded.  The connection should absolutely blow up and if 
it wants to refuse to be usable afterwards, that’s fine (it’s the same as 
MySQLdb “commands out of sync”).  It just has to *not* emit any further SQL as 
though nothing is wrong.

It doesn’t matter much for PyMySQL anyway, I don’t know that PyMySQL is up to 
par for openstack in any case (look at the entries in their changelog: 
https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG Several other bug 
fixes”, “Many bug fixes- really?  is this an iphone app?)

We really should be looking to get this fixed in MySQL-connector, which seems 
to have a similar issue.   It’s just so difficult to get responses from 
MySQL-connector that the PyMySQL thread is at least informative.





 
 /Ihar
 
 On 05/12/14 23:43, Mike Bayer wrote:
 Hey list -
 
 I’m posting this here just to get some ideas on what might be
 happening here, as it may or may not have some impact on Openstack
 if and when we move to MySQL drivers that are async-patchable, like
 MySQL-connector or PyMySQL.  I had a user post this issue a few
 days ago which I’ve since distilled into test cases for PyMySQL and
 MySQL-connector separately.   It uses gevent, not eventlet, so I’m
 not really sure if this applies.  But there’s plenty of very smart
 people here so if anyone can shed some light on what is actually
 happening here, that would help.
 
 The program essentially illustrates code that performs several
 steps upon a connection, however if the greenlet is suddenly
 killed, the state from the connection, while damaged, is still
 being allowed to continue on in some way, and what’s
 super-catastrophic here is that you see a transaction actually
 being committed *without* all the statements proceeding on it.
 
 In my work with MySQL drivers, I’ve noted for years that they are
 all very, very bad at dealing with concurrency-related issues.  The
 whole “MySQL has gone away” and “commands out of sync” errors are
 ones that we’ve all just drowned in, and so often these are due to
 the driver getting mixed up due to concurrent use of a connection.
 However this one seems more insidious.   Though at the same time,
 the script has some complexity happening (like a simplistic
 connection pool) and I’m not really sure where the core of the
 issue lies.
 
 The script is at
 https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below.
 If you run it for a few seconds, go over to your MySQL command line
 and run this query:
 
 SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a)
 ORDER BY a_id DESC;
 
 and what you’ll see is tons of rows in table_b where the “a_id” is
 zero (because cursor.lastrowid fails), but the *rows are
 committed*.   If you read the segment of code that does this, it
 should be impossible:
 
 connection = pool.get()
 rowid = execute_sql(
     connection,
     "INSERT INTO table_a (data) VALUES (%s)", ("a",)
 )
 
 gevent.sleep(random.random() * 0.2)
 try:
     execute_sql(
         connection,
         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",)
     )
     connection.commit()
     pool.return_conn(connection)
 except Exception:
     connection.rollback()
     pool.return_conn(connection)
 
 so if the gevent.sleep() throws a timeout error, somehow we are
 getting thrown back in there, with the connection in an invalid
 state, but not invalid enough to commit.
 
 If a simple check for “SELECT connection_id()” is added, this query
 fails and the whole issue is prevented.  Additionally, if you put a
 foreign key constraint on that b_table.a_id, then the issue is
 prevented, and you see that the constraint violation is happening
 all over the place within the commit() call.   The connection is
 being used such that its state just started after the
 gevent.sleep() call.
 
 Now, there’s also a very rudimental connection pool here.   That is
 also part of what’s going on.  If i try to run without the pool,
 the whole script just runs out of connections, fast, which suggests
 that this gevent timeout cleans itself up very, very badly.
 However, SQLAlchemy’s pool works a lot like this one, so if folks
 here can tell me if the connection pool is doing something bad,
 then that’s key, because I need to make a comparable change in
 SQLAlchemy’s pool.   Otherwise I worry our eventlet use could have
 big problems under high load.
 
 
 
 
 
 # -*- coding: utf-8 -*- import gevent.monkey 
 gevent.monkey.patch_all()
 
 import 

Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Ivar Lazzaro
In general, I agree with Jay about the opaqueness of the names. I see,
however, good reasons for having user-defined unique attributes (see
Clint's point about idempotency).
A middle ground here could be granting users the ability to specify
the resource ID.
A similar proposal was made some time ago by Eugene [0]


[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046150.html

On Thu, Dec 11, 2014 at 6:59 AM, Mark McClain m...@mcclain.xyz wrote:


 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:

 I'm generally in favor of making name attributes opaque, utf-8 strings
 that are entirely user-defined and have no constraints on them. I consider
 the name to be just a tag that the user places on some resource. It is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if Neutron
 needed to ensure that there was one and only one default security group for
 a tenant, that a way to accomplish such a thing in a race-free way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the pastebin
 on the review above.


 I agree with Jay.  We should not care about how a user names the
 resource.  There other ways to prevent this race and Jay’s suggestion is a
 good one.

 mark





Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-12 Thread Lin Hua Cheng
Consolidating them would break it for users that have customization and
extension on the two templates.

-Lin

On Fri, Dec 12, 2014 at 9:20 AM, David Lyle dkly...@gmail.com wrote:

 Not entirely sure why they both exist either.

 So by move, you meant override (nuance). That's different and I have no
 issue with that.

 I'm also fine with attempting to consolidate _conf and _scripts.

 David

 On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran tqt...@us.ibm.com wrote:


 It would not create a circular dependency, dashboard would depend on
 horizon - not the latter.
 Scripts that are library specific will live in horizon while scripts that
 are panel specific will live in dashboard.
 Let me draw a more concrete example.

 In Horizon
 We know that _script and _conf are included in the base.html
 We create a _script and _conf placeholder file for project overrides
 (similar to _stylesheets and _header)
 In Dashboard
 We create a _script and _conf file with today's content
 It overrides the _script and _conf file in horizon
 Now we can include panel specific scripts without causing circular
 dependency.

 In fact, I would like to go further and suggest that _script and _conf be
 combine into a single file.
 Not sure why we need two places to include scripts.


 -David Lyle dkly...@gmail.com wrote: -
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 From: David Lyle dkly...@gmail.com
 Date: 12/11/2014 09:23AM
 Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to
 dashboard


 I'm probably not understanding the nuance of the question but moving the
 _scripts.html file to openstack_dashboard creates some circular
 dependencies, does it not? templates/base.html in the horizon side of the
 repo includes _scripts.html and insures that the javascript needed by the
 existing horizon framework is present.

 _conf.html seems like a better candidate for moving as it's more closely
 tied to the application code.

 David


 On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran tqt...@us.ibm.com wrote:

 Sorry for duplicate mail, forgot the subject.

 -Thai Q Tran/Silicon Valley/IBM wrote: -
 To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 From: Thai Q Tran/Silicon Valley/IBM
 Date: 12/10/2014 03:37PM
 Subject: Moving _conf and _scripts to dashboard

 The way we are structuring our javascripts today is complicated. All of
 our static javascripts reside in /horizon/static and are imported through
 _conf.html and _scripts.html. Notice that there are already some panel
 specific javascripts like: horizon.images.js, horizon.instances.js,
 horizon.users.js. They do not belong in horizon. They belong in
 openstack_dashboard because they are specific to a panel.

 Why am I raising this issue now? In Angular, we need controllers written
 in javascript for each panel. As we angularize more and more panels, we
 need to store them in a way that makes sense. To me, it makes sense for us to
 move _conf and _scripts to openstack_dashboard. Or if this is not possible,
 then provide a mechanism to override them in openstack_dashboard.

 Thoughts?
 Thai



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Russell Bryant
On 12/12/2014 11:06 AM, Andrew Laski wrote:
 
 On 12/12/2014 09:50 AM, Russell Bryant wrote:
 On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.
 And the way the rest of the components fit into that scenario is far
 from clear to me.  Do you consider this more of a “if you can make it
 work, good for you”, or something we should aim to be more generally
 supported over time?  Personally, I see the globally distributed
 OpenStack under a single API case much more complex, and worth
 considering out of scope for the short to medium term, at least.
 
 I do consider this to be out of scope for cells, for at least the medium
 term as you've said.  There is additional complexity in making that a
 supported configuration that is not being addressed in the cells
 effort.  I am just making the statement that this is something cells
 could address if desired, and therefore doesn't need an additional
 solution.

OK, great.  Thanks for the clarification.  I think we're on the same
page.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Joe Gordon
On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/11/2014 12:55 PM, Andrew Laski wrote:
  Cells can handle a single API on top of globally distributed DCs.  I
  have spoken with a group that is doing exactly that.  But it requires
  that the API is a trusted part of the OpenStack deployments in those
  distributed DCs.

 And the way the rest of the components fit into that scenario is far
 from clear to me.  Do you consider this more of a “if you can make it
 work, good for you”, or something we should aim to be more generally
 supported over time?  Personally, I see the globally distributed
 OpenStack under a single API case much more complex, and worth
 considering out of scope for the short to medium term, at least.

 For me, this discussion boils down to ...

 1) Do we consider these use cases in scope at all?

 2) If we consider it in scope, is it enough of a priority to warrant a
 cross-OpenStack push in the near term to work on it?

 3) If yes to #2, how would we do it?  Cascading, or something built
 around cells?

 I haven't worried about #3 much, because I consider #2 or maybe even #1
 to be a show stopper here.


Agreed



 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-12 Thread Dmitry Pyzhov
We have a high priority bug in 6.0:
https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

Our OpenStack services have been sending logs in a strange format, with an extra
copy of the timestamp and log level:
== ./neutron-metadata-agent.log ==
2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
neutron.common.config [-] Logging enabled!

And we have a workaround for this: we hide the extra timestamp and use the
second log level.

In Juno, some services have updated oslo.logging and now send logs in a
simple format:
== ./nova-api.log ==
2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
/etc/nova/api-paste.ini

In order to keep backward compatibility and deal with both formats, we have
a dirty workaround for our workaround:
https://review.openstack.org/#/c/141450/

As I see it, our best choice here is to throw away all the workarounds and show
logs in the UI as is. If a service sends duplicated data, we should show the
duplicated data.

The long-term fix here is to update oslo.logging in all packages. We can do it
in 6.1.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-12 Thread Joe Gordon
On Fri, Dec 12, 2014 at 1:48 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

 From my point of view it is not advisable to base some functionalities
 of the instances on direct calls to Openstack API. This for 2 main
 reasons, the first one: if the Openstack code changes (and we know
 Openstack code does change) it will be required to change the code of
 the software running in the instance too; the second one: if in the
 future one wants to pass to another cloud infrastructure it will be more
 difficult to achieve it.



Thoughts on your two reasons:

1) What happens if OpenStack code changes?

While OpenStack is under very active development we have stable APIs,
especially around something like booting an instance. So the API call to
boot an instance with a specific image *should not* change as you upgrade
OpenStack (unless we deprecate an API, but this will be a slow multi year
process).

2) if in the future one wants to pass to another cloud infrastructure it
will be more difficult to achieve it.

Why not use something like Apache jclouds to make this easier?
https://jclouds.apache.org/






 On 12/12/14 01:20, Joe Gordon wrote:
  On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca 
  pasquale.porr...@dektech.com.au wrote:
 
Well, one of the main reason to choose an open source product is to
 avoid
   vendor lock-in. I think it is not
   advisable to embed in the software running in an instance a call to
   OpenStack specific services.
  
  I'm sorry I don't follow the logic here, can you elaborate.
 
 

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-12 Thread Thai Q Tran
As is the case with anything we change, but that should not stop us from making
improvements/progress. I would argue that it would make life easier for them
since all scripts are now in one place.

-Lin Hua Cheng os.lch...@gmail.com wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
From: Lin Hua Cheng os.lch...@gmail.com
Date: 12/12/2014 10:28AM
Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

Consolidating them would break it for users that have customization and
extension on the two templates.

-Lin

On Fri, Dec 12, 2014 at 9:20 AM, David Lyle dkly...@gmail.com wrote:
[...]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers

2014-12-12 Thread Doug Hellmann

On Dec 12, 2014, at 1:16 PM, Mike Bayer mba...@redhat.com wrote:

 
 On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Reading the latest comments at
 https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the
 issue is not to be solved in drivers themselves but instead in
 libraries that arrange connections (sqlalchemy/oslo.db), correct?
 
 Will the proposed connection reopening help?
 
 disagree, this is absolutely a driver bug.  I’ve re-read that last comment 
 and now I see that the developer is suggesting that this condition not be 
 flagged in any way, so I’ve responded.  The connection should absolutely blow 
 up and if it wants to refuse to be usable afterwards, that’s fine (it’s the 
 same as MySQLdb “commands out of sync”).  It just has to *not* emit any 
 further SQL as though nothing is wrong.
 
 It doesn’t matter much for PyMySQL anyway, I don’t know that PyMySQL is up to 
 par for openstack in any case (look at the entries in their changelog: 
 https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG “Several other bug 
 fixes”, “Many bug fixes” - really?  is this an iphone app?)

This does make me a little concerned about merging 
https://review.openstack.org/#/c/133962/ so I’ve added a -2 for the time being 
to let the discussion go on here.

Doug


 
 We really should be looking to get this fixed in MySQL-connector, which seems 
 to have a similar issue.   It’s just so difficult to get responses from 
 MySQL-connector that the PyMySQL thread is at least informative.
 
 
 
 
 
 
 /Ihar
 
 On 05/12/14 23:43, Mike Bayer wrote:
 Hey list -
 
 I’m posting this here just to get some ideas on what might be
 happening here, as it may or may not have some impact on Openstack
 if and when we move to MySQL drivers that are async-patchable, like
 MySQL-connector or PyMySQL.  I had a user post this issue a few
 days ago which I’ve since distilled into test cases for PyMySQL and
 MySQL-connector separately.   It uses gevent, not eventlet, so I’m
 not really sure if this applies.  But there’s plenty of very smart
 people here so if anyone can shed some light on what is actually
 happening here, that would help.
 
 The program essentially illustrates code that performs several
 steps upon a connection, however if the greenlet is suddenly
 killed, the state from the connection, while damaged, is still
 being allowed to continue on in some way, and what’s
 super-catastrophic here is that you see a transaction actually
 being committed *without* all the statements proceeding on it.
 
 In my work with MySQL drivers, I’ve noted for years that they are
 all very, very bad at dealing with concurrency-related issues.  The
 whole “MySQL has gone away” and “commands out of sync” errors are
 ones that we’ve all just drowned in, and so often these are due to
 the driver getting mixed up due to concurrent use of a connection.
 However this one seems more insidious.   Though at the same time,
 the script has some complexity happening (like a simplistic
 connection pool) and I’m not really sure where the core of the
 issue lies.
 
 The script is at
 https://gist.github.com/zzzeek/d196fa91c40cb515365e and also below.
 If you run it for a few seconds, go over to your MySQL command line
 and run this query:
 
 SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a)
 ORDER BY a_id DESC;
 
 and what you’ll see is tons of rows in table_b where the “a_id” is
 zero (because cursor.lastrowid fails), but the *rows are
 committed*.   If you read the segment of code that does this, it
 should be impossible:
 
 connection = pool.get()
 rowid = execute_sql(
     connection,
     "INSERT INTO table_a (data) VALUES (%s)", ("a",)
 )

 gevent.sleep(random.random() * 0.2)

 try:
     execute_sql(
         connection,
         "INSERT INTO table_b (a_id, data) VALUES (%s, %s)", (rowid, "b",)
     )
     connection.commit()
     pool.return_conn(connection)
 except Exception:
     connection.rollback()
     pool.return_conn(connection)
 
 so if the gevent.sleep() throws a timeout error, somehow we are
 getting thrown back in there, with the connection in an invalid
 state, but not invalid enough to commit.
 
 If a simple check for “SELECT connection_id()” is added, this query
 fails and the whole issue is prevented.  Additionally, if you put a
 foreign key constraint on that b_table.a_id, then the issue is
 prevented, and you see that the constraint violation is happening
 all over the place within the commit() call.   The connection is
 being used such that its state just started after the
 gevent.sleep() call.
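
(As an aside, for SQLAlchemy users the generic form of that kind of sanity check is the documented "pessimistic disconnect handling" recipe sketched below; it is not the connection_id() check from the gist, just the same idea expressed with an engine event. The connection URL is a placeholder.)

    from sqlalchemy import create_engine, event, exc, select

    engine = create_engine("mysql+pymysql://user:pw@localhost/test")

    @event.listens_for(engine, "engine_connect")
    def ping_connection(connection, branch):
        if branch:
            # Ignore sub-connections ("branches") of an already-checked connection.
            return
        try:
            # Cheap round trip; fails fast if the connection is broken.
            connection.scalar(select([1]))
        except exc.DBAPIError as err:
            if err.connection_invalidated:
                # The pool has discarded the stale connection; retrying the
                # SELECT transparently re-establishes a fresh one.
                connection.scalar(select([1]))
            else:
                raise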
 
 Now, there’s also a very rudimental connection pool here.   That is
 also part of what’s going on.  If i try to run without the pool,
 the whole script just runs out of connections, fast, which suggests
 that this gevent timeout cleans itself up very, very badly.
 However, SQLAlchemy’s pool works a lot like this one, so if folks
 here can tell me if the connection pool is doing something 

Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Kevin Benton
If we allow resource IDs to be set they will no longer be globally unique.
I'm not sure if this will impact anything directly right now, but it might
be something that impacts tools orchestrating multiple neutron deployments
(e.g. cascading, cells, etc).

On Fri, Dec 12, 2014 at 10:25 AM, Ivar Lazzaro ivarlazz...@gmail.com
wrote:

 In general, I agree with Jay about the opaqueness of the names. I see
 however good reasons for having user-defined unique attributes (see
  Clint's point about idempotency).
 A middle ground here could be granting to the users the ability to specify
 the resource ID.
 A similar proposal was made some time ago by Eugene [0]


 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046150.html

 On Thu, Dec 11, 2014 at 6:59 AM, Mark McClain m...@mcclain.xyz wrote:


 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:

 I'm generally in favor of making name attributes opaque, utf-8 strings
 that are entirely user-defined and have no constraints on them. I consider
 the name to be just a tag that the user places on some resource. It is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if Neutron
 needed to ensure that there was one and only one default security group for
 a tenant, that a way to accomplish such a thing in a race-free way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the pastebin
 on the review above.


 I agree with Jay.  We should not care about how a user names the
 resource.  There other ways to prevent this race and Jay’s suggestion is a
 good one.

 mark



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-12 Thread Anita Kuno
On 12/12/2014 03:28 AM, Deepak Shetty wrote:
 On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno ante...@anteaya.info wrote:
 
 On 12/11/2014 09:36 AM, Jon Bernard wrote:
 Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
 was marked as skipped, only the revert_resize test was failing.  I have
 submitted a patch to nova for this [1], and that yields an all green
 ceph ci run [2].  So at the moment, and with my revert patch, we're in
 good shape.

 I will fix up that patch today so that it can be properly reviewed and
 hopefully merged.  From there I'll submit a patch to infra to move the
 job to the check queue as non-voting, and we can go from there.

 [1] https://review.openstack.org/#/c/139693/
 [2]
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html

 Cheers,

 Please add the name of your CI account to this table:
 https://wiki.openstack.org/wiki/ThirdPartySystems

 As outlined in the third party CI requirements:
 http://ci.openstack.org/third_party.html#requirements

 Please post system status updates to your individual CI wikipage that is
 linked to this table.

 
 How is posting status there different than here :
 https://wiki.openstack.org/wiki/Cinder/third-party-ci-status
 
 thanx,
 deepak
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
There are over 100 CI accounts now and growing.

Searching the email archives to evaluate the status of a CI is not
something that infra will do; we will look on that wikipage or we will
check the third-party-announce email list (which all third party CI
systems should be subscribed to, as outlined in the third_party.html
page linked above).

If we do not find information where we have asked you to put it and where
we expect it, we may disable your system until you have fulfilled the
requirements as outlined in the third_party.html page linked above.

Sprinkling status updates amongst the emails posted to -dev and
expecting the infra team and other -devs to find them when needed is
unsustainable and has been for some time, which is why we came up with
the wikipage to aggregate them.

Please direct all further questions about this matter to one of the two
third-party meetings as linked above.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tempest - object_storage

2014-12-12 Thread Timur Nurlygayanov
Hi Anton,

it depends on the storage service which you use in this lab.
Tempest tests can fail, for example, if the configuration of your object
storage has some customizations. Also, you can change the tempest
configuration to adapt the tempest tests to your environment.

On my test OpenStack IceHouse cloud with Swift (HA, baremetal) these tests
are green, but other 'object_storage' tests failed:

-
tempest.api.object_storage.test_account_bulk.BulkTest.test_bulk_delete[gate]
-
tempest.api.object_storage.test_account_bulk.BulkTest.test_extract_archive[gate]
-
tempest.api.object_storage.test_account_bulk.BulkTest.test_bulk_delete_by_POST[gate]
-
tempest.api.object_storage.test_account_quotas_negative.AccountQuotasNegativeTest.test_user_modify_quota[gate,negative,smoke]
-
tempest.api.object_storage.test_container_quotas.ContainerQuotasTest.test_upload_large_object[gate,smoke]
-
tempest.api.object_storage.test_container_quotas.ContainerQuotasTest.test_upload_too_many_objects[gate,smoke]
-
tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_index
-
tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_listing_css
-
tempest.api.object_storage.test_container_staticweb.StaticWebTest.test_web_error
-
tempest.api.object_storage.test_crossdomain.CrossdomainTest.test_get_crossdomain_policy
-
tempest.api.object_storage.test_object_formpost.ObjectFormPostTest.test_post_object_using_form[gate]
-
tempest.api.object_storage.test_object_formpost_negative.ObjectFormPostNegativeTest.test_post_object_using_form_invalid_signature[gate]
-
tempest.api.object_storage.test_object_formpost_negative.ObjectFormPostNegativeTest.test_post_object_using_form_expired[gate,negative]
-
tempest.api.object_storage.test_object_services.PublicObjectTest.test_access_public_container_object_without_using_creds[gate,smoke]
-
tempest.api.object_storage.test_object_slo.ObjectSloTest.test_delete_large_object[gate]
-
tempest.api.object_storage.test_object_slo.ObjectSloTest.test_list_large_object_metadata[gate]
-
tempest.api.object_storage.test_object_slo.ObjectSloTest.test_retrieve_large_object[gate]
-
tempest.api.object_storage.test_object_slo.ObjectSloTest.test_upload_manifest[gate]
-
tempest.api.object_storage.test_object_temp_url.ObjectTempUrlTest.test_get_object_using_temp_url_with_inline_query_parameter[gate]
-
tempest.api.object_storage.test_object_temp_url_negative.ObjectTempUrlNegativeTest.test_get_object_after_expiration_time[gate,negative]
-
tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container[gate,smoke]




On Fri, Dec 12, 2014 at 6:10 PM, Anton Massoud anton.mass...@ericsson.com
wrote:

  Hi,



 We are running tempest on icehouse, in our tempest runs, we found that the
 below test cases are failing in random iterations but not always.




 tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_delete_matadata_key[gate,smoke]


 tempest.api.object_storage.test_container_services.ContainerTest.test_list_container_contents_with_path[gate,smoke]


 tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_after_expiry_time[gate]


 tempest.api.object_storage.test_object_expiry.ObjectExpiryTest.test_get_object_at_expiry_time[gate]


 tempest.api.object_storage.test_object_services.ObjectTest.test_object_upload_in_segments[gate]


 tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata[gate,smoke]


 tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_create_and_remove_metadata[gate,smoke]


 tempest.api.object_storage.test_object_services.ObjectTest.test_update_object_metadata_with_remove_metadata


 tempest.api.object_storage.test_account_services.AccountTest.test_update_account_metadata_with_create_and_delete_metadata[gate,smoke]


 tempest.api.object_storage.test_container_services.ContainerTest.test_create_container[gate,smoke]


 tempest.api.object_storage.test_container_services.ContainerTest.test_delete_container[gate,smoke]


 tempest.api.object_storage.test_object_services.ObjectTest.test_copy_object_2d_way[gate,smoke]





 Has anyone else experienced the same? Any idea what that may be caused by?



 /Anton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] localrc for mutli-node setup

2014-12-12 Thread Danny Choi (dannchoi)
Hi,

I would like to use devstack to deploy OpenStack on a multi-node setup,
i.e. separate Controller, Network and Compute nodes

What is the localrc for each node?

I would assume, for example, we don’t need to enable neutron service at the 
Controller node, etc…

Does anyone has the localrc file for each node type that can share?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Usage of the PATCH verb

2014-12-12 Thread Amit Gandhi
Hi

We are currently using PATCH in the Poppy API to update existing resources.  
However, we have recently had some discussions on how this should have been 
implemented.

I would like to get the advise of the Openstack Community and the API working 
group on how PATCH semantics should work.

The following RFC documents [1][2] (and a blog post [3]) advise using PATCH 
as follows:


2.1.  A Simple PATCH Example


   PATCH /file.txt HTTP/1.1
   Host: www.example.com
   Content-Type: application/example
   If-Match: e0023aa4e
   Content-Length: 100


[
  { "op": "test", "path": "/a/b/c", "value": "foo" },
  { "op": "remove", "path": "/a/b/c" },
  { "op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ] },
  { "op": "replace", "path": "/a/b/c", "value": 42 },
  { "op": "move", "from": "/a/b/c", "path": "/a/b/d" },
  { "op": "copy", "from": "/a/b/d", "path": "/a/b/e" }
]



Basically, the changes consist of an operation, the path in the json object to 
modify, and the new value.


The way we currently have it implemented is to submit just the changes, and the 
server applies the change to the resource.  This means passing entire lists to 
change etc.
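
(For concreteness, a small sketch of the two styles; the resource shape and field names below are made up for illustration and are not the actual Poppy schema.)

    # Style A (what we do today): send only the changed fields and let the
    # server merge them into the stored resource -- whole lists get resent.
    merge_body = {
        "caching": [{"name": "default", "ttl": 3600}],
    }

    # Style B (RFC 6902 JSON Patch): an explicit list of operations.
    patch_body = [
        {"op": "replace", "path": "/caching/0/ttl", "value": 3600},
        {"op": "add", "path": "/domains/-", "value": {"domain": "www.example.com"}},
    ]

    def apply_replace(doc, path, value):
        """Toy 'replace' handler (ignores RFC 6901 escaping) to show the semantics."""
        parts = [p for p in path.split("/") if p]
        target = doc
        for part in parts[:-1]:
            target = target[int(part)] if isinstance(target, list) else target[part]
        last = parts[-1]
        if isinstance(target, list):
            target[int(last)] = value
        else:
            target[last] = value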

I would like to hear some feedback from others on how PATCH should be 
implemented.

Thanks

Amit Gandhi
- Rackspace



[1] https://tools.ietf.org/html/rfc5789
[2] http://tools.ietf.org/html/rfc6902
[3] http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec reviews this week by the neutron-drivers team

2014-12-12 Thread Stefano Maffulli
I have adapted to Neutron the specs review dashboard prepared by Joe
Gordon for Nova. Check it out below.

Reminder: the deadline to approve kilo specs is this coming Monday, Dec 15.

https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fneutron-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amastertitle=Neutron+SpecsYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfNeeds+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Aneutron-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+Positive+Neutron-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Aneutron-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Aneutron-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+No+Positive+Neutron-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Aneutron-core+label%3ACode-Review%3E%3D1%29+limit%3A100Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+
age%3A7dSome+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100Dead+Specs=label%3ACode-Review%3C%3D-2

On 12/09/2014 06:08 AM, Kyle Mestery wrote:
 The neutron-drivers team has started the process of both accepting and
 rejecting specs for Kilo now. If you've submitted a spec, you will soon
 see the spec either approved or land in either the abandoned or -2
 category. We're doing our best to put helpful messages when we do
 abandon or -2 specs, but for more detail, see the neutron-drivers wiki
 page [1]. Also, you can find me on IRC with questions as well.
 
 Thanks!
 Kyle
 
 [1] https://wiki.openstack.org/wiki/Neutron-drivers
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
OpenStack Evangelist - Community
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-12 Thread W Chan
Renat, Dmitri,

On supplying the global context into the workflow execution...

In addition to Renat's proposal, I have a few here.

1) Pass them implicitly in start_workflow as another kwargs in the
**params.  But on thinking, we should probably make global context
explicitly defined in the WF spec.  Passing them implicitly may be hard to
follow during troubleshooting where the value comes from by looking at the
WF spec.  Plus there will be cases where WF authors want it explicitly
defined. Still debating here...

inputs = {...}
globals = {...}
start_workflow('my_workflow', inputs, globals=globals)

2) Instead of adding to the WF spec, what if we change the scope in
existing input params?  For example, inputs defined in the top workflow by
default is visible to all subflows (pass down to workflow task on
run_workflow) and tasks (passed to action on execution).

3) Add to the WF spec

workflow:
  type: direct
  global:
    - global1
    - global2
  input:
    - input1
    - input2

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] localrc for mutli-node setup

2014-12-12 Thread John Griffith
On Fri, Dec 12, 2014 at 1:03 PM, Danny Choi (dannchoi)
dannc...@cisco.com wrote:
 Hi,

 I would like to use devstack to deploy OpenStack on a multi-node setup,
 i.e. separate Controller, Network and Compute nodes

 What is the localrc for each node?

 I would assume, for example, we don’t need to enable neutron service at the
 Controller node, etc…

 Does anyone has the localrc file for each node type that can share?

 Thanks,
 Danny

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Rather than send my sample, here's a link to a wiki from the Neutron
team that has what you want (mine's different, but uses nova-net).

https://wiki.openstack.org/wiki/NeutronDevstack
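
(For what it's worth, the classic nova-network MULTI_HOST split looks roughly like the sketch below; the IPs and passwords are placeholders, and the Neutron variant plus the full option list are what the wiki above covers.)

    # Controller node localrc (runs the API services, scheduler, glance, mysql, rabbit)
    HOST_IP=192.168.1.10
    MULTI_HOST=1
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    SERVICE_TOKEN=tokentoken

    # Compute node localrc (only the services that must run locally)
    HOST_IP=192.168.1.11
    SERVICE_HOST=192.168.1.10
    MYSQL_HOST=$SERVICE_HOST
    RABBIT_HOST=$SERVICE_HOST
    GLANCE_HOSTPORT=$SERVICE_HOST:9292
    MULTI_HOST=1
    ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    SERVICE_TOKEN=tokentoken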

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-12 Thread Rochelle Grober
(apologies for my pushiness, but getting all the projects using olso.log for 
logging will really help the ops experience)

All projects (except maybe Swift) should be moving to use oslo.logging.  
Therefore, your longterm fix is the right way to go.  When you find log issues 
like the one in ./neutron-metadata-agent.log, file it as a bug so that the code 
can be refactored as either a bug fix or a “while you’re in there” step.

--rocky

From: Dmitry Pyzhov [mailto:dpyz...@mirantis.com]
Sent: Friday, December 12, 2014 10:35 AM
To: OpenStack Dev
Subject: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

We have a high priority bug in 6.0: 
https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

Our openstack services use to send logs in strange format with extra copy of 
timestamp and loglevel:
== ./neutron-metadata-agent.log ==
2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO 
neutron.common.config [-] Logging enabled!

And we have a workaround for this. We hide extra timestamp and use second 
loglevel.

In Juno some of services have updated oslo.logging and now send logs in simple 
format:
== ./nova-api.log ==
2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from 
/etc/nova/api-paste.ini

In order to keep backward compatibility and deal with both formats we have a 
dirty workaround for our workaround: https://review.openstack.org/#/c/141450/

As I see, our best choice here is to throw away all workarounds and show logs 
on UI as is. If service sends duplicated data - we should show duplicated data.

Long term fix here is to update oslo.logging in all packages. We can do it in 
6.1.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-12 Thread Jay Pipes

On 12/12/2014 01:35 PM, Dmitry Pyzhov wrote:

We have a high priority bug in 6.0:
https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

Our openstack services use to send logs in strange format with extra
copy of timestamp and loglevel:
== ./neutron-metadata-agent.log ==
2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349
INFO neutron.common.config [-] Logging enabled!

And we have a workaround for this. We hide extra timestamp and use
second loglevel.

In Juno some of services have updated oslo.logging and now send logs in
simple format:
== ./nova-api.log ==
2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
/etc/nova/api-paste.ini

In order to keep backward compatibility and deal with both formats we
have a dirty workaround for our workaround:
https://review.openstack.org/#/c/141450/

As I see, our best choice here is to throw away all workarounds and show
logs on UI as is. If service sends duplicated data - we should show
duplicated data.


++

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Sean Dague
On 12/12/2014 01:05 PM, Maru Newby wrote:
 
 On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:

 On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:

 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:

 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 I'm generally in favor of making name attributes opaque, utf-8
 strings that
 are entirely user-defined and have no constraints on them. I
 consider the
 name to be just a tag that the user places on some resource. It
 is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some*
 resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether
 names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if
 Neutron
 needed to ensure that there was one and only one default security
 group for
 a tenant, that a way to accomplish such a thing in a race-free
 way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the
 pastebin on
 the review above.


 I agree with Jay.  We should not care about how a user names the
 resource.
 There other ways to prevent this race and Jay’s suggestion is a
 good one.

 However we should open a bug against Horizon because the user
 experience there
 is terrible with duplicate security group names.

 The reason security group names are unique is that the ec2 api
 supports source
 rule specifications by tenant_id (user_id in amazon) and name, so
 not enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.

 So we should couple our API evolution to EC2 API then?

 -jay

 No I was just pointing out the historical reason for uniqueness, and
 hopefully
 encouraging someone to find the best behavior for the ec2 api if we
 are going
 to keep the incompatibility there. Also I personally feel the ux is
 better
 with unique names, but it is only a slight preference.

 Sorry for snapping, you made a fair point.

 Yeh, honestly, I agree with Vish. I do feel that the UX of that
 constraint is useful. Otherwise you get into having to show people UUIDs
 in a lot more places. While those are good for consistency, they are
 kind of terrible to show to people.
 
 While there is a good case for the UX of unique names - it also makes 
 orchestration via tools like puppet a heck of a lot simpler - the fact is 
 that most OpenStack resources do not require unique names.  That being the 
 case, why would we want security groups to deviate from this convention?

Maybe the other ones are the broken ones?

Honestly, any sanely usable system makes names unique inside a
container. Like files in a directory. In this case the tenant is the
container, which makes sense.

It is one of many places that OpenStack is not consistent. But I'd
rather make things consistent and more usable than consistent and less.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Mathieu Gagné

On 2014-12-12 4:40 PM, Sean Dague wrote:


While there is a good case for the UX of unique names - it also makes 
orchestration via tools like puppet a heck of a lot simpler - the fact is that 
most OpenStack resources do not require unique names.  That being the case, why 
would we want security groups to deviate from this convention?


Maybe the other ones are the broken ones?

Honestly, any sanely usable system makes names unique inside a
container. Like files in a directory. In this case the tenant is the
container, which makes sense.


+1

It makes as much sense as a filesystem accepting 2 files in the same 
folder with the same name but allows you to distinguish them by their 
inode number.


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Morgan Fainberg
On Friday, December 12, 2014, Sean Dague s...@dague.net wrote:

 On 12/12/2014 01:05 PM, Maru Newby wrote:
 
  On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net javascript:;
 wrote:
 
  On 12/11/2014 04:16 PM, Jay Pipes wrote:
  On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
  On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com
 javascript:; wrote:
  On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
  On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com
 javascript:; wrote:
 
  On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
 javascript:;
  mailto:jaypi...@gmail.com javascript:; wrote:
 
  I'm generally in favor of making name attributes opaque, utf-8
  strings that
  are entirely user-defined and have no constraints on them. I
  consider the
  name to be just a tag that the user places on some resource. It
  is the
  resource's ID that is unique.
 
  I do realize that Nova takes a different approach to *some*
  resources,
  including the security group name.
 
  End of the day, it's probably just a personal preference whether
  names
  should be unique to a tenant/user or not.
 
  Maru had asked me my opinion on whether names should be unique
 and I
  answered my personal opinion that no, they should not be, and if
  Neutron
  needed to ensure that there was one and only one default security
  group for
  a tenant, that a way to accomplish such a thing in a race-free
  way, without
  use of SELECT FOR UPDATE, was to use the approach I put into the
  pastebin on
  the review above.
 
 
  I agree with Jay.  We should not care about how a user names the
  resource.
  There other ways to prevent this race and Jay’s suggestion is a
  good one.
 
  However we should open a bug against Horizon because the user
  experience there
  is terrible with duplicate security group names.
 
  The reason security group names are unique is that the ec2 api
  supports source
  rule specifications by tenant_id (user_id in amazon) and name, so
  not enforcing
  uniqueness means that invocation in the ec2 api will either fail or
 be
  non-deterministic in some way.
 
  So we should couple our API evolution to EC2 API then?
 
  -jay
 
  No I was just pointing out the historical reason for uniqueness, and
  hopefully
  encouraging someone to find the best behavior for the ec2 api if we
  are going
  to keep the incompatibility there. Also I personally feel the ux is
  better
  with unique names, but it is only a slight preference.
 
  Sorry for snapping, you made a fair point.
 
  Yeh, honestly, I agree with Vish. I do feel that the UX of that
  constraint is useful. Otherwise you get into having to show people UUIDs
  in a lot more places. While those are good for consistency, they are
  kind of terrible to show to people.
 
  While there is a good case for the UX of unique names - it also makes
 orchestration via tools like puppet a heck of a lot simpler - the fact is
 that most OpenStack resources do not require unique names.  That being the
 case, why would we want security groups to deviate from this convention?

 Maybe the other ones are the broken ones?

 Honestly, any sanely usable system makes names unique inside a
 container. Like files in a directory. In this case the tenant is the
 container, which makes sense.

 It is one of many places that OpenStack is not consistent. But I'd
 rather make things consistent and more usable than consistent and less.


+1.

More consistent and more usable is a good approach. The name uniqueness has
prior art in OpenStack - keystone keeps project names unique within a
domain (domain is the container), similar usernames can't be duplicated in
the same domain. It would be silly to auth with the user ID, likewise
unique names for the security group in the container (tenant) makes a lot
of sense from a UX Perspective.

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Jay Pipes

On 12/12/2014 05:00 PM, Morgan Fainberg wrote:

On Friday, December 12, 2014, Sean Dague s...@dague.net
mailto:s...@dague.net wrote:

On 12/12/2014 01:05 PM, Maru Newby wrote:
 
  On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net
javascript:; wrote:
 
  On 12/11/2014 04:16 PM, Jay Pipes wrote:
  On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
  On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com
javascript:; wrote:
  On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
  On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com
javascript:; wrote:
 
  On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes
jaypi...@gmail.com javascript:;
  mailto:jaypi...@gmail.com javascript:; wrote:
 
  I'm generally in favor of making name attributes opaque,
utf-8
  strings that
  are entirely user-defined and have no constraints on them. I
  consider the
  name to be just a tag that the user places on some
resource. It
  is the
  resource's ID that is unique.
 
  I do realize that Nova takes a different approach to *some*
  resources,
  including the security group name.
 
  End of the day, it's probably just a personal preference
whether
  names
  should be unique to a tenant/user or not.
 
  Maru had asked me my opinion on whether names should be
unique and I
  answered my personal opinion that no, they should not be,
and if
  Neutron
  needed to ensure that there was one and only one default
security
  group for
  a tenant, that a way to accomplish such a thing in a
race-free
  way, without
  use of SELECT FOR UPDATE, was to use the approach I put
into the
  pastebin on
  the review above.
 
 
  I agree with Jay.  We should not care about how a user
names the
  resource.
  There other ways to prevent this race and Jay’s suggestion
is a
  good one.
 
  However we should open a bug against Horizon because the user
  experience there
  is terrible with duplicate security group names.
 
  The reason security group names are unique is that the ec2 api
  supports source
  rule specifications by tenant_id (user_id in amazon) and
name, so
  not enforcing
  uniqueness means that invocation in the ec2 api will either
fail or be
  non-deterministic in some way.
 
  So we should couple our API evolution to EC2 API then?
 
  -jay
 
  No I was just pointing out the historical reason for
uniqueness, and
  hopefully
  encouraging someone to find the best behavior for the ec2 api
if we
  are going
  to keep the incompatibility there. Also I personally feel the
ux is
  better
  with unique names, but it is only a slight preference.
 
  Sorry for snapping, you made a fair point.
 
  Yeh, honestly, I agree with Vish. I do feel that the UX of that
  constraint is useful. Otherwise you get into having to show
people UUIDs
  in a lot more places. While those are good for consistency, they are
  kind of terrible to show to people.
 
  While there is a good case for the UX of unique names - it also
makes orchestration via tools like puppet a heck of a lot simpler -
the fact is that most OpenStack resources do not require unique
names.  That being the case, why would we want security groups to
deviate from this convention?

Maybe the other ones are the broken ones?

Honestly, any sanely usable system makes names unique inside a
container. Like files in a directory. In this case the tenant is the
container, which makes sense.

It is one of many places that OpenStack is not consistent. But I'd
rather make things consistent and more usable than consistent and less.


+1.

More consistent and more usable is a good approach. The name uniqueness
has prior art in OpenStack - keystone keeps project names unique within
a domain (domain is the container), similar usernames can't be
duplicated in the same domain. It would be silly to auth with the user
ID, likewise unique names for the security group in the container
(tenant) makes a lot of sense from a UX Perspective.


Sounds like Maru and I are pretty heavily in the minority on this one, 
and you all make good points about UX and consistency with other pieces 
of the OpenStack APIs.


Maru, I'm backing off here and will support putting a unique constraint 
on the security group name and project_id field.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-12 Thread pcrews

On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote:

Hi,

This case is always tested by Tempest on the gate.

https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152

So I guess this problem wouldn't happen on the latest version at least.

Thanks
Ken'ichi Ohmichi

---

2014-12-10 6:32 GMT+09:00 Joe Gordon joe.gord...@gmail.com:



On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) dannc...@cisco.com
wrote:


Hi,

I have a VM which is in ERROR state.


+--+--+++-++

| ID   | Name
| Status | Task State | Power State | Networks   |


+--+--+++-++

| 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 |
cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -  | NOSTATE
||


I tried in both CLI “nova delete” and Horizon “terminate instance”.
Both accepted the delete command without any error.
However, the VM never got deleted.

Is there a way to remove the VM?



What version of nova are you using? This is definitely a serious bug, you
should be able to delete an instance in error state. Can you file a bug that
includes steps on how to reproduce the bug along with all relevant logs.

bugs.launchpad.net/nova




Thanks,
Danny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,

I've encountered this in my own testing and have found that it appears 
to be tied to libvirt.


When I hit this, reset-state as the admin user reports success (and 
state is set), *but* things aren't really working as advertised and 
subsequent attempts to do anything with the errant vm's will send them 
right back into 'FLAIL' / can't delete / endless DELETING mode.


restarting libvirt-bin on my machine fixes this - after restart, the 
deleting vm's are properly wiped without any further user input to 
nova/horizon and all seems right in the world.


using:
devstack
ubuntu 14.04
libvirtd (libvirt) 1.2.2

triggered via:
lots of random create/reboot/resize/delete requests of varying validity 
and sanity.


Am in the process of cleaning up my test code so as not to hurt anyone's 
brain with the ugly and will file a bug once done, but thought this 
worth sharing.


Thanks,
Patrick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-12 Thread Rochelle Grober


Morgan Fainberg [mailto:morgan.fainb...@gmail.com] on Friday, December 12, 2014 
2:01 PM wrote:
On Friday, December 12, 2014, Sean Dague 
s...@dague.netmailto:s...@dague.net wrote:
On 12/12/2014 01:05 PM, Maru Newby wrote:

 On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.netjavascript:; wrote:

 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.comjavascript:; 
 wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:

 On Dec 11, 2014, at 8:00 AM, Henry Gessau 
 ges...@cisco.comjavascript:; wrote:

 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:

 On Dec 11, 2014, at 8:43 AM, Jay Pipes 
 jaypi...@gmail.comjavascript:;
 mailto:jaypi...@gmail.comjavascript:; wrote:

 I'm generally in favor of making name attributes opaque, utf-8
 strings that
 are entirely user-defined and have no constraints on them. I
 consider the
 name to be just a tag that the user places on some resource. It
 is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some*
 resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether
 names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if
 Neutron
 needed to ensure that there was one and only one default security
 group for
 a tenant, that a way to accomplish such a thing in a race-free
 way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the
 pastebin on
 the review above.


 I agree with Jay.  We should not care about how a user names the
 resource.
 There other ways to prevent this race and Jay’s suggestion is a
 good one.

 However we should open a bug against Horizon because the user
 experience there
 is terrible with duplicate security group names.

 The reason security group names are unique is that the ec2 api
 supports source
 rule specifications by tenant_id (user_id in amazon) and name, so
 not enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.

 So we should couple our API evolution to EC2 API then?

 -jay

 No I was just pointing out the historical reason for uniqueness, and
 hopefully
 encouraging someone to find the best behavior for the ec2 api if we
 are going
 to keep the incompatibility there. Also I personally feel the ux is
 better
 with unique names, but it is only a slight preference.

 Sorry for snapping, you made a fair point.

 Yeh, honestly, I agree with Vish. I do feel that the UX of that
 constraint is useful. Otherwise you get into having to show people UUIDs
 in a lot more places. While those are good for consistency, they are
 kind of terrible to show to people.

 While there is a good case for the UX of unique names - it also makes 
 orchestration via tools like puppet a heck of a lot simpler - the fact is 
 that most OpenStack resources do not require unique names.  That being the 
 case, why would we want security groups to deviate from this convention?

Maybe the other ones are the broken ones?

Honestly, any sanely usable system makes names unique inside a
container. Like files in a directory. In this case the tenant is the
container, which makes sense.

It is one of many places that OpenStack is not consistent. But I'd
rather make things consistent and more usable than consistent and less.

+1.

More consistent and more usable is a good approach. Name uniqueness has prior 
art in OpenStack - keystone keeps project names unique within a domain (the 
domain is the container), and similarly usernames can't be duplicated in the 
same domain. It would be silly to have to auth with the user ID; likewise, 
unique names for the security group within its container (the tenant) make a 
lot of sense from a UX perspective.

[Rockyg] +1
Especially when dealing with domain data that is managed by humans, human-visible 
uniqueness is important for understanding *and* efficiency.  Tenant security is 
expected to be managed by the tenant admin, not some automated “robot admin”, and 
as such needs to be clear, memorable and separable between instances.  Unique 
names are the most straightforward (and easiest to enforce) way to do this for 
humans. Humans read and differentiate alphanumerics, so that should be the 
standard differentiator when humans are expected to interact with and reason 
about containers.

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-12 Thread Igor Kalnitsky
+1 to stop parsing logs on UI and show them as is. I think it's more
than enough for all users.

On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 We have a high priority bug in 6.0:
 https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

 Our openstack services used to send logs in a strange format with an extra copy
 of the timestamp and log level:
 == ./neutron-metadata-agent.log ==
 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
 neutron.common.config [-] Logging enabled!

 And we have a workaround for this. We hide the extra timestamp and use the
 second log level.

 In Juno some of the services have updated oslo.logging and now send logs in a
 simpler format:
 == ./nova-api.log ==
 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
 /etc/nova/api-paste.ini

 In order to keep backward compatibility and deal with both formats we have a
 dirty workaround for our workaround:
 https://review.openstack.org/#/c/141450/

 As I see it, our best choice here is to throw away all the workarounds and show
 logs in the UI as-is. If a service sends duplicated data, we should show
 duplicated data.

 The long-term fix here is to update oslo.logging in all packages. We can do
 that in 6.1.
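
For illustration, a minimal sketch of what a UI-side parser has to cope with, given the two formats above (the patterns here are assumptions based on the sample lines, not Fuel's actual parsing code):

import re

# Icehouse-era "double header" format: syslog timestamp + level, then a
# duplicated oslo timestamp, pid and level before the real message.
OLD = re.compile(
    r'^(?P<syslog_ts>\S+) (?P<syslog_level>\w+): '
    r'(?P<oslo_ts>\d{4}-\d{2}-\d{2} \S+) (?P<pid>\d+) '
    r'(?P<oslo_level>[A-Z]+) (?P<rest>.*)$')

# Juno-era simple format: syslog timestamp + level, then the message.
NEW = re.compile(r'^(?P<syslog_ts>\S+) (?P<syslog_level>\w+): (?P<rest>.*)$')

def parse(line):
    m = OLD.match(line) or NEW.match(line)
    return m.groupdict() if m else {'rest': line}

print(parse('2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 '
            '14349 INFO neutron.common.config [-] Logging enabled!'))
print(parse('2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from '
            '/etc/nova/api-paste.ini'))

Showing the lines as-is removes the need for any of this.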

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-12 Thread Tripp, Travis S
Tihomir,

Today I added one glance call based on Richard’s decorator pattern [1] and 
started to play with incorporating some of your ideas. Please note, I only had 
limited time today. That is, passing the kwargs through to the glance client. 
This was an interesting first choice, because it immediately highlighted a 
concrete example of the horizon glance wrapper post-processing still being 
useful (rather than being a direct pass-through with no wrapper). See below. If 
you have some concrete code examples of your ideas, that would be helpful.

[1] 
https://review.openstack.org/#/c/141273/2/openstack_dashboard/api/rest/glance.py

With the patch, basically, you can call the following and all of the GET 
parameters get passed directly through to the horizon glance client and you get 
results back as expected.

http://localhost:8002/api/glance/images/?sort_dir=desc&sort_key=created_at&paginate=True&marker=bb2cfb1c-2234-4f54-aec5-b4916fe2d747

If you pass in an incorrect sort_key, the glance client returns the following 
error message, which propagates back to the REST caller:

sort_key must be one of the following: name, status, container_format, 
disk_format, size, id, created_at, updated_at.

This is done by passing **request.GET.dict() through.

Please note that if you try this (with Postman, for example), you need to set 
the header X-Requested-With: XMLHttpRequest

So, what issues did it immediately call out with directly invoking the client?

The python-glanceclient internally handles pagination by returning a generator. 
 Each iteration on the generator will handle making a request for the next page 
of data. If you were to just do something like return list(image_generator) to 
serialize it back out to the caller, it would actually end up making a call 
back to the server X times to fetch all pages before serializing back (thereby 
not really paginating). The horizon glance client wrapper today handles this by 
using islice intelligently along with honoring the API_RESULT_LIMIT setting in 
Horizon. So, this gives a direct example of where the wrapper does something 
that a direct passthrough to the client would not allow.
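
A rough sketch of the wrapper behaviour described above (illustrative only - the function name and setting default are assumptions, not the actual openstack_dashboard.api.glance code):

import itertools

API_RESULT_LIMIT = 1000  # stands in for the Horizon setting of the same name

def image_list(glanceclient, **kwargs):
    limit = kwargs.pop('limit', API_RESULT_LIMIT)
    # glanceclient.images.list() returns a generator that lazily fetches
    # further pages; islice stops it after limit + 1 items instead of
    # letting list() drain every page from the server.
    images_iter = glanceclient.images.list(**kwargs)
    images = list(itertools.islice(images_iter, limit + 1))
    has_more_data = len(images) > limit
    return images[:limit], has_more_data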

That said, I can see a few ways that we could use the same REST decorator code 
and provide direct access to the API.  We’d simply provide a class where the 
url_regex maps to the desired path and gives direct passthrough. Maybe that 
kind of passthrough could always be provided for ease of customization / 
extensibility and additional methods with wrappers provided when necessary.  I 
need to leave for today, so can’t actually try that out at the moment.

Thanks,
Travis

From: Thai Q Tran tqt...@us.ibm.commailto:tqt...@us.ibm.com
Reply-To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, December 12, 2014 at 11:05 AM
To: OpenStack List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon] REST and Django


In your previous example, you are posting to a certain URL (i.e. 
/keystone/{ver:=x.0}/{method:=update}).
client: POST /keystone/{ver:=x.0}/{method:=update} => middleware: just 
forward to clients[ver].getattr(method)(**kwargs) => keystone: update

Correct me if I'm wrong, but it looks like you have a unique URL for each 
/service/version/method.
I fail to see how that is different from what we have today. Is there a view 
for each service? Each version?

Let's say for argument's sake that you have a single view that takes care of all 
URL routing. All requests pass through this view and contain a JSON body that 
specifies which API to invoke and what parameters to pass. And let's also say 
that you wrote some code that uses reflection to map the JSON to an action. What 
you end up with is a client-centric application, where all of the logic resides 
client-side. If there are things we want to accomplish server-side, it will be 
extremely hard to pull off. Things like caching, websockets, aggregation, batch 
actions, translation, etc. What you end up with is a client with no help from 
the server.
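
For clarity, a hypothetical sketch of the RPC-style single endpoint being described here (not actual Horizon code; the view, payload keys and client registry are assumptions):

import json

from django.http import JsonResponse
from django.views.generic import View

class RPCView(View):
    # e.g. {('keystone', '3.0'): keystone_client_instance, ...}
    clients = {}

    def post(self, request):
        body = json.loads(request.body)
        client = self.clients[(body['service'], body['version'])]
        method = getattr(client, body['method'])        # reflection
        # Assumes the result is JSON-serializable; everything else is
        # pushed to the client side.
        return JsonResponse({'result': method(**body.get('kwargs', {}))})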

Obviously the other extreme is what we have today, where we do everything 
server-side and only use the client side for binding events. I personally prefer 
a more balanced approach where we can leverage both the server and the client. 
There are things that the client can do well, and there are things that the 
server can do well. Going the RPC way restricts us to client technologies only 
and may hamper any additional future functionality we want to bring server-side. 
In other words, using REST over RPC gives us the opportunity to use server-side 
technologies to help solve problems should the need arise.

I would also argue that the REST approach is NOT what we have today. What we 
have today is a static webpage that is generated server-side, where the API is 
hidden from the client. What we end up with using the REST approach is a 
dynamic 

Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-12 Thread Lin Hua Cheng
Breaking something for existing users is progress, but not forward progress. :)

I don't mind moving the code around to the _scripts file, but simply dropping
the _conf file is my concern, since it might already be extended from.
Perhaps document first that it will be deprecated, and remove it in a
later release.

On Fri, Dec 12, 2014 at 10:43 AM, Thai Q Tran tqt...@us.ibm.com wrote:

 As is the case with anything we change, but that should not stop us from
 making improvements/progress. I would argue that it would make life easier
 for them since all scripts are now in one place.

 -Lin Hua Cheng os.lch...@gmail.com wrote: -
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 From: Lin Hua Cheng os.lch...@gmail.com
 Date: 12/12/2014 10:28AM

 Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to
 dashboard

 Consolidating them would break it for users that have customization and
 extension on the two templates.

 -Lin

 On Fri, Dec 12, 2014 at 9:20 AM, David Lyle dkly...@gmail.com wrote:

 Not entirely sure why they both exist either.

 So by move, you meant override (nuance). That's different and I have no
 issue with that.

 I'm also fine with attempting to consolidate _conf and _scripts.

 David

 On Thu, Dec 11, 2014 at 1:22 PM, Thai Q Tran tqt...@us.ibm.com wrote:


 It would not create a circular dependency; dashboard would depend on
 horizon - not the other way around.
 Scripts that are library specific will live in horizon while scripts
 that are panel specific will live in dashboard.
 Let me draw a more concrete example.

 In Horizon
 We know that _script and _conf are included in the base.html
 We create a _script and _conf placeholder file for project overrides
 (similar to _stylesheets and _header)
 In Dashboard
 We create a _script and _conf file with today's content
 It overrides the _script and _conf file in horizon
 Now we can include panel specific scripts without causing circular
 dependency.

 In fact, I would like to go further and suggest that _script and _conf
 be combined into a single file.
 Not sure why we need two places to include scripts.


 -David Lyle dkly...@gmail.com wrote: -
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 From: David Lyle dkly...@gmail.com
 Date: 12/11/2014 09:23AM
 Subject: Re: [openstack-dev] [Horizon] Moving _conf and _scripts to
 dashboard


 I'm probably not understanding the nuance of the question but moving the
 _scripts.html file to openstack_dashboard creates some circular
 dependencies, does it not? templates/base.html in the horizon side of the
 repo includes _scripts.html and ensures that the javascript needed by the
 existing horizon framework is present.

 _conf.html seems like a better candidate for moving as it's more closely
 tied to the application code.

 David


 On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran tqt...@us.ibm.com wrote:

 Sorry for duplicate mail, forgot the subject.

 -Thai Q Tran/Silicon Valley/IBM wrote: -
 To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 From: Thai Q Tran/Silicon Valley/IBM
 Date: 12/10/2014 03:37PM
 Subject: Moving _conf and _scripts to dashboard

 The way we are structuring our javascripts today is complicated. All of
 our static javascripts reside in /horizon/static and are imported through
 _conf.html and _scripts.html. Notice that there are already some panel
 specific javascripts like: horizon.images.js, horizon.instances.js,
 horizon.users.js. They do not belong in horizon. They belong in
 openstack_dashboard because they are specific to a panel.

 Why am I raising this issue now? In Angular, we need controllers
 written in javascript for each panel. As we angularize more and more
 panels, we need to store them in a way that makes sense. To me, it makes
 sense for us to move _conf and _scripts to openstack_dashboard. Or if this
 is not possible, then provide a mechanism to override them in
 openstack_dashboard.

 Thoughts?
 Thai



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-12 Thread Thai Q Tran
Haha, sure I'm willing to make concessions. I'll update the patch to retain the
_conf file, but it will be blank. This way, we can move forward and not break it
for existing users. I'll also document it somewhere in the patch.

[snip - quoted replies from earlier in this thread]


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-12 Thread Zane Bitter

On 12/12/14 05:29, Murugan, Visnusaran wrote:




-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com]
Sent: Friday, December 12, 2014 6:37 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
showdown

On 11/12/14 08:26, Murugan, Visnusaran wrote:

[Murugan, Visnusaran]
In case of rollback where we have to cleanup earlier version of
resources,

we could get the order from old template. We'd prefer not to have a
graph table.

In theory you could get it by keeping old templates around. But that
means keeping a lot of templates, and it will be hard to keep track
of when you want to delete them. It also means that when starting an
update you'll need to load every existing previous version of the
template in order to calculate the dependencies. It also leaves the
dependencies in an ambiguous state when a resource fails, and
although that can be worked around it will be a giant pain to implement.



Agree that looking at all templates for a delete is not good. But,
barring complexity, we feel we could achieve it by way of having an
update and a delete stream for a stack update operation. I will
elaborate in detail in the etherpad sometime tomorrow :)


I agree that I'd prefer not to have a graph table. After trying a
couple of different things I decided to store the dependencies in the
Resource table, where we can read or write them virtually for free
because it turns out that we are always reading or updating the
Resource itself at exactly the same time anyway.



Not sure how this will work in an update scenario when a resource does
not change and its dependencies do.


We'll always update the requirements, even when the properties don't
change.



Can you elaborate a bit on rollback?


I didn't do anything special to handle rollback. It's possible that we 
need to - obviously the difference in the UpdateReplace + rollback case 
is that the replaced resource is now the one we want to keep, and yet 
the replaced_by/replaces dependency will force the newer (replacement) 
resource to be checked for deletion first, which is an inversion of the 
usual order.


However, I tried to think of a scenario where that would cause problems 
and I couldn't come up with one. Provided we know the actual, real-world 
dependencies of each resource I don't think the ordering of those two 
checks matters.


In fact, I currently can't think of a case where the dependency order 
between replacement and replaced resources matters at all. It matters in 
the current Heat implementation because resources are artificially 
segmented into the current and backup stacks, but with a holistic view 
of dependencies that may well not be required. I tried taking that line 
out of the simulator code and all the tests still passed. If anybody can 
think of a scenario in which it would make a difference, I would be very 
interested to hear it.


In any event though, it should be no problem to reverse the direction of 
that one edge in these particular circumstances if it does turn out to 
be a problem.



We had an approach with depends_on
and needed_by columns in ResourceTable, but dropped it when we figured out
we had too many DB operations for update.


Yeah, I initially ran into this problem too - you have a bunch of nodes 
that are waiting on the current node, and now you have to go look them 
all up in the database to see what else they're waiting on in order to 
tell if they're ready to be triggered.


It turns out the answer is to distribute the writes but centralise the 
reads. So at the start of the update, we read all of the Resources, 
obtain their dependencies and build one central graph[1]. We then make 
that graph available to each resource (either by passing it as a 
notification parameter, or storing it somewhere central in the DB that 
they will all have to read anyway, i.e. the Stack). But when we update a 
dependency we don't update the central graph, we update the individual 
Resource so there's no global lock required.


[1] 
https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168
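
A condensed sketch of the "distribute the writes, centralise the reads" idea (simplified, and not lifted verbatim from the linked prototype; the attribute names used here are assumptions):

def build_graph(resources):
    """Read every Resource once and build one central dependency graph."""
    graph = {}
    for res in resources:
        # Each resource row already stores its own requirements, written
        # individually (no global lock) whenever that resource is updated.
        graph[res.key] = set(res.requires)
        if res.replaces is not None:
            # fold the replaces/replaced_by relationship in as an
            # ordinary dependency edge
            graph[res.key].add(res.replaces)
    return graph

def ready_to_check(graph, done):
    """Return nodes whose requirements (a set) are all in `done`."""
    return [node for node, reqs in graph.items()
            if node not in done and reqs <= done]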



Also taking care of deleting resources in order will be an issue.


It works fine.


This implies that there will be different versions of a resource, which
will complicate things even further.


No it doesn't, other than the different versions we already have due to
UpdateReplace.


This approach reduces DB queries by waiting for a completion notification
on a topic. The drawback I see is that the delete stack stream will be
huge, as it will have the entire graph. We can always dump such data
in ResourceLock.data as JSON and pass a simple flag
load_stream_from_db to the converge RPC call as a workaround for the delete
operation.


This seems to be essentially equivalent to my 'SyncPoint'
proposal[1], with the key difference that the data is stored in-memory in a
Heat engine rather than the database.


I suspect it's probably a mistake to move it 

[openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2014-12-12 Thread melanie witt
Hi everybody,

At some point, our db archiving functionality got broken because there was a 
change to stop ever deleting instance system metadata [1]. For those 
unfamiliar, 'nova-manage db archive_deleted_rows' is the command that moves 
all soft-deleted (deleted=nonzero) rows to the shadow tables. This is a 
periodic cleanup that operators can do to maintain performance (as things can 
get sluggish when deleted=nonzero rows accumulate).
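
For those unfamiliar with the mechanism, a rough sketch of what archiving one table amounts to (illustrative only - not the actual nova.db.sqlalchemy code; the table objects are assumed to be SQLAlchemy Table metadata for a live table and its shadow twin):

from sqlalchemy import select

def archive_table(conn, table, shadow_table, max_rows):
    # pick a batch of soft-deleted rows (deleted != 0) ...
    query = select([table]).where(table.c.deleted != 0).limit(max_rows)
    rows = conn.execute(query).fetchall()
    if rows:
        # ... copy them into the shadow table ...
        conn.execute(shadow_table.insert(), [dict(r) for r in rows])
        # ... and remove them from the live table.
        ids = [r['id'] for r in rows]
        conn.execute(table.delete().where(table.c.id.in_(ids)))
    return len(rows)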

The change was made because instance_type data still needed to be read even 
after instances had been deleted, because we allow admin to view deleted 
instances. I saw a bug [2] and two patches [3][4] which aimed to fix this by 
changing back to soft-deleting instance sysmeta when instances are deleted, and 
instead allow read_deleted=yes for the things that need to read instance_type 
for deleted instances present in the db.

My question is, is this approach okay? If so, I'd like to see these patches 
revived so we can have our db archiving working again. :) I think there's likely 
something I'm missing about the approach, so I'm hoping people who know more 
about instance sysmeta than I do can chime in on how/if we can fix this for db 
archiving. Thanks.

[1] https://bugs.launchpad.net/nova/+bug/1185190 
[2] https://bugs.launchpad.net/nova/+bug/1226049
[3] https://review.openstack.org/#/c/110875/
[4] https://review.openstack.org/#/c/109201/

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-12 Thread Dmitri Zimine
Winson, Lakshmi, Renat: 

It looked good and I began to write down the summary: 
https://etherpad.openstack.org/p/mistral-global-context

But then I realized that it’s not safe to assume, from the action, that the global 
context will be supplied as part of the API call. 
Check it out in the etherpad.

What problems are we trying to solve? 
1) Reduce passing the same parameters over and over from parent to child.
2) “Automatically” make a parameter accessible to most actions without typing 
it all over (like the auth token).

Can #1 be solved by passing input to subworkflows automatically?
Can #2 be solved some other way? Default passing of arbitrary parameters to 
actions seems like breaking the abstraction.

Thoughts? need to brainstorm further….

DZ 


On Dec 12, 2014, at 12:54 PM, W Chan m4d.co...@gmail.com wrote:

 Renat, Dmitri,
 
 On supplying the global context into the workflow execution...
 
 In addition to Renat's proposal, I have a few here.
 
 1) Pass them implicitly to start_workflow as another kwarg in the **params.  
 But on second thought, we should probably make the global context explicitly 
 defined in the WF spec.  Passing them implicitly may make it hard to follow, 
 during troubleshooting, where a value comes from by looking at the WF spec.  Plus 
 there will be cases where WF authors want it explicitly defined. Still 
 debating here...
 
 inputs = {...}
 globals = {...}
 start_workflow('my_workflow', inputs, globals=globals)
 
 2) Instead of adding to the WF spec, what if we change the scope of existing 
 input params?  For example, inputs defined in the top workflow are by default 
 visible to all subflows (passed down to the workflow task on run_workflow) and 
 tasks (passed to the action on execution).
 
 3) Add to the WF spec
 
 workflow:
   type: direct
   global:
     - global1
     - global2
   input:
     - input1
     - input2
 
 Winson
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread joehuang
Hello,  Andrew, 

 I do consider this to be out of scope for cells, for at least the medium
 term as you've said.  There is additional complexity in making that a
 supported configuration that is not being addressed in the cells
 effort.  I am just making the statement that this is something cells
 could address if desired, and therefore doesn't need an additional solution

1. Does your solution include Cinder, Neutron, Glance and Ceilometer, or is only 
Nova involved? Could you describe more clearly how your solution works?
2. A tenant's resources need to be distributed across different data centers; how 
are these resources connected through L2/L3 networking and isolated from other 
tenants, including providing advanced services like LB/FW/VPN and service 
chaining?
3. How is the image distributed to geo-distributed data centers when the user 
uploads an image, or do you mean all VMs will boot from remote image data?
4. How would the metering and monitoring functions work in geo-distributed 
data centers? Or say, if we use Ceilometer, how do we handle the sampling 
data/alarms?
5. How do we support multiple vendors' OpenStack distributions in one multi-site 
cloud? If we only support one vendor's OpenStack distribution and use RPC for 
inter-DC communication, how do we do the cross data center integration, 
troubleshooting and upgrades with RPC if the 
driver/agent/backend (storage/network/server) come from different vendors?

I have lots of doubts about how cells would address these challenges; these 
questions are only part of them. It would be best if cells can address all the 
challenges, then of course there is no need for an additional solution.

Best regards

Chaoyi Huang ( joehuang )


From: Andrew Laski [andrew.la...@rackspace.com]
Sent: 13 December 2014 0:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/12/2014 09:50 AM, Russell Bryant wrote:
 On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.
 And the way the rest of the components fit into that scenario is far
 from clear to me.  Do you consider this more of a if you can make it
 work, good for you, or something we should aim to be more generally
 supported over time?  Personally, I see the globally distributed
 OpenStack under a single API case much more complex, and worth
 considering out of scope for the short to medium term, at least.

I do consider this to be out of scope for cells, for at least the medium
term as you've said.  There is additional complexity in making that a
supported configuration that is not being addressed in the cells
effort.  I am just making the statement that this is something cells
could address if desired, and therefore doesn't need an additional solution.

 For me, this discussion boils down to ...

 1) Do we consider these use cases in scope at all?

 2) If we consider it in scope, is it enough of a priority to warrant a
 cross-OpenStack push in the near term to work on it?

 3) If yes to #2, how would we do it?  Cascading, or something built
 around cells?

 I haven't worried about #3 much, because I consider #2 or maybe even #1
 to be a show stopper here.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread joehuang
Hello, Russell,

 Personally, I see the globally distributed
 OpenStack under a single API case much more complex, and worth
 considering out of scope for the short to medium term, at least.

Thanks for your thoughts. Do you mean it could be put on the roadmap, but not in 
scope in the short or medium term (for example, the Kilo and L releases)? Or do 
we need more discussion to include it in the roadmap? If so, I would like to know 
how to do that.

Best Regards

Chaoyi Huang ( joehuang )

From: Russell Bryant [rbry...@redhat.com]
Sent: 12 December 2014 22:50
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a if you can make it
work, good for you, or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension

2014-12-12 Thread Ryu Ishimoto
Hi All,

It's great to see the vendor plugin decomposition spec[1] finally getting
merged!  Now that the spec is completed, I have a question on how this may
impact neutronclient, and in particular, its handling of vendor extensions.

One of the great things about splitting out the plugins is that it will
allow vendors to implement vendor extensions more rapidly.  Looking at the
neutronclient code, however, it seems that these vendor extension commands
are embedded inside the project and don't seem easily extensible.  It
feels natural that, now that the neutron vendor code has been split out,
neutronclient should do the same.

Of course, you could always fork neutronclient yourself, but I'm wondering
if there is any plan to improve this.  Admittedly, I don't have a great
solution myself, but I'm thinking of something along the lines of allowing
neutronclient to load commands from an external directory.  I am not
familiar enough with neutronclient to know if there are technical
limitations to what I'm suggesting, but I would love to hear the thoughts of
others on this.
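
Purely as an illustration of the kind of mechanism being suggested - this is not how neutronclient works today - entry-point based command discovery (the pattern stevedore formalises) might look roughly like this, with the entry-point group name and vendor module being hypothetical:

import pkg_resources

def load_vendor_commands(group='neutronclient.extension'):
    # A vendor package would declare its commands under the agreed
    # entry-point group in its setup.cfg, e.g.:
    #   [entry_points]
    #   neutronclient.extension =
    #       midonet-thing-list = midonet_client.cli:ThingList
    commands = {}
    for ep in pkg_resources.iter_entry_points(group):
        commands[ep.name] = ep.load()   # the command class itself
    return commands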

Thanks in advance!

Best,
Ryu

[1] https://review.openstack.org/#/c/134680/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vendor Plugin Decomposition and NeutronClient vendor extension

2014-12-12 Thread Armando M.
On 12 December 2014 at 22:18, Ryu Ishimoto r...@midokura.com wrote:


 Hi All,

 It's great to see the vendor plugin decomposition spec[1] finally getting
 merged!  Now that the spec is completed, I have a question on how this may
 impact neutronclient, and in particular, its handling of vendor extensions.


Thanks for the excitement :)



 One of the great things about splitting out the plugins is that it will
 allow vendors to implement vendor extensions more rapidly.  Looking at the
 neutronclient code, however, it seems that these vendor extension commands
 are embedded inside the project, and doesn't seem easily extensible.  It
 feels natural that, now that neutron vendor code is split out,
 neutronclient should also do the same.

 Of course, you could always fork neutronclient yourself, but I'm wondering
 if there is any plan on improving this.  Admittedly, I don't have a great
 solution myself but I'm thinking something along the line of allowing
 neutronclient to load commands from an external directory.  I am not
 familiar enough with neutronclient to know if there are technical
 limitation to what I'm suggesting, but I would love to hear thoughts of
 others on this.


There is quite a bit of road ahead of us. We haven't yet thought about or
considered how to handle extensions client-side. Server-side, the extension
mechanism is already quite flexible, but we gotta learn to walk before we
can run!

Having said that, your points are well taken, but most likely we won't be
making much progress on these until we have provided and guaranteed a
smooth transition for all plugins and drivers as suggested by the spec
referenced below. Stay tuned!

Cheers,
Armando



 Thanks in advance!

 Best,
 Ryu

 [1] https://review.openstack.org/#/c/134680/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-12 Thread Yuriy Shovkoplias
Dear neutron community,

Can you please clarify a couple of points on the vendor code decomposition?
 - Assuming I would like to create a new driver now (Kilo development
cycle) - is it already allowed (or mandatory) to follow the new process?

https://review.openstack.org/#/c/134680/

- Assuming the new process is already in place, are the following
guidelines still applicable for the vendor integration code (not for vendor
library)?

https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
The following is a list of requirements for inclusion of code upstream:

   - Participation in Neutron meetings, IRC channels, and email lists.
   - A member of the plugin/driver team participating in code reviews of
   other upstream code.

Regards,
Yuri

On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton gkot...@vmware.com wrote:


 On 12/11/14, 12:50 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 +100. I vote -1 there and would like to point out that we *must* keep
 history during the split, and split from u/s code base, not random
 repositories. If you don't know how to achieve this, ask oslo people,
 they did it plenty of times when graduating libraries from oslo-incubator.
 /Ihar
 
 On 10/12/14 19:18, Cedric OLLIVIER wrote:
  https://review.openstack.org/#/c/140191/
 
  2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com
  mailto:arma...@gmail.com:
 
 
  By the way, if Kyle can do it in his teeny tiny time that he has
  left after his PTL duties, then anyone can do it! :)
 
  https://review.openstack.org/#/c/140191/

 This patch loses the recent hacking changes that we have made. This is a
 slight example to try and highlight the problem that we may incur as a
 community.

 
  Fully cloning Dave Tucker's repository [1] and the outdated fork of
  the ODL ML2 MechanismDriver included raises some questions (e.g.
  [2]). I wish the next patch set removes some files. At least it
  should take the mainstream work into account (e.g. [3]) .
 
  [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
  https://review.openstack.org/#/c/113330/ [3]
  https://review.openstack.org/#/c/96459/
 
 
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 
 iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI
 ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY
 E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349
 PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl
 l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx
 lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM=
 =dfe/
 -END PGP SIGNATURE-
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-12 Thread Morgan Fainberg


 On Dec 12, 2014, at 10:30, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/11/2014 12:55 PM, Andrew Laski wrote:
  Cells can handle a single API on top of globally distributed DCs.  I
  have spoken with a group that is doing exactly that.  But it requires
  that the API is a trusted part of the OpenStack deployments in those
  distributed DCs.
 
 And the way the rest of the components fit into that scenario is far
 from clear to me.  Do you consider this more of a if you can make it
 work, good for you, or something we should aim to be more generally
 supported over time?  Personally, I see the globally distributed
 OpenStack under a single API case much more complex, and worth
 considering out of scope for the short to medium term, at least.
 
 For me, this discussion boils down to ...
 
 1) Do we consider these use cases in scope at all?
 
 2) If we consider it in scope, is it enough of a priority to warrant a
 cross-OpenStack push in the near term to work on it?
 
 3) If yes to #2, how would we do it?  Cascading, or something built
 around cells?
 
 I haven't worried about #3 much, because I consider #2 or maybe even #1
 to be a show stopper here.
 
 Agreed

I agree with Russell as well. I am also curious about how identity will work in 
these cases. As it stands, identity provides authoritative information only for 
the deployment it runs in. I have a lot of concern from a security 
standpoint when I start needing to address what the central API can do on the 
other providers. We have had this discussion a number of times in Keystone, 
specifically when designing the keystone-to-keystone identity federation, and 
we came to the conclusion that we needed to ensure that the keystone local to a 
given cloud is the only source of authoritative authz information. While it 
may, in some cases, accept authn from a source that is trusted, it still 
controls the local set of roles and grants. 

Second, we only guarantee that a tenant_id / project_id is unique within a 
single deployment of keystone (i.e. shared/replicated backends such as a 
Percona cluster, which cannot be shared when crossing between differing IaaS 
deployers/providers). If there is ever a tenant_id conflict (in theory possible 
with LDAP assignment or an unlucky random UUID generation) between 
installations, you could end up granting access to a given user that should not 
exist. 

With that in mind, how does Keystone fit into this conversation? What is 
expected of identity? What would keystone need to actually support to make this 
a reality?

I ask because I've only seen information on nova, glance, cinder, and 
ceilometer in the documentation. Based upon the above information I outlined, I 
would be concerned with an assumption that identity would just work without 
also being part of this conversation. 

Thanks,
Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-12 Thread Armando M.
On 12 December 2014 at 23:01, Yuriy Shovkoplias yshovkopl...@mirantis.com
wrote:

 Dear neutron community,

 Can you please clarify couple points on the vendor code decomposition?
  - Assuming I would like to create the new driver now (Kilo development
 cycle) - is it already allowed (or mandatory) to follow the new process?

 https://review.openstack.org/#/c/134680/


Yes. See [1] for more details.


 - Assuming the new process is already in place, are the following
 guidelines still applicable for the vendor integration code (not for vendor
 library)?

 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 The following is a list of requirements for inclusion of code upstream:

- Participation in Neutron meetings, IRC channels, and email lists.
- A member of the plugin/driver team participating in code reviews of
other upstream code.


I see no reason why you wouldn't follow those guidelines, as a general rule
of thumb. Having said that, some of the wording would need to be tweaked to
take into account the new contribution model. Bear in mind that I
started adding some developer documentation in [2], to give a practical
guide to the proposal. More to follow.

Cheers,
Armando

[1]
http://docs-draft.openstack.org/80/134680/17/check/gate-neutron-specs-docs/2a7afdd/doc/build/html/specs/kilo/core-vendor-decomposition.html#adoption-and-deprecation-policy
[2]
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/core-vendor-decomposition,n,z


 Regards,
 Yuri

 On Thu, Dec 11, 2014 at 3:23 AM, Gary Kotton gkot...@vmware.com wrote:


 On 12/11/14, 12:50 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 +100. I vote -1 there and would like to point out that we *must* keep
 history during the split, and split from u/s code base, not random
 repositories. If you don't know how to achieve this, ask oslo people,
 they did it plenty of times when graduating libraries from
 oslo-incubator.
 /Ihar
 
 On 10/12/14 19:18, Cedric OLLIVIER wrote:
  https://review.openstack.org/#/c/140191/
 
  2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com
  mailto:arma...@gmail.com:
 
 
  By the way, if Kyle can do it in his teeny tiny time that he has
  left after his PTL duties, then anyone can do it! :)
 
  https://review.openstack.org/#/c/140191/

 This patch loses the recent hacking changes that we have made. This is a
 slight example to try and highlight the problem that we may incur as a
 community.

 
  Fully cloning Dave Tucker's repository [1] and the outdated fork of
  the ODL ML2 MechanismDriver included raises some questions (e.g.
  [2]). I wish the next patch set removes some files. At least it
  should take the mainstream work into account (e.g. [3]) .
 
  [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
  https://review.openstack.org/#/c/113330/ [3]
  https://review.openstack.org/#/c/96459/
 
 
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 
 iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI
 ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY
 E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349
 PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl
 l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx
 lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM=
 =dfe/
 -END PGP SIGNATURE-
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev