Re: [openstack-dev] [nova] Kilo specs review day

2014-12-11 Thread Sahid Orentino Ferdjaoui
On Thu, Dec 11, 2014 at 08:41:49AM +1100, Michael Still wrote:
 Hi,
 
 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.
 
 Therefore, I propose that Friday this week be a specs review day. We
 need to burn down the queue of specs needing review, as well as
 abandoning those which aren’t getting regular updates based on our
 review comments.
 
 I’d appreciate nova-specs-core doing reviews on Friday, but its always
 super helpful when non-cores review as well. 

Sure, it could be *super* useful :) - I will try to help in this way.

 A +1 for a developer or
 operator gives nova-specs-core a good signal of what might be ready to
 approve, and that helps us optimize our review time.
 
 For reference, the specs to review may be found at:
 
 
 https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z
 
 Thanks heaps,
 Michael
 
 -- 
 Rackspace Australia
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Maxime Leroy
On Thu, Dec 11, 2014 at 2:37 AM, henry hly henry4...@gmail.com wrote:
 On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
[..]

 The problem is that we effectively prevent running an out of tree Neutron
 driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
 that isn't in Nova, as we can't use out of tree code and we won't accept in
 code ones for out of tree drivers.


+1 well said !

 The question is, do we really need such flexibility for so many nova vif 
 types?


Are we going to accept a new VIF_TYPE in nova if it's only used by an
external ml2/l2 plugin ?

 I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
 nova shouldn't known too much details about switch backend, it should
 only care about the VIF itself, how the VIF is plugged to switch
 belongs to Neutron half.

VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER are nice if your out-of-tree l2/ml2
plugin needs a tap interface or a vhostuser socket.

But if your external l2/ml2 plugin needs a specific type of NIC
(i.e. a new get_config method to provide specific parameters to libvirt
for the NIC) that is not supported in the nova tree, you still need to
have a plugin mechanism.

[..]
 Your issue is one of testing.  Is there any way we could set up a better
 testing framework for VIF drivers where Nova interacts with something to
 test the plugging mechanism actually passes traffic?  I don't believe
 there's any specific limitation on it being *Neutron* that uses the plugging
 interaction.

My spec proposes to use the same plugin mechanism for the vif drivers
in the tree and for the external vif drivers. Please see my RFC patch:
https://review.openstack.org/#/c/136857/

Maxime

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

 On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:
 Dear all  TC  PTL,

 In the 40 minutes cross-project summit session “Approaches for 
 scaling out”[1], almost 100 peoples attended the meeting, and the 
 conclusion is that cells can not cover the use cases and 
 requirements which the OpenStack cascading solution[2] aim to 
 address, the background including use cases and requirements is also 
 described in the mail.

I must admit that this was not the reaction I came away from the discussion
with. There was a lot of confusion, and as we started looking closer, many
(or perhaps most) people speaking up in the room did not agree that the
requirements being stated are things we want to try to satisfy.

[joehuang] Could you please confirm your opinion: 1) cells cannot cover the use
cases and requirements which the OpenStack cascading solution aims to address;
2) further discussion is needed on whether to satisfy those use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:
 Hello, Davanum,
 
 Thanks for your reply.
 
 Cells can't meet the demand for the use cases and requirements described in 
 the mail. 

You're right that cells doesn't solve all of the requirements you're
discussing. Cells addresses scale in a region. My impression from the summit
session and other discussions is that the scale issues addressed by cells are
considered a priority, while the global API bits are not.

[joehuang] Agreed that cells is a first-class priority.

 1. Use cases
 a). Vodafone use case[4](OpenStack summit speech video from 9'02
 to 12'30 ), establishing globally addressable tenants which result 
 in efficient services deployment.

 Keystone has been working on federated identity.  
That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack networking
for tenants. The tenant's VMs/volumes may be allocated in different data centers
geographically, but a virtual network (L2/L3/FW/VPN/LB) should be built for each
tenant automatically and isolated between tenants. Keystone federation can help
with authorization automation, but the cross-OpenStack network automation challenge
is still there.
Using a proprietary orchestration layer can solve the automation issue, but VDF
doesn't want a proprietary API on the north-bound side, because no ecosystem is
available. And other issues, for example how to distribute images, also cannot
be solved by Keystone federation.

 b). Telefonica use case[5], create virtual DC( data center) cross 
 multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each other 
with high bandwidth and low latency, that's one conversation.  
My impression is that you want to provide a single OpenStack API on top of 
globally distributed DCs.  I honestly don't see that as a problem we should 
be trying to tackle.  I'd rather continue to focus on making OpenStack work
*really* well split into regions.
 I think some people are trying to use cells in a geographically distributed
 way, as well.  I'm not sure that's a well understood or supported thing, though.
 Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) The split-region approach cannot provide cross-OpenStack networking
automation for tenants. 2) Exactly: the motivation for cascading is a single
OpenStack API on top of globally distributed DCs. Of course, cascading can
also be used for DCs close to each other with high bandwidth and low latency.
3) Comments from the cells folks are welcome.

 c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6, 
 8#. For NFV cloud, it’s in nature the cloud will be distributed but 
 inter-connected in many data centers.

I'm afraid I don't understand this one.  In many conversations about NFV, I 
haven't heard this before.

[joehuang] This is the ETSI requirements and use cases specification for NFV.
ETSI is the home of the Industry Specification Group for NFV. In Figure 14
(virtualization of EPC) of this document, you can see that the operator's
cloud includes many data centers that provide connection services to end users via
inter-connected VNFs. The requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the
requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW etc.) to run over
the cloud, e.g. migrating traditional telco applications from proprietary hardware to the
cloud. Not all NFV requirements have been covered yet. Forgive me, there are so
many telco terms here.


 2.requirements
 a). The operator has multiple sites cloud; each site can use one or 
 multiple vendor’s 

Re: [openstack-dev] [Ironic] Some questions about Ironic service

2014-12-11 Thread xianchaobo
Hi,Fox Kevin M



Thanks for your help.

Also, I want to know whether these features will be implemented in Ironic.

Do we have a plan to implement them?



Thanks

Xianchaobo


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Tuesday, December 09, 2014 5:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service

No to questions 1, 3, and 4. Yes to 2, but very minimally.


From: xianchaobo
Sent: Monday, December 08, 2014 10:29:50 PM
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Cc: Luohao (brian)
Subject: [openstack-dev] [Ironic] Some questions about Ironic service
Hello, all

I'm trying to install and configure the Ironic service, and something confused me.
I created two neutron networks, a public network and a private network.
The private network is used to deploy physical machines;
the public network is used to provide floating IPs.


(1) Can the private network type be VLAN or VXLAN? (In the install guide, the
network type is flat.)

(2) Can the network of the deployed physical machines be managed by neutron?

(3) Can different tenants have their own networks to manage physical machines?

(4) Does Ironic provide some mechanism for deployed physical machines
to use storage such as shared storage or cinder volumes?

Thanks,
XianChaobo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Oslo.messaging error

2014-12-11 Thread Ekaterina Chernova
Hi!

I recommend you create a separate user and password in RabbitMQ and not use
the 'guest' user.
Don't forget to edit the config file accordingly.
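
For reference, a minimal sketch of the commands and settings involved (the
user name, password and config section below are examples only; exact option
names and sections depend on your Murano and oslo.messaging versions):

  # create a dedicated RabbitMQ user instead of 'guest'
  rabbitmqctl add_user murano MURANO_RABBIT_PASSWORD
  rabbitmqctl set_permissions murano ".*" ".*" ".*"

  # then point murano.conf at it, e.g.:
  [DEFAULT]
  rabbit_host = <your rabbit host>
  rabbit_userid = murano
  rabbit_password = MURANO_RABBIT_PASSWORD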

I also recommend joining our IRC channel #murano on Freenode. We will help you
set up your environment step by step!

Regards,
Kate.

On Thu, Dec 11, 2014 at 10:12 AM, raghavendra@accenture.com wrote:







 HI Team,



 I am installing Murano on the Ubuntu 14.04 Juno setup and when I try the
 below install murano-api I encounter the below error. Please assist.



 When I install



 I am using the Murano guide link provided below:

 https://murano.readthedocs.org/en/latest/install/manual.html





 I am trying to execute the section 7



 1.Open a new console and launch Murano API. A separate terminal is
 required because the console will be locked by a running process.

 2. $ cd ~/murano/murano

 3. $ tox -e venv -- murano-api \

 4.  --config-file ./etc/murano/murano.conf





 I am getting the below error : I have a Juno Openstack ready and trying to
 integrate Murano





 2014-12-11 12:28:03.676 9524 INFO eventlet.wsgi [-] (9524) wsgi starting
 up on http://0.0.0.0:8082/

 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Updating
 statistic information. update_stats
 /root/murano/murano/murano/common/statservice.py:57

 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats
 object: murano.api.v1.request_statistics.RequestStatisticsCollection
 object at 0x7ff72837d410 update_stats
 /root/murano/murano/murano/common/statservice.py:58

 2014-12-11 12:28:03.677 9524 DEBUG murano.common.statservice [-] Stats:
 Requests:0  Errors: 0 Ave.Res.Time 0.

 Per tenant: {} update_stats
 /root/murano/murano/murano/common/statservice.py:64

 2014-12-11 12:28:03.692 9524 DEBUG oslo.db.sqlalchemy.session [-] MySQL
 server mode set to
 STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 _check_effective_sql_mode
 /root/murano/murano/.tox/venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/session.py:509

 2014-12-11 12:28:06.721 9524 ERROR oslo.messaging._drivers.impl_rabbit [-]
 AMQP server 192.168.x.x:5672 closed the connection. Check login
 credentials: Socket closed



 Warm Regards,

 *Raghavendra Lad*



 --

 This message is for the designated recipient only and may contain
 privileged, proprietary, or otherwise confidential information. If you have
 received it in error, please notify the sender immediately and delete the
 original. Any other use of the e-mail by you is prohibited. Where allowed
 by local law, electronic communications with Accenture and its affiliates,
 including e-mail and instant messaging (including content), may be scanned
 by our systems for the purposes of information security and assessment of
 internal compliance with Accenture policy.

 __

 www.accenture.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-11 Thread Julien Danjou
On Wed, Dec 10 2014, Joshua Harlow wrote:


[…]

 Or in general any other comments/ideas about providing such a deprecation
 pattern library?

+1

 * debtcollector

made me think of loanshark :)

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Daniel P. Berrange
On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
 On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
  On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
  wrote:
 
 
  So the problem of Nova review bandwidth is a constant problem across all
  areas of the code. We need to solve this problem for the team as a whole
  in a much broader fashion than just for people writing VIF drivers. The
  VIF drivers are really small pieces of code that should be straightforward
  to review  get merged in any release cycle in which they are proposed.
  I think we need to make sure that we focus our energy on doing this and
  not ignoring the problem by breaking stuff off out of tree.
 
 
  The problem is that we effectively prevent running an out of tree Neutron
  driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
  that isn't in Nova, as we can't use out of tree code and we won't accept in
  code ones for out of tree drivers.
 
 The question is, do we really need such flexibility for so many nova vif 
 types?
 
 I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
 nova shouldn't known too much details about switch backend, it should
 only care about the VIF itself, how the VIF is plugged to switch
 belongs to Neutron half.
 
 However I'm not saying to move existing vif driver out, those open
 backend have been used widely. But from now on the tap and vhostuser
 mode should be encouraged: one common vif driver to many long-tail
 backend.

Yes, I really think this is a key point. When we introduced the VIF type
mechanism we never intended for there to be soo many different VIF types
created. There is a very small, finite number of possible ways to configure
the libvirt guest XML and it was intended that the VIF types pretty much
mirror that. This would have given us about 8 distinct VIF type maximum.

I think the reason for the larger than expected number of VIF types is
that the drivers are being written to require some arbitrary tools to
be invoked in the plug & unplug methods. It would really be better if
those could be accomplished in the Neutron code rather than the Nova code, via
a host agent run & provided by the Neutron mechanism.  This would let
us have a very small number of VIF types and so avoid the entire problem
that this thread is bringing up.

Failing that though, I could see a way to accomplish a similar thing
without a Neutron launched agent. If one of the VIF type binding
parameters were the name of a script, we could run that script on
plug & unplug. So we'd have a finite number of VIF types, and each
new Neutron mechanism would merely have to provide a script to invoke.

e.g. consider the existing midonet & iovisor VIF types as an example.
Both of them use the libvirt ethernet config, but have different
things running in their plug methods. If we had a mechanism for
associating a plug script with a vif type, we could use a single
VIF type for both.

e.g. iovisor port binding info would contain

  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-iovisor-vif-plug

while midonet would contain

  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-midonet-vif-plug


And so you see implementing a new Neutron mechanism in this way would
not require *any* changes in Nova whatsoever. The work would be entirely
self-contained within the scope of Neutron. It is simply a packaging
task to get the vif script installed on the compute hosts, so that Nova
can execute it.
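
To make this concrete, a rough sketch of what the Nova side could look like
(purely illustrative; none of these function or field names exist in Nova
today, and the script arguments would have to be part of an agreed
Nova/Neutron contract):

  import subprocess

  def plug_vif(vif):
      # Generic 'ethernet' plug: delegate any backend-specific work to the
      # script that Neutron advertised in the port binding details.
      script = vif.get('details', {}).get('vif_plug_script')
      if script:
          # Passing the device name and port id is just a guess here.
          subprocess.check_call([script, 'plug', vif['devname'], vif['id']])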

This is essentially providing a flexible VIF plugin system for Nova,
without having to have it plug directly into the Nova codebase with
the API & RPC stability constraints that implies.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sriov] PciDeviceRequestFailed error

2014-12-11 Thread Akilesh K
Hey guys, sorry for the delayed reply. The problem was with the whitelist. I
had whitelisted the ID of the physical function instead of the virtual
function.
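
For anyone hitting the same thing, this is roughly what the entry looks like
in nova.conf on Juno (the vendor/product IDs below are placeholders - use the
IDs of your *virtual* functions as reported by lspci -nn, and your own
physical network name):

  [DEFAULT]
  pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1"}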


On Mon, Dec 8, 2014 at 9:33 AM, shihanzhang ayshihanzh...@126.com wrote:

 I think the problem is in nova, can you show your
 pci_passthrough_whitelist in nova.conf?






 At 2014-12-04 18:26:21, Akilesh K akilesh1...@gmail.com wrote:

 Hi,
 I am using neutron-plugin-sriov-agent.

 I have configured pci_whitelist  in nova.conf

 I have configured ml2_conf_sriov.ini.

 But when I launch instance I get the exception in subject.

 On further checking with the help of some forum messages, I discovered
 that pci_stats are empty.
 mysql> select hypervisor_hostname,pci_stats from compute_nodes;
 +-+---+
 | hypervisor_hostname | pci_stats |
 +-+---+
 | openstack | []|
 +-+---+
 1 row in set (0.00 sec)


 Further to this I found that PciDeviceStats.pools is an empty list too.

 Can anyone tell me what I am missing.


 Thank you,
 Ageeleshwar K






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-11 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

+100. I vote -1 there and would like to point out that we *must* keep
history during the split, and split from u/s code base, not random
repositories. If you don't know how to achieve this, ask oslo people,
they did it plenty of times when graduating libraries from oslo-incubator.
/Ihar

On 10/12/14 19:18, Cedric OLLIVIER wrote:
 https://review.openstack.org/#/c/140191/
 
 2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com 
 mailto:arma...@gmail.com:
 
 
 By the way, if Kyle can do it in his teeny tiny time that he has 
 left after his PTL duties, then anyone can do it! :)
 
 https://review.openstack.org/#/c/140191/
 
 Fully cloning Dave Tucker's repository [1] and the outdated fork of
 the ODL ML2 MechanismDriver included raises some questions (e.g.
 [2]). I wish the next patch set removes some files. At least it
 should take the mainstream work into account (e.g. [3]) .
 
 [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
 https://review.openstack.org/#/c/113330/ [3]
 https://review.openstack.org/#/c/96459/
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI
ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY
E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349
PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl
l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx
lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM=
=dfe/
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-11 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 10/12/14 22:12, Jeremy Stanley wrote:
 On 2014-12-10 16:07:35 -0500 (-0500), Jay Pipes wrote:
 On 12/10/2014 04:05 PM, Jeremy Stanley wrote:
 I think the bigger question is whether the lack of a quota 
 implementation for everything a tenant could ever possibly 
 create is something we should have reported in secret, worked 
 under embargo, backported to supported stable branches, and 
 announced via high-profile security advisories once fixed.
 
 Sure, fine.
 
 Any tips for how to implement new quota features in a way that the 
 patches won't violate our stable backport policies?
 

If we consider it a security issue worth a CVE, then security concerns
generally beat stability concerns. We'll obviously need to document
the change in default behaviour in the release notes though, and maybe
provide a documented way to disable the change for stable releases (I
suspect we already have a way to disable specific quotas, but we
should make sure that's the case and provide operators with commands ready
to be executed to achieve this).
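
For illustration only (option names differ per project and release), this is
the kind of knob I mean - e.g. in nova.conf a value of -1 makes an existing
quota unlimited:

  [DEFAULT]
  quota_security_groups = -1
  quota_security_group_rules = -1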

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUiXeoAAoJEC5aWaUY1u57i3EIAMZp5XoTfayE2EblAruo+hK+
I4c8EvrhCNOVe51BsI42VFkuqp4vf9nKpHYz/PtSOp/9tLxXgpt0tFgEEOUS2xR9
rIMR0vkJSLWgT6v7aGMR7cDQ1MSGkmjCQl2SgmRgsyG0Jcx1/+El9zUToTI9hTFu
Yw97cN04j/pFda7Noo91ck7htq0pSCsLtR2jRVePgcIc6UeW372aaXn8zboTtCks
c03VXiZHc5TpZurZiFopT+CLbiDl5k0JvMuptP7YOhnfzzNsaaL/Bd8+9f6SGpol
Dy7Ha2CDsAl1WEMx0VvAHvH5O4YRbbE0sIvY1r0pxmMQB8lJwx6KfcDwIrer2Og=
=ZY3+
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] XenAPI questions

2014-12-11 Thread Bob Ball
Hi Yamamoto,

XenAPI and Neutron do work well together, and we have a private CI that is
running Neutron jobs.  As it's not currently the public CI, it's harder to
access logs.
We're working on moving the existing XenServer CI from a nova-network
base to a neutron base, at which point the logs will of course be publicly
accessible and tested against any changes, thus making it easy to answer
questions such as those below.

Bob

 -Original Message-
 From: YAMAMOTO Takashi [mailto:yamam...@valinux.co.jp]
 Sent: 11 December 2014 03:17
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] XenAPI questions
 
 hi,
 
 i have questions for XenAPI folks:
 
 - what's the status of XenAPI support in neutron?
 - is there any CI covering it?  i want to look at logs.
 - is it possible to write a small program which runs with the xen
   rootwrap and proxies OpenFlow channel between domains?
   (cf. https://review.openstack.org/#/c/138980/)
 
 thank you.
 
 YAMAMOTO Takashi
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-11 Thread Gary Kotton

On 12/11/14, 12:50 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

+100. I vote -1 there and would like to point out that we *must* keep
history during the split, and split from u/s code base, not random
repositories. If you don't know how to achieve this, ask oslo people,
they did it plenty of times when graduating libraries from oslo-incubator.
/Ihar

On 10/12/14 19:18, Cedric OLLIVIER wrote:
 https://review.openstack.org/#/c/140191/
 
 2014-12-09 18:32 GMT+01:00 Armando M. arma...@gmail.com
 mailto:arma...@gmail.com:
 
 
 By the way, if Kyle can do it in his teeny tiny time that he has
 left after his PTL duties, then anyone can do it! :)
 
 https://review.openstack.org/#/c/140191/

This patch loses the recent hacking changes that we have made. This is a
small example to try and highlight the problems that we may incur as a
community.

 
 Fully cloning Dave Tucker's repository [1] and the outdated fork of
 the ODL ML2 MechanismDriver included raises some questions (e.g.
 [2]). I wish the next patch set removes some files. At least it
 should take the mainstream work into account (e.g. [3]) .
 
 [1] https://github.com/dave-tucker/odl-neutron-drivers [2]
 https://review.openstack.org/#/c/113330/ [3]
 https://review.openstack.org/#/c/96459/
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUiXcIAAoJEC5aWaUY1u57dBMH/17unffokpb0uxqewPYrPNMI
ukDzG4dW8mIP3yfbVNsHQXe6gWj/kj/SkBWJrO13BusTu8hrr+DmOmmfF/42s3vY
E+6EppQDoUjR+QINBwE46nU+E1w9hIHyAZYbSBtaZQ32c8aQbmHmF+rgoeEQq349
PfpPLRI6MamFWRQMXSgF11VBTg8vbz21PXnN3KbHbUgzI/RS2SELv4SWmPgKZCEl
l1K5J1/Vnz2roJn4pr/cfc7vnUIeAB5a9AuBHC6o+6Je2RDy79n+oBodC27kmmIx
lVGdypoxZ9tF3yfRM9nngjkOtozNzZzaceH0Sc/5JR4uvNReVN4exzkX5fDH+SM=
=dfe/
-END PGP SIGNATURE-



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-11 Thread George Shuklin



On 12/10/2014 10:34 PM, Jay Pipes wrote:

On 12/10/2014 02:43 PM, George Shuklin wrote:

I have some small discussion in launchpad: is lack of a quota for
unprivileged user counted as security bug (or at least as a bug)?

If user can create 100500 objects in database via normal API and ops
have no way to restrict this, is it OK for Openstack or not?


That would be a major security bug. Please do file one and we'll get 
on it immediately.




(private bug at that moment) https://bugs.launchpad.net/ossa/+bug/1401170

There is discussion about this. Quote:

Jeremy Stanley (fungi):
Traditionally we've not considered this sort of exploit a security 
vulnerability. The lack of built-in quota for particular kinds of 
database entries isn't necessarily a design flaw, but even if it 
can/should be fixed it's likely not going to get addressed in stable 
backports, is not something for which we would issue a security 
advisory, and so doesn't need to be kept under secret embargo. Does 
anyone else disagree?


If anyone has access to the OSSA tracker, please share your opinion in that bug.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Anna Kamyshnikova
Hello everyone!

In neutron there is a rather old bug [1] about adding uniqueness for the
security group name and tenant id. I found this idea reasonable and started
working on a fix for this bug [2]. I think it is good to add a
unique constraint because:

1) In nova there is such constraint for security groups
https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157.
So I think that it is rather disruptive that it is impossible to create a
security group with the same name in nova, but possible in neutron.
2) Users get confused by having security groups with the same name.
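
For illustration, a minimal sketch of the general shape of such a constraint
in a SQLAlchemy model (this is not the actual patch [2]; the table and
constraint names are only examples):

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class SecurityGroup(Base):
      __tablename__ = 'securitygroups'
      id = sa.Column(sa.String(36), primary_key=True)
      tenant_id = sa.Column(sa.String(255), index=True)
      name = sa.Column(sa.String(255))
      __table_args__ = (
          # at most one security group with a given name per tenant
          sa.UniqueConstraint('tenant_id', 'name',
                              name='uniq_securitygroups0tenant_id0name'),
      )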

In comments on the proposed change, Assaf Muller and Maru Newby objected to this
solution and suggested another option, so I think we need more eyes on this
change.

I would like to ask you to share your thoughts on this topic.

[1] - https://bugs.launchpad.net/neutron/+bug/1194579
[2] - https://review.openstack.org/135006
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-11 Thread Thierry Carrez
Kyle Mestery wrote:
 Folks, just a heads up that we have completed splitting out the services
 (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3]. This
 was all done in accordance with the spec approved here [4]. Thanks to
 all involved, but a special thanks to Doug and Anita, as well as infra.
 Without all of their work and help, this wouldn't have been possible!

Congrats!

That's a good example where having an in-person sprint really
facilitates getting things done in a reasonable amount of time -- just
having a set of interested people up at the same time and focused on the
same priorities helps!

Now let's see if we manage to publish those all for kilo-1 next week :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-11 Thread Thierry Carrez
George Shuklin wrote:
 
 
 On 12/10/2014 10:34 PM, Jay Pipes wrote:
 On 12/10/2014 02:43 PM, George Shuklin wrote:
 I have some small discussion in launchpad: is lack of a quota for
 unprivileged user counted as security bug (or at least as a bug)?

 If user can create 100500 objects in database via normal API and ops
 have no way to restrict this, is it OK for Openstack or not?

 That would be a major security bug. Please do file one and we'll get
 on it immediately.

 
 (private bug at that moment) https://bugs.launchpad.net/ossa/+bug/1401170
 
 There is discussion about this. Quote:
 
 Jeremy Stanley (fungi):
 Traditionally we've not considered this sort of exploit a security
 vulnerability. The lack of built-in quota for particular kinds of
 database entries isn't necessarily a design flaw, but even if it
 can/should be fixed it's likely not going to get addressed in stable
 backports, is not something for which we would issue a security
 advisory, and so doesn't need to be kept under secret embargo. Does
 anyone else disagree?
 
 If anyone have access to OSSA tracker, please say your opinion in that bug.

It also depends a lot on the details. Is there amplification ? Is there
a cost associated ? I bet most public cloud providers would be fine with
a user creating and paying for running 100500 instances, and that user
would certainly end up creating at least 100500 objects in database via
normal API.

So this is really a per-report call, which is why we usually discuss
them all separately.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-11 Thread Murugan, Visnusaran


 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Wednesday, December 10, 2014 11:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown
 
 You really need to get a real email client with quoting support ;)
Apologies :) I screwed up my mail client's configuration.

 
 On 10/12/14 06:42, Murugan, Visnusaran wrote:
  Well, we still have to persist the dependencies of each version of a
 resource _somehow_, because otherwise we can't know how to clean them
 up in the correct order. But what I think you meant to say is that this
 approach doesn't require it to be persisted in a separate table where the
 rows are marked as traversed as we work through the graph.
 
  [Murugan, Visnusaran]
  In case of rollback where we have to cleanup earlier version of resources,
 we could get the order from old template. We'd prefer not to have a graph
 table.
 
 In theory you could get it by keeping old templates around. But that means
 keeping a lot of templates, and it will be hard to keep track of when you want
 to delete them. It also means that when starting an update you'll need to
 load every existing previous version of the template in order to calculate the
 dependencies. It also leaves the dependencies in an ambiguous state when a
 resource fails, and although that can be worked around it will be a giant pain
 to implement.
 

Agree that looking at all templates for a delete is not good. But barring
complexity, we feel we could achieve it by having an update and a
delete stream for a stack update operation. I will elaborate in detail in the
etherpad sometime tomorrow :)

 I agree that I'd prefer not to have a graph table. After trying a couple of
 different things I decided to store the dependencies in the Resource table,
 where we can read or write them virtually for free because it turns out that
 we are always reading or updating the Resource itself at exactly the same
 time anyway.
 

Not sure how this will work in an update scenario when a resource does not
change but its dependencies do. Also, taking care of deleting resources in order will
be an issue. This implies that there will be different versions of a resource,
which will complicate things even further.

  This approach reduces DB queries by waiting for completion notification
 on a topic. The drawback I see is that delete stack stream will be huge as it
 will have the entire graph. We can always dump such data in
 ResourceLock.data Json and pass a simple flag load_stream_from_db to
 converge RPC call as a workaround for delete operation.
 
  This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with
 the key difference that the data is stored in-memory in a Heat engine rather
 than the database.
 
  I suspect it's probably a mistake to move it in-memory for similar
  reasons to the argument Clint made against synchronising the marking off
 of dependencies in-memory. The database can handle that and the problem
 of making the DB robust against failures of a single machine has already been
 solved by someone else. If we do it in-memory we are just creating a single
 point of failure for not much gain. (I guess you could argue it doesn't 
 matter,
 since if any Heat engine dies during the traversal then we'll have to kick off
 another one anyway, but it does limit our options if that changes in the
 future.) [Murugan, Visnusaran] Resource completes, removes itself from
 resource_lock and notifies engine. Engine will acquire parent lock and 
 initiate
 parent only if all its children are satisfied (no child entry in 
 resource_lock).
 This will come in place of Aggregator.
 
 Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I did.
 The three differences I can see are:
 
 1) I think you are proposing to create all of the sync points at the start of 
 the
 traversal, rather than on an as-needed basis. This is probably a good idea. I
 didn't consider it because of the way my prototype evolved, but there's now
 no reason I can see not to do this.
 If we could move the data to the Resource table itself then we could even
 get it for free from an efficiency point of view.

+1. But we will need the engine_id to be stored somewhere for recovery purposes
(in an easily queried format).
Sync points are created as needed; a single resource is enough to restart that
entire stream.
I think there is a disconnect in our understanding. I will detail it as well in
the etherpad.

 2) You're using a single list from which items are removed, rather than two
 lists (one static, and one to which items are added) that get compared.
 Assuming (1) then this is probably a good idea too.

Yeah. We have a single list per active stream, which works by removing
complete/satisfied resources from it.
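
To illustrate the idea (the table, column and function names here are invented
for this sketch, not actual Heat code): when a resource finishes it deletes its
own entry from the per-stream list, and the parent is initiated only once no
child entries remain:

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class ResourceLock(Base):              # hypothetical table for this sketch
      __tablename__ = 'resource_lock'
      resource_id = sa.Column(sa.String(36), primary_key=True)
      stream_id = sa.Column(sa.String(36), index=True)
      parent_id = sa.Column(sa.String(36), index=True)

  def mark_complete(session, notify_engine, stream_id, resource_id, parent_id):
      # the finished resource removes its own entry from the stream's list
      session.query(ResourceLock).filter_by(
          stream_id=stream_id, resource_id=resource_id).delete()
      # initiate the parent only when none of its children are still pending
      remaining = session.query(ResourceLock).filter_by(
          stream_id=stream_id, parent_id=parent_id).count()
      if remaining == 0:
          notify_engine(parent_id)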

 3) You're suggesting to notify the engine unconditionally and let the engine
 decide if the list is empty. That's probably not a good idea - not only does 
 

Re: [openstack-dev] Lack of quota - security bug or not?

2014-12-11 Thread Clark, Robert Graham
On 11/12/2014 13:16, Thierry Carrez thie...@openstack.org wrote:


George Shuklin wrote:
 
 
 On 12/10/2014 10:34 PM, Jay Pipes wrote:
 On 12/10/2014 02:43 PM, George Shuklin wrote:
 I have some small discussion in launchpad: is lack of a quota for
 unprivileged user counted as security bug (or at least as a bug)?

 If user can create 100500 objects in database via normal API and ops
 have no way to restrict this, is it OK for Openstack or not?

 That would be a major security bug. Please do file one and we'll get
 on it immediately.

 
 (private bug at that moment)
https://bugs.launchpad.net/ossa/+bug/1401170
 
 There is discussion about this. Quote:
 
 Jeremy Stanley (fungi):
 Traditionally we've not considered this sort of exploit a security
 vulnerability. The lack of built-in quota for particular kinds of
 database entries isn't necessarily a design flaw, but even if it
 can/should be fixed it's likely not going to get addressed in stable
 backports, is not something for which we would issue a security
 advisory, and so doesn't need to be kept under secret embargo. Does
 anyone else disagree?
 
 If anyone have access to OSSA tracker, please say your opinion in that
bug.

It also depends a lot on the details. Is there amplification ? Is there
a cost associated ? I bet most public cloud providers would be fine with
a user creating and paying for running 100500 instances, and that user
would certainly end up creating at least 100500 objects in database via
normal API.

So this is really a per-report call, which is why we usually discuss
them all separately.

-- 
Thierry Carrez (ttx)

Most public cloud providers would not be in any way happy with a new
customer spinning up anything like that number of instances. Fraud and
Abuse are major concerns for public cloud providers. Automated checks take
time.

Imagine someone using a stolen but not yet cancelled credit card spinning
up 1000s of instances. The card checks out ok when the user signs up but
has been cancelled by the time the billing cycle closes - massive loss to
the cloud provider in at least three ways: direct lost revenue from that
customer, the loss of capacity which possibly stopped other customers
bringing business to the platform, and finally the likelihood that the
account was set up for malicious purposes, either internet facing or
against the cloud infrastructure itself.

Please add me to the bug if you'd like to discuss further.

-Rob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Jay Pipes

On 12/11/2014 07:22 AM, Anna Kamyshnikova wrote:

Hello everyone!

In neutron there is a rather old bug [1] about adding uniqueness for
security group name and tenant id. I found this idea reasonable and
started working on fix for this bug [2]. I think it is good to add a
uniqueconstraint because:

1) In nova there is such constraint for security groups
https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157.
So I think that it is rather disruptive that it is impossible to create
security group with the same name in nova, but possible in neutron.
2) Users get confused having security groups with the same name.

In comment for proposed change Assaf Muller and Maru Newby object for
such solution and suggested another option, so I think we need more eyes
on this change.

I would like to ask you to share your thoughts on this topic.
[1] - https://bugs.launchpad.net/neutron/+bug/1194579
[2] - https://review.openstack.org/135006


I'm generally in favor of making name attributes opaque, utf-8 strings 
that are entirely user-defined and have no constraints on them. I 
consider the name to be just a tag that the user places on some 
resource. It is the resource's ID that is unique.


I do realize that Nova takes a different approach to *some* resources, 
including the security group name.


End of the day, it's probably just a personal preference whether names 
should be unique to a tenant/user or not.


Maru had asked me my opinion on whether names should be unique and I 
answered my personal opinion that no, they should not be, and if Neutron 
needed to ensure that there was one and only one default security group 
for a tenant, that a way to accomplish such a thing in a race-free way, 
without use of SELECT FOR UPDATE, was to use the approach I put into the 
pastebin on the review above.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [project-config][infra] Advanced way to run specific jobs against code changes

2014-12-11 Thread Denis Makogon
Good day Stackers.


I'd like to raise a question about implementing custom pipelines for Zuul.
For those of you who are pretty familiar with project-config and infra
itself, it won't be news that for now the Zuul layout supports only a few
pipeline types [1]
https://github.com/openstack-infra/project-config/blob/17990d544f5162b9eebaa6b9781e7abbeab57b42/zuul/layout.yaml

Most OpenStack projects maintain more than one type of driver
(for Nova - virt drivers, Trove - datastore drivers,
Cinder - volume backends, etc.). And, as can be seen, the existing Jenkins
check jobs do not utilize infra resources wisely.

This is a real problem; just remember the end of every release - the number of
check/recheck jobs is huge.


So, how can we utilize resources more wisely and run only the needed check jobs?
It's like how we've been handling unstable new check jobs - putting them into the
'experimental' pipeline. So why can't we give projects the ability
to define their own pipelines?


For example, as a code reviewer, I see that a patch touches specific
functionality of Driver A, and I know that the project's testing infrastructure
provides an ability to examine the specific workflow for Driver A. Then it seems
more than valid to post a comment on the review like “check driver-a”. As you can
see, I want to ask gerrit to trigger a custom pipeline for the given project. Let me
describe a more concrete example from the “real world”. In Trove we maintain 5
different drivers for different datastores, and it doesn't look like a good thing to
run all check jobs against code that doesn't actually touch any of the existing
datastore drivers (this is what we have right now [2]
https://github.com/openstack-infra/project-config/blob/17990d544f5162b9eebaa6b9781e7abbeab57b42/zuul/layout.yaml#L1500-L1526).

   Now here comes my proposal. I'd like to extend the existing Zuul
pipelines to support any needed check jobs (see the example of TripleO, [3]
https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L174-L220).

   But, as I can see, there are possible problems with such an
approach, so I also have an alternative proposal to the one above. The other
way to deal with this is to use REGEX 'files' for job definitions (example:
the requirements check job [4]
https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L659-L665).
In this case we'd still maintain only one 'experimental' pipeline
for all second-priority jobs. To make a small summary, two ways were proposed:



   - Pipeline(s) per project. Pros: a reviewer can trigger a specific pipeline
     by himself. Cons: spamming status/zuul.

   - REGEX 'files' per additional job (see the sketch below).
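
As an illustration of the second option, a job restricted by a 'files' regex
would look roughly like this in layout.yaml (Zuul v2 syntax; the job name and
the path are made-up Trove examples, not existing jobs):

  jobs:
    - name: gate-trove-functional-dsvm-driver-a
      files:
        - '^trove/guestagent/datastore/driver_a/.*$'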


Sorry, but I'm not able to describe all the pros/cons for each proposal.
So, if you know them, please help me figure them out.


All thoughts/suggestions are welcome.


Kind regards

Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Some questions about Ironic service

2014-12-11 Thread Fox, Kevin M
I would hope yes to all, but there is a lot of hard work there. Some things 
there require neutron to configure physical switches. Some require a guest 
agent in the image to get cinder working, or cinder controlled hardware. And 
all require developers interested enough in making it happen.

No time frame on any of it.

Thanks,
Kevin


From: xianchaobo
Sent: Thursday, December 11, 2014 1:07:54 AM
To: openstack-dev@lists.openstack.org
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service


Hi,Fox Kevin M



Thanks for your help.

Also,I want to know whether these features will be implemented in Ironic?

Do we have a plan to implement them?



Thanks

Xianchaobo


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Tuesday, December 09, 2014 5:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Luohao (brian)
Subject: Re: [openstack-dev] [Ironic] Some questions about Ironic service

No to questions 1, 3, and 4. Yes to 2, but very minimally.


From: xianchaobo
Sent: Monday, December 08, 2014 10:29:50 PM
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Cc: Luohao (brian)
Subject: [openstack-dev] [Ironic] Some questions about Ironic service
Hello, all

I’m trying to install and configure Ironic service, something confused me.
I create two neutron networks, public network and private network.
Private network is used to deploy physical machines
Public network is used to provide floating ip.


(1) Private network type can be VLAN or VXLAN? (In install guide, the 
network type is flat)

(2) The network of deployed physical machines can be managed by neutron?

(3) Different tenants can have its own network to manage physical machines?

(4) Does the ironic provide some mechanism for deployed physical machines

to use storage such as shared storage,cinder volume?

Thanks,
XianChaobo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-11 Thread Daniel P. Berrange
On Tue, Dec 09, 2014 at 03:39:35PM +, Daniel P. Berrange wrote:
 On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
  Hello!
  
  There is a feature in HypervisorSupportMatrix 
  (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get Guest 
  Info. Does anybody know, what does it mean? I haven't found anything like 
  this neither in nova api nor in horizon and nova command line.
 
 I've pretty much no idea what the intention was for that field. I've
 been working on formally documenting all those things, but draw a blank
 for that
 
 FYI:
 
   https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini

The patch is now able to auto-generate nova docs showing the support matrix in
a more friendly fashion:

http://docs-draft.openstack.org/80/136380/2/check/gate-nova-docs/94c33ba/doc/build/html/support-matrix.html


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd status update

2014-12-11 Thread Dmitry Tantsur

Hi all!

As you know, I actively promote the ironic-discoverd project [1] as one of
the means to do hardware inspection for Ironic (see e.g. spec [2]), so I
decided it's worth giving some updates to the community from time to
time. This email is purely informative; you may safely skip it if
you're not interested.


Background
==

The discoverd project (I usually skip the ironic- part when talking
about it) solves the problem of populating information about a node in the
Ironic database without the help of any vendor-specific tool. This
information usually includes Nova scheduling properties (CPU, RAM, disk
size) and MACs for ports.


Introspection is done by booting a ramdisk on a node, collecting data 
there and posting it back to discoverd HTTP API. Thus actually discoverd 
consists of 2 components: the service [1] and the ramdisk [3]. The 
service handles 2 major tasks:
* Processing data posted by the ramdisk, i.e. finding the node in Ironic 
database and updating node properties with new data.
* Managing iptables so that the default PXE environment for 
introspection does not interfere with Neutron


The project was born from a series of patches to Ironic itself, after we
discovered that this change was going to be too intrusive. Discoverd was
actively tested as part of Instack [4] and its RPM is a part of Juno
RDO. After the Paris summit, we agreed on bringing it closer to the
Ironic upstream, and now discoverd is hosted on StackForge and tracks
bugs on Launchpad.


Future
==

The basic feature of discoverd: supply Ironic with properties required 
for scheduling, is pretty finished as of the latest stable series 0.2.


However, more features are planned for release 1.0.0 this January [5]. 
They go beyond the bare minimum of finding out CPU, RAM, disk size and 
NIC MAC's.


Plugability
~~~

An interesting feature of discoverd is support for plugins, which I 
prefer to call hooks. It's possible to hook into the introspection data 
processing chain in 2 places:
* Before any data processing. This opens an opportunity to adapt discoverd
to ramdisks that have a different data format. The only requirement is
that the ramdisk posts a JSON object.
* After a node is found in the Ironic database and ports are created for its
MACs, but before any actual data update. This gives an opportunity to
alter which properties discoverd is going to update.


Actually, even the default logic of updating Node.properties is contained
in a plugin - see SchedulerHook in ironic_discoverd/plugins/standard.py
[6]. This plugability opens wide opportunities for integrating with 3rd
party ramdisks and CMDBs (which, as we know, Ironic is not ;).
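
For illustration, a hook is roughly a class along these lines (purely a
sketch: the real base class lives in ironic_discoverd/plugins/base.py [6]
and its method names and signatures may differ from what is shown here):

  class ExampleHook(object):
      def before_processing(self, introspection_data):
          # runs before any data processing: adapt a third-party ramdisk's
          # JSON payload to the fields discoverd expects
          if 'cpu_count' in introspection_data:
              introspection_data.setdefault('cpus',
                                            introspection_data['cpu_count'])

      def before_update(self, node, ports, introspection_data):
          # runs after the node is found in Ironic but before properties
          # are written: decide which properties to update
          node.properties.setdefault('capabilities', 'example:true')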


Enrolling
~

Some people have found it limiting that the introspection requires power 
credentials (IPMI user name and password) to be already set. The recent 
set of patches [7] introduces a possibility to request manual power on 
of the machine and update IPMI credentials via the ramdisk to the 
expected values. Note that support of this feature in the reference 
ramdisk [3] is not ready yet. Also note that this scenario is only 
possible when using discoverd directly via its API, not via the Ironic API
like in [2].


Get Involved


Discoverd terribly lacks reviews. Our team is very small and
self-approving is not a rare case. I'm even not against fast-tracking
any existing Ironic core to a discoverd core after a couple of
meaningful reviews :)


And of course patches are welcome, especially plugins for integration 
with existing systems doing similar things and CMDB's. Patches are 
accepted via usual Gerrit workflow. Ideas are accepted as Launchpad 
blueprints (we do not follow the Gerrit spec process right now).


Finally, please comment on the Ironic spec [2], I'd like to know what 
you think.


References
==

[1] https://pypi.python.org/pypi/ironic-discoverd
[2] https://review.openstack.org/#/c/135605/
[3] 
https://github.com/openstack/diskimage-builder/tree/master/elements/ironic-discoverd-ramdisk

[4] https://github.com/agroup/instack-undercloud/
[5] https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
[6] 
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/standard.py
[7] 
https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-11 Thread Tihomir Trifonov

 *​​Client just needs to know which URL to hit in order to invoke a certain
 API, and does not need to know the procedure name or parameters ordering.*



​That's where the difference is. I think the client has to know the
procedure name and parameters. Otherwise we have a translation factory
pattern that converts one naming convention to another. And you won't be
able to call any service API if there is no code in the middleware to
translate it to the service API procedure name and parameters. To avoid
this we can use a transparent proxy model - a direct mapping of a client
call to the service API naming - which can be done if the client invokes the
methods with the names in the service API, so that the middleware will just
pass parameters through and will not translate. Instead of:


updating user data:

client: POST /user/  =>  middleware: convert to /keystone/update/  =>  keystone: update

we may use:

client: POST /keystone/{ver:=x.0}/{method:=update}  =>  middleware: just forward to clients[ver].getattr(method)(**kwargs)  =>  keystone: update


​The idea here is that if we have keystone 4.0 client, ​we will have to
just add it to the clients[] list and nothing more is required at the
middleware level. Just create the frontend code to use the new Keystone 4.0
methods. Otherwise we will have to add all the new/different signatures of 4.0
against 2.0/3.0 in the middleware in order to use Keystone 4.0.
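
To illustrate how thin "just forward" can be, here is a rough sketch (the
clients registry, URL routing and error handling are made up for the example
and are not existing Horizon code):

  import json
  from django.http import JsonResponse

  # hypothetical registry, e.g. {('keystone', '3.0'): keystone_v3_wrapper}
  CLIENTS = {}

  def proxy(request, service, ver, method):
      kwargs = json.loads(request.body or '{}')
      func = getattr(CLIENTS[(service, ver)], method)
      # no translation layer: parameter names are exactly the service API's
      return JsonResponse(func(**kwargs), safe=False)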

There is also a great example of using a pluggable/new feature in Horizon.
Do you remember the volume types support patch? The patch was pending in
Gerrit for a few months - first waiting for the cinder support for volume types
to go upstream, then waiting a few more weeks for review. I am not sure, but
as far as I remember, the Horizon patch even missed a release milestone and
was introduced in the next release.

If we have a transparent middleware, this will no longer be an issue. As
long as someone has written the frontend modules (which should be easy to
add and customize), and they install the required version of the service
API, they will not need an updated Horizon to start using the feature. Maybe
I am not the right person to give examples here, but how many of you have had
some kind of Horizon customization locally merged/patched in your
local distros/setups until the patch was pushed upstream?

I will say it again. Nova, Keystone, Cinder, Glance etc. already have
stable public APIs. Why do we want to add the translation middleware and to
introduce another level of REST API? This layer will often hide new
features, added to the service APIs and will delay their appearance in
Horizon. That's simply not needed. I believe it is possible to just wrap
the authentication in the middleware REST, but not to translate anything as
RPC methods/parameters.


​And one more example:

​@rest_utils.ajax()
def put(self, request, id):
    """Update a single project.

    The POST data should be an application/json object containing the
    parameters to update: name (string), description (string),
    domain_id (string) and enabled (boolean, defaults to true).
    Additional, undefined parameters may also be provided, but you'll
    have to look deep into keystone to figure out what they might be.

    This method returns HTTP 204 (no content) on success.
    """
    project = api.keystone.tenant_get(request, id)
    kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
    api.keystone.tenant_update(request, project, **kwargs)


​Do we really need the lines:​

project = api.keystone.tenant_get(request, id)
kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)

​
Since we update the project on the client, it is obvious that we already
fetched the project data, so we can simply send:


POST /keystone/3.0/tenant_update

Content-Type: application/json

{"id": cached.id, "domain_id": cached.domain_id, "name": "new name",
 "description": "new description", "enabled": cached.enabled}

Fewer requests, faster application.
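
For illustration, the slimmed-down endpoint could look roughly like this
(a sketch only - it assumes api.keystone.tenant_update() accepts the project
id directly, which is an assumption, not a statement about the current API):

@rest_utils.ajax()
def put(self, request, id):
    """Update a single project using only the data the client sent."""
    kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
    api.keystone.tenant_update(request, id, **kwargs)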




On Wed, Dec 10, 2014 at 8:39 PM, Thai Q Tran tqt...@us.ibm.com wrote:


 ​​
 I think we're arguing for the same thing, but maybe a slightly different
 approach. I think we can both agree that a middle-layer is required,
 whether we intend to use it as a proxy or REST endpoints. Regardless of the
 approach, the client needs to relay what API it wants to invoke, and you
 can do that either via RPC or REST. I personally prefer the REST approach
 because it shields the client. Client just needs to know which URL to hit
 in order to invoke a certain API, and does not need to know the procedure
 name or parameters ordering. Having said all of that, I do believe we
 should keep it as thin as possible. I do like the idea of having separate
 classes for different API versions. What we have today is a thin REST layer
 that acts like a proxy. You hit a certain URL, and the middle layer
 forwards the API invocation. The only exception to this rule is support for
 batch 

Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Mark McClain

 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 I'm generally in favor of making name attributes opaque, utf-8 strings that 
 are entirely user-defined and have no constraints on them. I consider the 
 name to be just a tag that the user places on some resource. It is the 
 resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some* resources, 
 including the security group name.
 
 End of the day, it's probably just a personal preference whether names should 
 be unique to a tenant/user or not.
 
 Maru had asked me my opinion on whether names should be unique and I answered 
 my personal opinion that no, they should not be, and if Neutron needed to 
 ensure that there was one and only one default security group for a tenant, 
 that a way to accomplish such a thing in a race-free way, without use of 
 SELECT FOR UPDATE, was to use the approach I put into the pastebin on the 
 review above.
 

I agree with Jay.  We should not care about how a user names the resource.  
There are other ways to prevent this race and Jay’s suggestion is a good one.
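
For reference, one common race-free pattern for something like the per-tenant
default group (a sketch with made-up model names, not necessarily what the
pastebin does) is to let a unique constraint arbitrate and treat the duplicate
key error as "someone else already created it":

from sqlalchemy import Column, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class DefaultSecurityGroup(Base):
    # Illustrative model: one row per tenant, enforced by the primary key.
    __tablename__ = 'default_security_groups'
    tenant_id = Column(String(255), primary_key=True)
    security_group_id = Column(String(36))


def ensure_default(session, tenant_id, security_group_id):
    try:
        session.add(DefaultSecurityGroup(
            tenant_id=tenant_id, security_group_id=security_group_id))
        session.commit()
    except IntegrityError:
        # Lost the race: another request inserted the row first, so just
        # read back the single row the constraint guarantees.
        session.rollback()
    return session.query(DefaultSecurityGroup).get(tenant_id)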

mark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-11 Thread Davanum Srinivas
Surprisingly, 'deprecator' is still available on PyPI.

On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjou jul...@danjou.info wrote:
 On Wed, Dec 10 2014, Joshua Harlow wrote:


 […]

 Or in general any other comments/ideas about providing such a deprecation
 pattern library?

 +1

 * debtcollector

 made me think of loanshark :)

 --
 Julien Danjou
 -- Free Software hacker
 -- http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] - filter out fields in keypair object.

2014-12-11 Thread Dmitry Bogun
Hi.

Why do we filter out some fields from the keypair object in the create and list
operations?


in module nova.api.openstack.compute.plugins.v3.keypairs

class KeypairController(wsgi.Controller):
# ...

def _filter_keypair(self, keypair, **attrs):
clean = {
'name': keypair.name,
'public_key': keypair.public_key,
'fingerprint': keypair.fingerprint,
}
for attr in attrs:
clean[attr] = keypair[attr]
return clean

We have the method _filter_keypair. This method is used to create the response in the create
and index methods. Why do we need it?

PS: I need the user_id field in the list/index request in Horizon. The only way to get
it now is to use the get/show method for each returned object.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Maxime Leroy
On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
 On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
  On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
  wrote:
 
 
[..]
 The question is, do we really need such flexibility for so many nova vif 
 types?

 I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
 nova shouldn't known too much details about switch backend, it should
 only care about the VIF itself, how the VIF is plugged to switch
 belongs to Neutron half.

 However I'm not saying to move existing vif driver out, those open
 backend have been used widely. But from now on the tap and vhostuser
 mode should be encouraged: one common vif driver to many long-tail
 backend.

 Yes, I really think this is a key point. When we introduced the VIF type
 mechanism we never intended for there to be so many different VIF types
 created. There is a very small, finite number of possible ways to configure
 the libvirt guest XML and it was intended that the VIF types pretty much
 mirror that. This would have given us about 8 distinct VIF type maximum.

 I think the reason for the larger than expected number of VIF types, is
 that the drivers are being written to require some arbitrary tools to
 be invoked in the plug & unplug methods. It would really be better if
 those could be accomplished in the Neutron code rather than the Nova code, via
 a host agent run & provided by the Neutron mechanism.  This would let
 us have a very small number of VIF types and so avoid the entire problem
 that this thread is bringing up.

 Failing that though, I could see a way to accomplish a similar thing
 without a Neutron launched agent. If one of the VIF type binding
 parameters were the name of a script, we could run that script on
 plug & unplug. So we'd have a finite number of VIF types, and each
 new Neutron mechanism would merely have to provide a script to invoke

 eg consider the existing midonet & iovisor VIF types as an example.
 Both of them use the libvirt ethernet config, but have different
 things running in their plug methods. If we had a mechanism for
 associating a plug script with a vif type, we could use a single
 VIF type for both.

 eg iovisor port binding info would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-iovisor-vif-plug

 while midonet would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-midonet-vif-plug


Having fewer VIF types and using scripts to plug/unplug the vif in
nova is a good idea. So, +1 for the idea.
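
As a rough illustration only (none of this is existing Nova code; the binding
key name and the use of processutils are assumptions), the generic plug path
could boil down to something like:

from oslo_concurrency import processutils


def plug_script(instance, vif):
    # The script path would come from the Neutron port binding details;
    # 'vif_plug_script' is a hypothetical key name.
    script = vif['details'].get('vif_plug_script')
    if not script:
        return  # nothing extra to do for this backend
    processutils.execute(script, 'plug', vif['id'], instance.uuid,
                         run_as_root=True)


def unplug_script(instance, vif):
    script = vif['details'].get('vif_plug_script')
    if script:
        processutils.execute(script, 'unplug', vif['id'], instance.uuid,
                             run_as_root=True)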

If you want, I can propose a new spec for this. Do you think we have
enough time to approve this new spec before the 18th December?

Anyway I think we still need to have a vif_driver plugin mechanism:
For example, if your external l2/ml2 plugin needs a specific type of
nic (i.e. a new method get_config to provide specific parameters to
libvirt for the nic) that is not supported in the nova tree.

Maybe we can find another way to support it?
Or, are we going to accept new VIF_TYPE in nova if it's only used by
an external ml2/l2 plugin?

Maxime

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Daniel P. Berrange
On Thu, Dec 11, 2014 at 04:15:00PM +0100, Maxime Leroy wrote:
 On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
  On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
   On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
   wrote:
  
  
 [..]
  The question is, do we really need such flexibility for so many nova vif 
  types?
 
  I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
  nova shouldn't known too much details about switch backend, it should
  only care about the VIF itself, how the VIF is plugged to switch
  belongs to Neutron half.
 
  However I'm not saying to move existing vif driver out, those open
  backend have been used widely. But from now on the tap and vhostuser
  mode should be encouraged: one common vif driver to many long-tail
  backend.
 
  Yes, I really think this is a key point. When we introduced the VIF type
  mechanism we never intended for there to be soo many different VIF types
  created. There is a very small, finite number of possible ways to configure
  the libvirt guest XML and it was intended that the VIF types pretty much
  mirror that. This would have given us about 8 distinct VIF type maximum.
 
  I think the reason for the larger than expected number of VIF types, is
  that the drivers are being written to require some arbitrary tools to
  be invoked in the plug  unplug methods. It would really be better if
  those could be accomplished in the Neutron code than the Nova code, via
  a host agent run  provided by the Neutron mechanism.  This would let
  us have a very small number of VIF types and so avoid the entire problem
  that this thread is bringing up.
 
  Failing that though, I could see a way to accomplish a similar thing
  without a Neutron launched agent. If one of the VIF type binding
  parameters were the name of a script, we could run that script on
  plug  unplug. So we'd have a finite number of VIF types, and each
  new Neutron mechanism would merely have to provide a script to invoke
 
  eg consider the existing midonet  iovisor VIF types as an example.
  Both of them use the libvirt ethernet config, but have different
  things running in their plug methods. If we had a mechanism for
  associating a plug script with a vif type, we could use a single
  VIF type for both.
 
  eg iovisor port binding info would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
  while midonet would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-midonet-vif-plug
 
 
 Having less VIF types, then using scripts to plug/unplug the vif in
 nova is a good idea. So, +1 for the idea.
 
 If you want, I can propose a new spec for this. Do you think we have
 enough time to approve this new spec before the 18th December?
 
 Anyway I think we still need to have a vif_driver plugin mechanism:
 For example, if your external l2/ml2 plugin needs a specific type of
 nic (i.e. a new method get_config to provide specific parameters to
 libvirt for the nic) that is not supported in the nova tree.

As I said above, there's a really small finite set of libvirt configs
we need to care about. We don't need to have a plugin system for that.
It is no real burden to support them in tree


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] PKCS#11 configuration

2014-12-11 Thread Ivan Wallis
Hi,

I am trying to configure Barbican with an external HSM that has a PKCS#11
provider and just wondering if someone can point me in the right direction
on how to configure this type of environment?

Regards,
Ivan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mid-cycle update

2014-12-11 Thread Kyle Mestery
On Wed, Dec 10, 2014 at 5:41 PM, Michael Still mi...@stillhq.com wrote:

 On Thu, Dec 11, 2014 at 10:14 AM, Kyle Mestery mest...@mestery.com
 wrote:
  The Neutron mid-cycle [1] is now complete, I wanted to let everyone know
 how
  it went. Thanks to all who attended, we got a lot done. I admit to being
  skeptical of mid-cycles, especially given the cross project meeting a
 month
  back on the topic. But this particular one was very useful. We had
 defined
  tasks to complete, and we made a lot of progress! What we accomplished
 was:
 
  1. We finished splitting out neutron advanced services and got things
  working again post-split.
  2. We had a team refactoring the L3 agent who now have a batch of
 commits to
  merge post services-split.
  3. We worked on refactoring the core API and WSGI layer, and produced
  multiple specs on this topic and some POC code.
  4. We had someone working on IPV6 tempest tests for the gate who made
 good
  progress here.
  5. We had multiple people working on plugin decomposition who are close
 to
  getting this working.

 This all sounds like good work. Did you manage to progress the
 nova-network to neutron migration tasks as well?

 I forgot to mention that, but yes, there was some work done on that as
well. I'll follow up with Oleg on this. Michael, I think it makes sense for
us to discuss this in an upcoming Neutron meeting as well. I'll figure out
a time and let you know. Having some nova folks there would be good.

Thanks,
Kyle


  Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
  beautiful state.
 
  Looking forward to the rest of Kilo!
 
  Kyle
 
  [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint

 Thanks,
 Michael

 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-11 Thread Andrew Laski


On 12/10/2014 04:41 PM, Michael Still wrote:

Hi,

at the design summit we said that we would not approve specifications
after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
had a lot of specifications proposed this cycle (166 to my count), and
haven’t kept up with the review workload.

Therefore, I propose that Friday this week be a specs review day. We
need to burn down the queue of specs needing review, as well as
abandoning those which aren’t getting regular updates based on our
review comments.

I’d appreciate nova-specs-core doing reviews on Friday, but its always
super helpful when non-cores review as well. A +1 for a developer or
operator gives nova-specs-core a good signal of what might be ready to
approve, and that helps us optimize our review time.

For reference, the specs to review may be found at:

 
https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z

Thanks heaps,
Michael



It will be nice to have a good push before we hit the deadline.

I would like to remind priority owners to update their list of any 
outstanding specs at 
https://etherpad.openstack.org/p/kilo-nova-priorities-tracking so they 
can be targeted during the review day.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mid-cycle update

2014-12-11 Thread Mark McClain

 On Dec 11, 2014, at 10:29 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Wed, Dec 10, 2014 at 5:41 PM, Michael Still mi...@stillhq.com 
 mailto:mi...@stillhq.com wrote:
 
 This all sounds like good work. Did you manage to progress the
 nova-network to neutron migration tasks as well?
 
 I forgot to mention that, but yes, there was some work done on that as well. 
 I'll follow up with Oleg on this. Michael, I think it makes sense for us to 
 discuss this in an upcoming Neutron meeting as well. I'll figure out a time 
 and let you know. Having some nova folks there would be good.
 

Oleg and I sat down at the mid-cycle and worked through the design spec more.  
He’s working through the spec draft to validate a few bits of the spec to make 
sure they’re doable.

mark___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] meeting location change

2014-12-11 Thread Sean Dague
In today's early Nova meeting (UTC 1400), we realized that there no
longer is a conflicting meeting in #openstack-meeting.

I've adjusted the location here -
https://wiki.openstack.org/wiki/Meetings#Nova_team_Meeting

Also added a formula based on date +%U to let you figure out if it's
an early or late week.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Henry Gessau
On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 I'm generally in favor of making name attributes opaque, utf-8 strings that
 are entirely user-defined and have no constraints on them. I consider the
 name to be just a tag that the user places on some resource. It is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if Neutron
 needed to ensure that there was one and only one default security group for
 a tenant, that a way to accomplish such a thing in a race-free way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on
 the review above.

 
 I agree with Jay.  We should not care about how a user names the resource.
  There are other ways to prevent this race and Jay’s suggestion is a good one.

However we should open a bug against Horizon because the user experience there
is terrible with duplicate security group names.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reason for mem/vcpu ratio in default flavors

2014-12-11 Thread David Kranz
Perhaps this is a historical question, but I was wondering how the 
default OpenStack flavor size ratio of 2/1 was determined? According to 
http://aws.amazon.com/ec2/instance-types/, ec2 defines the flavors for 
General Purpose (M3) at about 3.7/1, with Compute Intensive (C3) at 
about 1.9/1 and Memory Intensive (R3) at about 7.6/1.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-12-11 Thread Aleksandr Didenko
 Also I’m just wondering how do we keep upstream modules in our repo? They
are not submodules, so how is it organized?

Currently, we don't have any automatic tracking system for the changes we apply
to the community/upstream modules that could help us re-apply them
during a sync - only a git or diff comparison between the original module and
our copy. But that should not be a problem once we finish the current sync and
switch to the new contribution workflow described in the doc Vladimir
mentioned in the initial email [1].

Also, in the near future we're planning to add unit tests (rake spec)
and puppet noop tests to our CI. I think we should combine noop tests
with regression testing by using 'rake spec'. But this time I mean RSpec
tests for the puppet host, not for specific classes as I suggested in the
previous email. Such tests would compile a complete catalog using our
'site.pp' for specific astute.yaml settings and check that the needed
puppet resources are present in the catalog and have the needed attributes. Here's
a draft - [2]. It checks catalog compilation for a controller node and runs a
few checks for the 'keystone' class and the keystone cache driver settings.

Since all the test logic is outside of our puppet modules directory, it
won't be affected by any further upstream syncs or changes we apply in our
modules. So if some commit removes anything critical that is covered
by the regression/noop tests, it will get '-1' from CI and attract our
attention :)

[1]
http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules
[2] https://review.openstack.org/141022

Regards,
Aleksandr


On Fri, Nov 21, 2014 at 8:07 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


  On 21 Nov 2014, at 17:15, Aleksandr Didenko adide...@mirantis.com
 wrote:
 
  Hi,
 
  following our docs/workflow plus writing rspec tests for every new
 option we add/modify in our manifests could help with regressions. For
 example:
    • we add a new keystone config option in the openstack::keystone class -
 keystone_config {'cache/backend': value => 'keystone.cache.memcache_pool';}
    • we create a new test for the openstack::keystone class, something like
 this:
    • should
 contain_keystone_config("cache/backend").with_value('keystone.cache.memcache_pool')
  So with such a test, if for some reason we lose the
 keystone_config("cache/backend") option, 'rake spec' would alert us about
 it right away and we'll get -1 from CI. Of course we should also
 implement a 'rake spec' CI gate for this.
 
  But on the other hand, if someone changes an option in the manifests and
 updates the rspec tests accordingly, then such a commit will pass the 'rake spec'
 test and we can still lose some specific option.
 
   We should speed up development of some modular testing framework that
 will check that corresponding change affects only particular pieces.
 
  Such a test would not catch this particular regression with
 keystone_config {'cache/backend': value =>
 'keystone.cache.memcache_pool';}, because even with the regression (i.e.
 the dogpile backend) keystone was working OK. It passed several BVTs and
 custom system tests, because the 'dogpile' cache backend was working just fine
 while all memcached servers are up and running. So it looks like we need
 some kind of tests that will ensure that particular config options (or
 particular puppet resources) have particular values (like backend =
 keystone.cache.memcache_pool in the [cache] block of keystone.conf).
 
  So I would go with rspec testing for specific resources, but I would
 write the tests in the 'openstack' module. Those tests should check that the needed
 (nova/cinder/keystone/glance)_config resources have the needed values in the
 puppet catalog. Since we're not going to sync the 'openstack' module with
 upstream, such tests will remain intact until we change them, and they
 won't be affected by syncs/merges of other modules (keystone, cinder, nova, etc).

 I totally agree, but we need to remember to introduce tests in separate
 commits, otherwise by losing a commit ID we would also lose the tests ;)

 Also I’m just wondering how do we keep upstream modules in our repo? They
 are not submodules, so how is it organized?

 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] PKCS#11 configuration

2014-12-11 Thread Ade Lee
Which HSM do you have?

On Thu, 2014-12-11 at 07:24 -0800, Ivan Wallis wrote:
 Hi,
 
 
 I am trying to configure Barbican with an external HSM that has a
 PKCS#11 provider and just wondering if someone can point me in the
 right direction on how to configure this type of environment?
 
 
 Regards,
 Ivan
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-11 Thread Anita Kuno
On 12/11/2014 09:36 AM, Jon Bernard wrote:
 Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
 was marked as skipped, only the revert_resize test was failing.  I have
 submitted a patch to nova for this [1], and that yields an all green
 ceph ci run [2].  So at the moment, and with my revert patch, we're in
 good shape.
 
 I will fix up that patch today so that it can be properly reviewed and
 hopefully merged.  From there I'll submit a patch to infra to move the
 job to the check queue as non-voting, and we can go from there.
 
 [1] https://review.openstack.org/#/c/139693/
 [2] 
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html
 
 Cheers,
 
Please add the name of your CI account to this table:
https://wiki.openstack.org/wiki/ThirdPartySystems

As outlined in the third party CI requirements:
http://ci.openstack.org/third_party.html#requirements

Please post system status updates to your individual CI wikipage that is
linked to this table.

The mailing list is not the place to post status updates for third party
CI systems.

If you have questions about any of the above, please attend one of the
two third party meetings and ask any and all questions until you are
satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting

Thank you,
Anita.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon

2014-12-11 Thread David Lyle
Please submit the blueprint and set the target for the milestone you are
targeting. That will add it to the blueprint review process for Horizon.

Seems like a minor change, so at this time, I don't foresee any issues with
approving it.

David

On Thu, Dec 11, 2014 at 12:34 AM, Xin, Xiaohui xiaohui@intel.com
wrote:

  Hi,

 In Juno Release, the IPMI meters in Ceilometer have been implemented.

 We know that most of the meters implemented in Ceilometer can be observed
 on the Horizon side.

 User admin can use the “Admin” dashboard - “System” Panel Group -
 “Resource Usage” Panel to show the “Resources Usage Overview”.

 There are a lot of Ceilometer Metrics there now, each metric can be
 metered.

 Since the IPMI meters are already there, we’d like to add such metric
 items in Horizon to show the metered information.



 Is there anyone who opposes this proposal? If not, we’d like to add a
 blueprint in Horizon for it soon.



 Thanks

 Xiaohui

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-11 Thread David Lyle
I'm probably not understanding the nuance of the question but moving the
_scripts.html file to openstack_dashboard creates some circular
dependencies, does it not? templates/base.html in the horizon side of the
repo includes _scripts.html and ensures that the javascript needed by the
existing horizon framework is present.

_conf.html seems like a better candidate for moving as it's more closely
tied to the application code.

David


On Wed, Dec 10, 2014 at 7:20 PM, Thai Q Tran tqt...@us.ibm.com wrote:

 Sorry for duplicate mail, forgot the subject.

 -Thai Q Tran/Silicon Valley/IBM wrote: -
 To: OpenStack Development Mailing List \(not for usage questions\) 
 openstack-dev@lists.openstack.org
 From: Thai Q Tran/Silicon Valley/IBM
 Date: 12/10/2014 03:37PM
 Subject: Moving _conf and _scripts to dashboard

 The way we are structuring our javascripts today is complicated. All of
 our static javascripts reside in /horizon/static and are imported through
 _conf.html and _scripts.html. Notice that there are already some panel
 specific javascripts like: horizon.images.js, horizon.instances.js,
 horizon.users.js. They do not belong in horizon. They belong in
 openstack_dashboard because they are specific to a panel.

 Why am I raising this issue now? In Angular, we need controllers written
 in javascript for each panel. As we angularize more and more panels, we
 need to store them in a way that makes sense. To me, it makes sense for us to
 move _conf and _scripts to openstack_dashboard. Or if this is not possible,
 then provide a mechanism to override them in openstack_dashboard.

 Thoughts?
 Thai



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] We are looking for ten individuals to participate in a usability study sponsored by the Horizon team

2014-12-11 Thread Kruithof, Piet
We are looking for ten individuals to participate in a usability study 
sponsored by the Horizon team.  The purpose of the study is to evaluate the 
proposed Launch Instance workflow.  The study will be moderated remotely via 
Google Hangouts.

Participant description: individuals who use cloud services as a consumer 
(IaaS, PaaS, SaaS, etc).  In this study, we are not interested in admins or 
operators.
Time to complete study: ~1.5 hours
Requirements:  Please complete the following survey to be considered  
https://docs.google.com/spreadsheet/embeddedform?formkey=dHl2Qi1QVzdDeVNXUEIyR2h3LUttcGc6MA

**Participants will be entered into a drawing for an HP 10 Tablet. **

Feel free to forward the link to anyone else who might be interested – 
experience with Horizon is not a requirement.  College students are welcome to 
participate.

As always, the results will be shared with the community.


Thanks,


Piet Kruithof
Sr. UX Architect – HP Helion Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Jay Pipes

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for VDF use case is cross OpenStack
networking for tenants. The tenant's VM/Volume may be allocated in
different data centers geographically, but virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help authorization
automation, but the cross OpenStack network automation challenge is
still there. Using prosperity orchestration layer can solve the
automation issue, but VDF don't like prosperity API in the
north-bound, because no ecosystem is available. And other issues, for
example, how to distribute image, also cannot be solved by Keystone
federation.


What is prosperity orchestration layer and prosperity API?


[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for
NFV. In Figure 14 (virtualization of EPC) of this document, you can
see that the operator's  cloud including many data centers to provide
connection service to end user by inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about
the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW
etc) to run over cloud, eg. migrate the traditional telco. APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me there are so many telco terms here.


What is prosperity hardware?

Thanks,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][python-keystoneclient][pycadf] Abandoning of inactive reviews

2014-12-11 Thread Morgan Fainberg
This is a notification that at the start of next week, all projects under the 
Identity Program are going to see a cleanup of old/lingering open reviews. I 
will be reviewing all reviews. If there is a negative score (that is, -1
or -2 from jenkins, -1 or -2 from a code reviewer, or -1 workflow) and the
review has not seen an update in over 60 days (more than a “rebase”;
commenting/responding to comments counts as an update), I will be administratively
abandoning the change.

This will include reviews on:

Keystone
Keystone-specs
python-keystoneclient
keystonemiddleware
pycadf
python-keystoneclient-kerberos
python-keystoneclient-federation

Please take a look at your open reviews and get an update/response to negative 
scores to keep reviews active. You will always be able to un-abandon a review 
(as the author) or ask a Keystone-core member to unabandon a change. 

Cheers,
Morgan Fainberg

-- 
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] mid-cycle update

2014-12-11 Thread Carl Baldwin
We also spent a half day progressing the IPAM work and made a plan to move
forward.

Carl
On Dec 10, 2014 4:16 PM, Kyle Mestery mest...@mestery.com wrote:

 The Neutron mid-cycle [1] is now complete, I wanted to let everyone know
 how it went. Thanks to all who attended, we got a lot done. I admit to
 being skeptical of mid-cycles, especially given the cross project meeting a
 month back on the topic. But this particular one was very useful. We had
 defined tasks to complete, and we made a lot of progress! What we
 accomplished was:

 1. We finished splitting out neutron advanced services and got things
 working again post-split.
 2. We had a team refactoring the L3 agent who now have a batch of commits
 to merge post services-split.
 3. We worked on refactoring the core API and WSGI layer, and produced
 multiple specs on this topic and some POC code.
 4. We had someone working on IPV6 tempest tests for the gate who made good
 progress here.
 5. We had multiple people working on plugin decomposition who are close to
 getting this working.

 Overall, it was a great sprint! Thanks to Adobe for hosting, Utah is a
 beautiful state.

 Looking forward to the rest of Kilo!

 Kyle

 [1] https://wiki.openstack.org/wiki/Sprints/NeutronKiloSprint

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Dec 11 1800 UTC

2014-12-11 Thread Sergey Lukjanov
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-11-17.59.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-12-11-17.59.log.html

On Wed, Dec 10, 2014 at 6:21 PM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda:
 https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings


 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141211T18

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-11 Thread Joe Gordon
On Thu, Dec 11, 2014 at 7:30 AM, Andrew Laski andrew.la...@rackspace.com
wrote:


 On 12/10/2014 04:41 PM, Michael Still wrote:

 Hi,

 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.

 Therefore, I propose that Friday this week be a specs review day. We
 need to burn down the queue of specs needing review, as well as
 abandoning those which aren’t getting regular updates based on our
 review comments.

 I’d appreciate nova-specs-core doing reviews on Friday, but its always
 super helpful when non-cores review as well. A +1 for a developer or
 operator gives nova-specs-core a good signal of what might be ready to
 approve, and that helps us optimize our review time.

 For reference, the specs to review may be found at:

  https://review.openstack.org/#/q/project:openstack/nova-
 specs+status:open,n,z

 Thanks heaps,
 Michael


 It will be nice to have a good push before we hit the deadline.

 I would like to remind priority owners to update their list of any
 outstanding specs at https://etherpad.openstack.
 org/p/kilo-nova-priorities-tracking so they can be targeted during the
 review day.



In preparation, I put together a nova-specs dashboard:

https://review.openstack.org/141137

https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amastertitle=Nova+SpecsYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfNeeds+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7dSome+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100Dead+Specs=label%3ACode-Review%3C%3D-2




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-11 Thread W Chan
Renat,

Here's the blueprint.
https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context

I'm proposing to add *args and **kwargs to the __init__ methods of all
actions.  The action context can be passed as a dict in the kwargs. The
global context and the env context can be provided here as well.  Maybe put
all these different contexts under a kwarg called context?

For example,

ctx = {
    "env": {...},
    "global": {...},
    "runtime": {
        "execution_id": ...,
        "task_id": ...,
        ...
    }
}

action = SomeMistralAction(context=ctx)
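
A minimal sketch of what that could look like on the action side (the class
and attribute names here are assumptions, not an agreed interface):

class Action(object):
    # Sketch: tolerate an optional 'context' kwarg so existing actions
    # keep working unchanged when no context is passed.
    def __init__(self, *args, **kwargs):
        self.context = kwargs.pop('context', None) or {}

    @property
    def execution_id(self):
        return self.context.get('runtime', {}).get('execution_id')


class SomeMistralAction(Action):
    def run(self):
        return 'running under execution %s' % self.execution_id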

WDYT?

Winson
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Andrew Laski


On 12/11/2014 04:02 AM, joehuang wrote:

Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward


On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:

Dear all  TC  PTL,

In the 40 minutes cross-project summit session “Approaches for
scaling out”[1], almost 100 peoples attended the meeting, and the
conclusion is that cells can not cover the use cases and
requirements which the OpenStack cascading solution[2] aim to
address, the background including use cases and requirements is also
described in the mail.

I must admit that this was not the reaction I came away with the discussion 
with.
There was a lot of confusion, and as we started looking closer, many (or 
perhaps most)
people speaking up in the room did not agree that the requirements being stated 
are
things we want to try to satisfy.

[joehuang] Could you pls. confirm your opinion: 1) cells can not cover the use 
cases and requirements which the OpenStack cascading solution aim to address. 
2) Need further discussion whether to satisfy the use cases and requirements.


Correct, cells does not cover all of the use cases that cascading aims 
to address.  But it was expressed that the use cases that are not 
covered may not be cases that we want addressed.



On 12/05/2014 06:47 PM, joehuang wrote:

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the 
mail.

You're right that cells doesn't solve all of the requirements you're discussing.
Cells addresses scale in a region.  My impression from the summit session
and other discussions is that the scale issues addressed by cells are considered
a priority, while the global API bits are not.

[joehuang] Agree cells is in the first class priority.


1. Use cases
a). Vodafone use case[4](OpenStack summit speech video from 9'02
to 12'30 ), establishing globally addressable tenants which result
in efficient services deployment.

Keystone has been working on federated identity.
That part makes sense, and is already well under way.

[joehuang] The major challenge for VDF use case is cross OpenStack networking 
for tenants. The tenant's VM/Volume may be allocated in different data centers 
geographically, but virtual network (L2/L3/FW/VPN/LB) should be built for each 
tenant automatically and isolated between tenants. Keystone federation can help 
authorization automation, but the cross OpenStack network automation challenge 
is still there.
Using prosperity orchestration layer can solve the automation issue, but VDF 
don't like prosperity API in the north-bound, because no ecosystem is 
available. And other issues, for example, how to distribute image, also cannot 
be solved by Keystone federation.


b). Telefonica use case[5], create virtual DC( data center) cross
multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each other
with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on top of
globally distributed DCs.  I honestly don't see that as a problem we should
be trying to tackle.  I'd rather continue to focus on making OpenStack work
*really* well split into regions.
I think some people are trying to use cells in a geographically distributed way,
as well.  I'm not sure that's a well understood or supported thing, though.
Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) Splited region way cannot provide cross OpenStack networking automation for 
tenant. 2) exactly, the motivation for cascading is single OpenStack API on top of 
globally distributed DCs. Of course, cascading can also be used for DCs close to 
each other with high bandwidth and low latency. 3) Folks comment from cells are welcome.
.


Cells can handle a single API on top of globally distributed DCs.  I 
have spoken with a group that is doing exactly that.  But it requires 
that the API is a trusted part of the OpenStack deployments in those 
distributed DCs.





c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6,
8#. For NFV cloud, it’s in nature the cloud will be distributed but
inter-connected in many data centers.

I'm afraid I don't understand this one.  In many conversations about NFV, I 
haven't heard this before.

[joehuang] This is the ETSI requirement and use cases specification for NFV. 
ETSI is the home of the Industry Specification Group for NFV. In Figure 14 
(virtualization of EPC) of this document, you can see that the operator's  
cloud including many data centers to provide connection service to end user by 
inter-connected VNFs. The requirements listed in 

Re: [openstack-dev] [nova] Kilo specs review day

2014-12-11 Thread Michael Still
The dashboard is really cool, although I had to fix the spelling error...

Michael

On Fri, Dec 12, 2014 at 6:26 AM, Joe Gordon joe.gord...@gmail.com wrote:


 On Thu, Dec 11, 2014 at 7:30 AM, Andrew Laski andrew.la...@rackspace.com
 wrote:


 On 12/10/2014 04:41 PM, Michael Still wrote:

 Hi,

 at the design summit we said that we would not approve specifications
 after the kilo-1 deadline, which is 18 December. Unfortunately, we’ve
 had a lot of specifications proposed this cycle (166 to my count), and
 haven’t kept up with the review workload.

 Therefore, I propose that Friday this week be a specs review day. We
 need to burn down the queue of specs needing review, as well as
 abandoning those which aren’t getting regular updates based on our
 review comments.

 I’d appreciate nova-specs-core doing reviews on Friday, but its always
 super helpful when non-cores review as well. A +1 for a developer or
 operator gives nova-specs-core a good signal of what might be ready to
 approve, and that helps us optimize our review time.

 For reference, the specs to review may be found at:


 https://review.openstack.org/#/q/project:openstack/nova-specs+status:open,n,z

 Thanks heaps,
 Michael


 It will be nice to have a good push before we hit the deadline.

 I would like to remind priority owners to update their list of any
 outstanding specs at
 https://etherpad.openstack.org/p/kilo-nova-priorities-tracking so they can
 be targeted during the review day.



 In preparation, I put together a nova-specs dashboard:

 https://review.openstack.org/141137

 https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amastertitle=Nova+SpecsYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfNeeds+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7dSome+negative+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100Dead+Specs=label%3ACode-Review%3C%3D-2




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Moving _conf and _scripts to dashboard

2014-12-11 Thread Thai Q Tran
It would not create a circular dependency; dashboard would depend on horizon,
not the other way around. Scripts that are library specific will live in
horizon, while scripts that are panel specific will live in dashboard. Let me
draw a more concrete example.

In Horizon:
- We know that _script and _conf are included in base.html
- We create _script and _conf placeholder files for project overrides
  (similar to _stylesheets and _header)

In Dashboard:
- We create _script and _conf files with today's content
- They override the _script and _conf files in horizon
- Now we can include panel-specific scripts without causing a circular
  dependency

In fact, I would like to go further and suggest that _script and _conf be
combined into a single file. Not sure why we need two places to include
scripts.

-David Lyle dkly...@gmail.com wrote: -

 I'm probably not understanding the nuance of the question but moving the
 _scripts.html file to openstack_dashboard creates some circular
 dependencies, does it not? templates/base.html in the horizon side of the
 repo includes _scripts.html and ensures that the javascript needed by the
 existing horizon framework is present.

 _conf.html seems like a better candidate for moving as it's more closely
 tied to the application code.

 David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Vishvananda Ishaya

On Dec 11, 2014, at 2:41 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
 On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
 wrote:
 
 
 So the problem of Nova review bandwidth is a constant problem across all
 areas of the code. We need to solve this problem for the team as a whole
 in a much broader fashion than just for people writing VIF drivers. The
 VIF drivers are really small pieces of code that should be straightforward
 to review  get merged in any release cycle in which they are proposed.
 I think we need to make sure that we focus our energy on doing this and
 not ignoring the problem by breaking stuff off out of tree.
 
 
 The problem is that we effectively prevent running an out of tree Neutron
 driver (which *is* perfectly legitimate) if it uses a VIF plugging mechanism
 that isn't in Nova, as we can't use out of tree code and we won't accept in
 code ones for out of tree drivers.
 
 The question is, do we really need such flexibility for so many nova vif 
 types?
 
 I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
 nova shouldn't known too much details about switch backend, it should
 only care about the VIF itself, how the VIF is plugged to switch
 belongs to Neutron half.
 
 However I'm not saying to move existing vif driver out, those open
 backend have been used widely. But from now on the tap and vhostuser
 mode should be encouraged: one common vif driver to many long-tail
 backend.
 
 Yes, I really think this is a key point. When we introduced the VIF type
 mechanism we never intended for there to be soo many different VIF types
 created. There is a very small, finite number of possible ways to configure
 the libvirt guest XML and it was intended that the VIF types pretty much
 mirror that. This would have given us about 8 distinct VIF type maximum.
 
 I think the reason for the larger than expected number of VIF types, is
 that the drivers are being written to require some arbitrary tools to
 be invoked in the plug  unplug methods. It would really be better if
 those could be accomplished in the Neutron code than the Nova code, via
 a host agent run  provided by the Neutron mechanism.  This would let
 us have a very small number of VIF types and so avoid the entire problem
 that this thread is bringing up.
 
 Failing that though, I could see a way to accomplish a similar thing
 without a Neutron launched agent. If one of the VIF type binding
 parameters were the name of a script, we could run that script on
 plug  unplug. So we'd have a finite number of VIF types, and each
 new Neutron mechanism would merely have to provide a script to invoke
 
 eg consider the existing midonet  iovisor VIF types as an example.
 Both of them use the libvirt ethernet config, but have different
 things running in their plug methods. If we had a mechanism for
 associating a plug script with a vif type, we could use a single
 VIF type for both.
 
 eg iovisor port binding info would contain
 
  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
 while midonet would contain
 
  vif_type=ethernet
  vif_plug_script=/usr/bin/neutron-midonet-vif-plug

+1 This is a great suggestion.

Vish

 
 
 And so you see implementing a new Neutron mechanism in this way would
 not require *any* changes in Nova whatsoever. The work would be entirely
 self-contained within the scope of Neutron. It is simply a packaging
 task to get the vif script installed on the compute hosts, so that Nova
 can execute it.
 
 This is essentially providing a flexible VIF plugin system for Nova,
 without having to have it plug directly into the Nova codebase with
 the API  RPC stability constraints that implies.
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Vishvananda Ishaya

On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:

 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 I'm generally in favor of making name attributes opaque, utf-8 strings that
 are entirely user-defined and have no constraints on them. I consider the
 name to be just a tag that the user places on some resource. It is the
 resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.
 
 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.
 
 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if Neutron
 needed to ensure that there was one and only one default security group for
 a tenant, that a way to accomplish such a thing in a race-free way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on
 the review above.
 
 
 I agree with Jay.  We should not care about how a user names the resource.
  There are other ways to prevent this race, and Jay’s suggestion is a good one.
 
 However we should open a bug against Horizon because the user experience there
 is terrible with duplicate security group names.

The reason security group names are unique is that the ec2 api supports source
rule specifications by tenant_id (user_id in amazon) and name, so not enforcing
uniqueness means that invocation in the ec2 api will either fail or be
non-deterministic in some way.

Vish




signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reason for mem/vcpu ratio in default flavors

2014-12-11 Thread Vishvananda Ishaya
Probably just a historical artifact of values that we thought were reasonable 
for our machines at NASA.

Vish

On Dec 11, 2014, at 8:35 AM, David Kranz dkr...@redhat.com wrote:

 Perhaps this is a historical question, but I was wondering how the default 
 OpenStack flavor size ratio of 2/1 was determined? According to 
 http://aws.amazon.com/ec2/instance-types/, ec2 defines the flavors for 
 General Purpose (M3) at about 3.7/1, with Compute Intensive (C3) at about 
 1.9/1 and Memory Intensive (R3) at about 7.6/1.
 
 -David
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-11 Thread Joshua Harlow

Ya,

I too was surprised by the general lack of this kind of library on pypi.

One would think u know that people deprecate stuff, but maybe this isn't 
the norm for python... Why deprecate when u can just make v2.0 ;)


-Josh

Davanum Srinivas wrote:

Surprisingly deprecator is still available on pypi

On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjoujul...@danjou.info  wrote:

On Wed, Dec 10 2014, Joshua Harlow wrote:


[…]


Or in general any other comments/ideas about providing such a deprecation
pattern library?

+1


* debtcollector

made me think of loanshark :)

--
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Vishvananda Ishaya

On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
 On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:
 
 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 I'm generally in favor of making name attributes opaque, utf-8 strings 
 that
 are entirely user-defined and have no constraints on them. I consider the
 name to be just a tag that the user places on some resource. It is the
 resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.
 
 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.
 
 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if Neutron
 needed to ensure that there was one and only one default security group 
 for
 a tenant, that a way to accomplish such a thing in a race-free way, 
 without
 use of SELECT FOR UPDATE, was to use the approach I put into the pastebin 
 on
 the review above.
 
 
 I agree with Jay.  We should not care about how a user names the resource.
  There are other ways to prevent this race, and Jay’s suggestion is a good one.
 
 However we should open a bug against Horizon because the user experience 
 there
 is terrible with duplicate security group names.
 
 The reason security group names are unique is that the ec2 api supports 
 source
 rule specifications by tenant_id (user_id in amazon) and name, so not 
 enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.
 
 So we should couple our API evolution to EC2 API then?
 
 -jay

No I was just pointing out the historical reason for uniqueness, and hopefully
encouraging someone to find the best behavior for the ec2 api if we are going
to keep the incompatibility there. Also I personally feel the ux is better
with unique names, but it is only a slight preference.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Jay Pipes

On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:

On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:

On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:


On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:


On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:



On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:

I'm generally in favor of making name attributes opaque, utf-8 strings that
are entirely user-defined and have no constraints on them. I consider the
name to be just a tag that the user places on some resource. It is the
resource's ID that is unique.

I do realize that Nova takes a different approach to *some* resources,
including the security group name.

End of the day, it's probably just a personal preference whether names
should be unique to a tenant/user or not.

Maru had asked me my opinion on whether names should be unique and I
answered my personal opinion that no, they should not be, and if Neutron
needed to ensure that there was one and only one default security group for
a tenant, that a way to accomplish such a thing in a race-free way, without
use of SELECT FOR UPDATE, was to use the approach I put into the pastebin on
the review above.



I agree with Jay.  We should not care about how a user names the resource.
There are other ways to prevent this race, and Jay’s suggestion is a good one.


However we should open a bug against Horizon because the user experience there
is terrible with duplicate security group names.


The reason security group names are unique is that the ec2 api supports source
rule specifications by tenant_id (user_id in amazon) and name, so not enforcing
uniqueness means that invocation in the ec2 api will either fail or be
non-deterministic in some way.


So we should couple our API evolution to EC2 API then?

-jay


No I was just pointing out the historical reason for uniqueness, and hopefully
encouraging someone to find the best behavior for the ec2 api if we are going
to keep the incompatibility there. Also I personally feel the ux is better
with unique names, but it is only a slight preference.


Sorry for snapping, you made a fair point.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Sean Dague
On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:

 On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:

 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:

 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:

 I'm generally in favor of making name attributes opaque, utf-8
 strings that
 are entirely user-defined and have no constraints on them. I
 consider the
 name to be just a tag that the user places on some resource. It
 is the
 resource's ID that is unique.

 I do realize that Nova takes a different approach to *some*
 resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether
 names
 should be unique to a tenant/user or not.

 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if
 Neutron
 needed to ensure that there was one and only one default security
 group for
 a tenant, that a way to accomplish such a thing in a race-free
 way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the
 pastebin on
 the review above.


 I agree with Jay.  We should not care about how a user names the
 resource.
  There are other ways to prevent this race, and Jay’s suggestion is a
 good one.

 However we should open a bug against Horizon because the user
 experience there
 is terrible with duplicate security group names.

 The reason security group names are unique is that the ec2 api
 supports source
 rule specifications by tenant_id (user_id in amazon) and name, so
 not enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.

 So we should couple our API evolution to EC2 API then?

 -jay

 No I was just pointing out the historical reason for uniqueness, and
 hopefully
 encouraging someone to find the best behavior for the ec2 api if we
 are going
 to keep the incompatibility there. Also I personally feel the ux is
 better
 with unique names, but it is only a slight preference.
 
 Sorry for snapping, you made a fair point.

Yeh, honestly, I agree with Vish. I do feel that the UX of that
constraint is useful. Otherwise you get into having to show people UUIDs
in a lot more places. While those are good for consistency, they are
kind of terrible to show to people.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Mathieu Gagné

On 2014-12-11 8:43 AM, Jay Pipes wrote:


I'm generally in favor of making name attributes opaque, utf-8 strings
that are entirely user-defined and have no constraints on them. I
consider the name to be just a tag that the user places on some
resource. It is the resource's ID that is unique.

I do realize that Nova takes a different approach to *some* resources,
including the security group name.

End of the day, it's probably just a personal preference whether names
should be unique to a tenant/user or not.



We recently had an issue in production where a user had 2 default 
security groups (for reasons we have yet to identify). For the sake of
completeness, we are running Nova+Neutron Icehouse.


When no security group is provided, Nova will default to the default 
security group. However due to the fact 2 security groups had the same 
name, nova-compute got confused, put the instance in ERROR state and 
logged this traceback [1]:


  NoUniqueMatch: Multiple security groups found matching 'default'. Use 
an ID to be more specific.


I do understand that people might wish to create security groups with 
the same name.


However I think the following things are very wrong:

- the instance request should be blocked before it ends up on a compute 
node with nova-compute. It shouldn't be the job of nova-compute to find 
out issues about duplicated names. It should be the job of nova-api. 
Don't waste your time scheduling and spawning an instance that will 
never spawn with success.


- From an end user perspective, this means nova boot returns no error 
and it's only later that the user is informed of the confusion with 
security group names.


- Why does it have to crash with a traceback? IMO, traceback means we 
didn't think about this use case, here is more information on how to 
find the source. As an operator, I don't care about the traceback if 
it's a known limitation of Nova/Neutron. Don't pollute my logs with 
normal exceptions. (Log rationalization anyone?)
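
As a rough illustration of the first point (invented helper names, not actual
Nova code), the API layer could resolve each requested security group name to
exactly one ID and reject ambiguity before anything is scheduled:

class AmbiguousSecurityGroup(Exception):
    pass


def resolve_security_groups(requested_names, tenant_groups):
    """Map requested names to IDs, failing fast on duplicates in the tenant."""
    resolved = []
    for name in (requested_names or ['default']):
        matches = [g['id'] for g in tenant_groups if g['name'] == name]
        if len(matches) != 1:
            raise AmbiguousSecurityGroup(
                "%d security groups match '%s'; use an ID to be more specific"
                % (len(matches), name))
        resolved.append(matches[0])
    return resolved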


Whatever comes out of this discussion about security group name 
uniqueness, I would like those points to be addressed as I feel those 
aren't great users/operators experience.


[1] http://paste.openstack.org/show/149618/

--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread Joe Gordon
On Thu, Dec 11, 2014 at 1:02 AM, joehuang joehu...@huawei.com wrote:

 Hello, Russell,

 Many thanks for your reply. See inline comments.

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Thursday, December 11, 2014 5:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit
 recap and move forward

  On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:
  Dear all  TC  PTL,
 
  In the 40 minutes cross-project summit session Approaches for
  scaling out[1], almost 100 peoples attended the meeting, and the
  conclusion is that cells can not cover the use cases and
  requirements which the OpenStack cascading solution[2] aim to
  address, the background including use cases and requirements is also
  described in the mail.

 I must admit that this was not the reaction I came away with the
 discussion with.
 There was a lot of confusion, and as we started looking closer, many (or
 perhaps most)
 people speaking up in the room did not agree that the requirements being
 stated are
 things we want to try to satisfy.

 [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the
 use cases and requirements which the OpenStack cascading solution aim to
 address. 2) Need further discussion whether to satisfy the use cases and
 requirements.

 On 12/05/2014 06:47 PM, joehuang wrote:
  Hello, Davanum,
 
  Thanks for your reply.
 
  Cells can't meet the demand for the use cases and requirements
 described in the mail.

 You're right that cells doesn't solve all of the requirements you're
 discussing.
 Cells addresses scale in a region.  My impression from the summit session
  and other discussions is that the scale issues addressed by cells are
 considered
  a priority, while the global API bits are not.

 [joehuang] Agree cells is in the first class priority.

  1. Use cases
  a). Vodafone use case[4](OpenStack summit speech video from 9'02
  to 12'30 ), establishing globally addressable tenants which result
  in efficient services deployment.

  Keystone has been working on federated identity.
 That part makes sense, and is already well under way.

 [joehuang] The major challenge for VDF use case is cross OpenStack
 networking for tenants. The tenant's VM/Volume may be allocated in
 different data centers geographically, but virtual network
 (L2/L3/FW/VPN/LB) should be built for each tenant automatically and
 isolated between tenants. Keystone federation can help authorization
 automation, but the cross OpenStack network automation challenge is still
 there.
 Using prosperity orchestration layer can solve the automation issue, but
 VDF don't like prosperity API in the north-bound, because no ecosystem is
 available. And other issues, for example, how to distribute image, also
 cannot be solved by Keystone federation.

  b). Telefonica use case[5], create virtual DC( data center) cross
  multiple physical DCs with seamless experience.

 If we're talking about multiple DCs that are effectively local to each
 other
 with high bandwidth and low latency, that's one conversation.
 My impression is that you want to provide a single OpenStack API on top of
 globally distributed DCs.  I honestly don't see that as a problem we
 should
 be trying to tackle.  I'd rather continue to focus on making OpenStack
 work
 *really* well split into regions.
  I think some people are trying to use cells in a geographically
 distributed way,
  as well.  I'm not sure that's a well understood or supported thing,
 though.
  Perhaps the folks working on the new version of cells can comment
 further.

 [joehuang] 1) The split region way cannot provide cross OpenStack networking
 automation for tenants. 2) Exactly, the motivation for cascading is a single
 OpenStack API on top of globally distributed DCs. Of course, cascading can
 also be used for DCs close to each other with high bandwidth and low
 latency. 3) Comments from the cells folks are welcome.
 .

  c). ETSI NFV use cases[6], especially use case #1, #2, #3, #5, #6,
  8#. For NFV cloud, it's in nature the cloud will be distributed but
  inter-connected in many data centers.

 I'm afraid I don't understand this one.  In many conversations about NFV,
 I haven't heard this before.

 [joehuang] This is the ETSI requirement and use cases specification for
 NFV. ETSI is the home of the Industry Specification Group for NFV. In
 Figure 14 (virtualization of EPC) of this document, you can see that the
 operator's  cloud including many data centers to provide connection service
 to end user by inter-connected VNFs. The requirements listed in (
 https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about the
 requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW etc) to run
 over cloud, eg. migrate the traditional telco. APP from prosperity hardware
 to cloud. Not all NFV requirements have been covered yet. Forgive me there
 are so many telco terms here.

 
  

Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-11 Thread Joe Gordon
On Wed, Dec 10, 2014 at 7:42 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

  Well, one of the main reasons to choose an open source product is to avoid
 vendor lock-in. I think it is not
 advisable to embed in the software running in an instance a call to
 OpenStack specific services.


I'm sorry I don't follow the logic here, can you elaborate.



 On 12/10/14 00:20, Joe Gordon wrote:


 On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca 
 pasquale.porr...@dektech.com.au wrote:

 The use case we were thinking about is a Network Function (e.g. IMS
 Nodes) implementation in which the high availability is based on OpenSAF.
 In this scenario there is an Active/Standby cluster of 2 System Controllers
 (SC) plus several Payloads (PL) that boot from network, controlled by the
 SC. The logic of which service to deploy on each payload is inside the SC.

 In OpenStack both SCs and PLs will be instances running in the cloud,
 anyway the PLs should still boot from network under the control of the SC.
 In fact to use Glance to store the image for the PLs and keep the control
 of the PLs in the SC, the SC should trigger the boot of the PLs with
 requests to Nova/Glance, but an application running inside an instance
 should not directly interact with a cloud infrastructure service like
 Glance or Nova.


  Why not? This is a fairly common practice.


 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-12-11 05:43:46 -0800:
 On 12/11/2014 07:22 AM, Anna Kamyshnikova wrote:
  Hello everyone!
 
  In neutron there is a rather old bug [1] about adding uniqueness for
  security group name and tenant id. I found this idea reasonable and
  started working on fix for this bug [2]. I think it is good to add a
  uniqueconstraint because:
 
  1) In nova there is such constraint for security groups
  https://github.com/openstack/nova/blob/stable/juno/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py#L1155-L1157.
  So I think that it is rather disruptive that it is impossible to create
  security group with the same name in nova, but possible in neutron.
  2) Users get confused having security groups with the same name.
 
  In comments on the proposed change, Assaf Muller and Maru Newby objected to
  this solution and suggested another option, so I think we need more eyes
  on this change.
 
  I would like to ask you to share your thoughts on this topic.
  [1] - https://bugs.launchpad.net/neutron/+bug/1194579
  [2] - https://review.openstack.org/135006
 
 I'm generally in favor of making name attributes opaque, utf-8 strings 
 that are entirely user-defined and have no constraints on them. I 
 consider the name to be just a tag that the user places on some 
 resource. It is the resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some* resources, 
 including the security group name.
 
 End of the day, it's probably just a personal preference whether names 
 should be unique to a tenant/user or not.
 

The problem with this approach is that it requires the user to have an
external mechanism to achieve idempotency. By allowing an opaque string
that the user submits to you to be guaranteed to be unique, you allow
the user to write dumber code around creation that still behaves reliably
when calls fail and are retried. With unique names, something like this works:


while True:
    try:
        # Happy path: the item already exists.
        item = clientlib.find(name='foo')[0]
        break
    except NotFound:
        try:
            item = clientlib.create(name='foo')
            break
        except UniqueConflict:
            # Lost a race with another creator; use the one that won.
            item = clientlib.find(name='foo')[0]
            break

You can keep retrying forever because you know only one thing with that
name will ever exist.

Without unique names, you have to write weird stuff like this to do a
retry.

while len(clientlib.find(name='foo')) < 1:
    item = clientlib.create(name='foo')
    # No uniqueness guarantee, so hunt down and delete any duplicates that a
    # concurrent caller may have created with the same name.
    for found_item in clientlib.searchfor(name='foo'):
        if found_item.id != item.id:
            clientlib.delete(found_item.id)

Name can certainly remain not-unique and free-form, but don't discount
the value of a unique value that the user specifies.
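
For reference, the kind of per-tenant uniqueness constraint being debated in
this thread would look roughly like this in a SQLAlchemy model (a minimal
sketch; the table and column definitions are assumptions, not the actual
Neutron schema):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class SecurityGroup(Base):
    # Hypothetical model for illustration; not the real Neutron table.
    __tablename__ = 'securitygroups'

    id = sa.Column(sa.String(36), primary_key=True)
    tenant_id = sa.Column(sa.String(255), index=True)
    name = sa.Column(sa.String(255))

    __table_args__ = (
        # One security group name per tenant: a racing duplicate INSERT fails
        # with an integrity error instead of silently creating a second row.
        sa.UniqueConstraint('name', 'tenant_id',
                            name='uniq_securitygroups0name0tenant_id'),
    )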

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-11 Thread Rochelle Grober
First, I agree that it's much friendlier to have unique security group names
and not have to use UUIDs. When there is a need for more than a default group,
the tenant admin will want to be able to easily track info related to it. Plus,
in the GUI, if it allows a new one to be created, it should disallow the name
'default' but should allow modification of that SG.  Also, the GUI could easily suggest
adding a number or letter to the end of the new name if the one suggested by 
the user is already in use.

So, GUI, logs, and policy issues all rolled into this discussion.

Now to my cause

Log rationalization!  YES!  So, I would classify this as a bug in the logging 
component of Nova.  As Mathieu states, this is a known condition, so this 
should be an ERROR or perhaps WARN that includes which SG name is a duplicate 
(the NoUniqueMatch statement does this, identifying 'default'), and the 'Use an
ID to be more specific' text as a helpful pointer.

If a bug has not been filed yet, could you file one, with the pointer to the 
paste file and tag it log or log impact?  And I'd love it if you put me on the
list of people who should be informed (rockyg).

Thanks for considering the enduser(s) impact of non-unique names.

--Rocky


-Original Message-
From: Mathieu Gagné [mailto:mga...@iweb.com] 
Sent: Thursday, December 11, 2014 3:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id 
in security group

On 2014-12-11 8:43 AM, Jay Pipes wrote:

 I'm generally in favor of making name attributes opaque, utf-8 strings
 that are entirely user-defined and have no constraints on them. I
 consider the name to be just a tag that the user places on some
 resource. It is the resource's ID that is unique.

 I do realize that Nova takes a different approach to *some* resources,
 including the security group name.

 End of the day, it's probably just a personal preference whether names
 should be unique to a tenant/user or not.


We recently had an issue in production where a user had 2 default 
security groups (for reasons we have yet to identify). For the sake of
completeness, we are running Nova+Neutron Icehouse.

When no security group is provided, Nova will default to the default 
security group. However due to the fact 2 security groups had the same 
name, nova-compute got confused, put the instance in ERROR state and 
logged this traceback [1]:

   NoUniqueMatch: Multiple security groups found matching 'default'. Use 
an ID to be more specific.

I do understand that people might wish to create security groups with 
the same name.

However I think the following things are very wrong:

- the instance request should be blocked before it ends up on a compute 
node with nova-compute. It shouldn't be the job of nova-compute to find 
out issues about duplicated names. It should be the job of nova-api. 
Don't waste your time scheduling and spawning an instance that will 
never spawn with success.

- From an end user perspective, this means nova boot returns no error 
and it's only later that the user is informed of the confusion with 
security group names.

- Why does it have to crash with a traceback? IMO, traceback means we 
didn't think about this use case, here is more information on how to 
find the source. As an operator, I don't care about the traceback if 
it's a known limitation of Nova/Neutron. Don't pollute my logs with 
normal exceptions. (Log rationalization anyone?)

Whatever comes out of this discussion about security group name 
uniqueness, I would like those points to be addressed as I feel those 
aren't great users/operators experience.

[1] http://paste.openstack.org/show/149618/

-- 
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-11 Thread Zane Bitter

On 11/12/14 01:14, Anant Patil wrote:

On 04-Dec-14 10:49, Zane Bitter wrote:

On 01/12/14 02:02, Anant Patil wrote:

On GitHub:https://github.com/anantpatil/heat-convergence-poc


I'm trying to review this code at the moment, and finding some stuff I
don't understand:

https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916

This appears to loop through all of the resources *prior* to kicking off
any actual updates to check if the resource will change. This is
impossible to do in general, since a resource may obtain a property
value from an attribute of another resource and there is no way to know
whether an update to said other resource would cause a change in the
attribute value.

In addition, no attempt to catch UpdateReplace is made. Although that
looks like a simple fix, I'm now worried about the level to which this
code has been tested.


We were working on a new branch and, as we discussed on Skype, we have
handled all these cases. Please have a look at our current branch:
https://github.com/anantpatil/heat-convergence-poc/tree/graph-version

When a new resource is taken for convergence, its children are loaded
and the resource definition is re-parsed. The frozen resource definition
will have all the get_attr references resolved.



I'm also trying to wrap my head around how resources are cleaned up in
dependency order. If I understand correctly, you store in the
ResourceGraph table the dependencies between various resource names in
the current template (presumably there could also be some left around
from previous templates too?). For each resource name there may be a
number of rows in the Resource table, each with an incrementing version.
As far as I can tell though, there's nowhere that the dependency graph
for _previous_ templates is persisted? So if the dependency order
changes in the template we have no way of knowing the correct order to
clean up in any more? (There's not even a mechanism to associate a
resource version with a particular template, which might be one avenue
by which to recover the dependencies.)

I think this is an important case we need to be able to handle, so I
added a scenario to my test framework to exercise it and discovered that
my implementation was also buggy. Here's the fix:
https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40



Thanks for pointing this out, Zane. We too had a buggy implementation for
handling inverted dependencies. I had a hard look at our algorithm, where
we were continuously merging the edges from the new template into the edges
from previous updates. It was an optimized way of traversing the graph
in both forward and reverse order without missing any resources. But
when the dependencies are inverted, this wouldn't work.

We have changed our algorithm. The changes in edges are noted down in the
DB; only the delta of edges from the previous template is calculated and
kept. At any given point in time, the graph table has all the edges from the
current template plus the delta from previous templates. Each edge has a
template ID associated with it.


The thing is, the cleanup dependencies aren't really about the template. 
The real resources really depend on other real resources. You can't 
delete a Volume before its VolumeAttachment, not because it says so in 
the template but because it will fail if you try. The template can give 
us a rough guide in advance to what those dependencies will be, but if 
that's all we keep then we are discarding information.


There may be multiple versions of a resource corresponding to one 
template version. Even worse, the actual dependencies of a resource 
change on a smaller time scale than an entire stack update (this is the 
reason the current implementation updates the template one resource at a 
time as we go).


Given that our Resource entries in the DB are in 1:1 correspondence with 
actual resources (we create a new one whenever we need to replace the 
underlying resource), I found it makes the most conceptual and practical 
sense to store the requirements in the resource itself, and update them 
at the time they actually change in the real world (bonus: introduces no 
new locking issues and no extra DB writes). I settled on this after a 
legitimate attempt at trying other options, but they didn't work out: 
https://github.com/zaneb/heat-convergence-prototype/commit/a62958342e8583f74e2aca90f6239ad457ba984d
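
A minimal sketch of that idea (my own illustration with invented names, not
the prototype code): each resource record carries the keys it really requires,
written alongside the resource itself, and cleanup simply follows those
recorded requirements in reverse:

class ResourceRecord(object):
    def __init__(self, key, name):
        self.key = key
        self.name = name
        self.requires = set()   # keys of the resources this one depends on

    def set_requires(self, required_keys):
        # Written at the same time as the resource row itself, so recording
        # the real-world dependencies costs no extra DB round trip.
        self.requires = set(required_keys)


def cleanup_order(records):
    # Delete a resource only once nothing still present requires it.
    remaining = {r.key: r for r in records}
    while remaining:
        still_required = set()
        for r in remaining.values():
            still_required |= r.requires & set(remaining)
        ready = [k for k in remaining if k not in still_required]
        if not ready:
            raise RuntimeError('dependency cycle in recorded requirements')
        for k in ready:
            yield remaining.pop(k)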



For resource clean-up, we start from the
first template (the template which was completed and on top of which updates
were made, or an empty template otherwise), and move towards the current
template in the order in which the updates were issued; for each
template the graph (the edges found for that template) is traversed in
reverse order and resources are cleaned up.


I'm pretty sure this is backwards - you'll need to clean up newer 
resources first because they may reference resources from older 
templates. Also if you have a stubborn old resource that won't 

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-11 Thread Zane Bitter

On 11/12/14 08:26, Murugan, Visnusaran wrote:

[Murugan, Visnusaran]
In case of rollback, where we have to clean up earlier versions of resources,
we could get the order from the old template. We'd prefer not to have a graph
table.

In theory you could get it by keeping old templates around. But that means
keeping a lot of templates, and it will be hard to keep track of when you want
to delete them. It also means that when starting an update you'll need to
load every existing previous version of the template in order to calculate the
dependencies. It also leaves the dependencies in an ambiguous state when a
resource fails, and although that can be worked around it will be a giant pain
to implement.



Agree that looking at all templates for a delete is not good. But barring
complexity, we feel we could achieve it by way of having an update and a
delete stream for a stack update operation. I will elaborate in detail in the
etherpad sometime tomorrow :)


I agree that I'd prefer not to have a graph table. After trying a couple of
different things I decided to store the dependencies in the Resource table,
where we can read or write them virtually for free because it turns out that
we are always reading or updating the Resource itself at exactly the same
time anyway.



Not sure how this will work in an update scenario when a resource does not 
change
and its dependencies do.


We'll always update the requirements, even when the properties don't change.


Also taking care of deleting resources in order will
be an issue.


It works fine.


This implies that there will be different versions of a resource which
will even complicate further.


No it doesn't, other than the different versions we already have due to 
UpdateReplace.



This approach reduces DB queries by waiting for completion notifications
on a topic. The drawback I see is that the delete stack stream will be huge, as it
will have the entire graph. We can always dump such data in
ResourceLock.data JSON and pass a simple load_stream_from_db flag to the
converge RPC call as a workaround for the delete operation.


This seems to be essentially equivalent to my 'SyncPoint' proposal[1], with

the key difference that the data is stored in-memory in a Heat engine rather
than the database.


I suspect it's probably a mistake to move it in-memory for similar
reasons to the argument Clint made against synchronising the marking off

of dependencies in-memory. The database can handle that and the problem
of making the DB robust against failures of a single machine has already been
solved by someone else. If we do it in-memory we are just creating a single
point of failure for not much gain. (I guess you could argue it doesn't matter,
since if any Heat engine dies during the traversal then we'll have to kick off
another one anyway, but it does limit our options if that changes in the
future.) [Murugan, Visnusaran] Resource completes, removes itself from
resource_lock and notifies engine. Engine will acquire parent lock and initiate
parent only if all its children are satisfied (no child entry in resource_lock).
This will take the place of the Aggregator.
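
A rough sketch of that bookkeeping (the table layout and names are
illustrative only, not the PoC code): a resource marks itself done by deleting
its own entry, and the parent is triggered only once no child entries remain:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE resource_sync '
             '(resource TEXT PRIMARY KEY, parent TEXT, engine_id TEXT)')


def resource_done(resource, parent):
    # One transaction: remove our own entry, then see whether any sibling
    # entries for the same parent are still outstanding.
    with conn:
        conn.execute('DELETE FROM resource_sync WHERE resource = ?',
                     (resource,))
        remaining = conn.execute(
            'SELECT COUNT(*) FROM resource_sync WHERE parent = ?',
            (parent,)).fetchone()[0]
    if remaining == 0:
        converge(parent)   # hypothetical hand-off to the parent's convergence


def converge(resource):
    print('converging %s' % resource)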

Yep, if you s/resource_lock/SyncPoint/ that's more or less exactly what I did.
The three differences I can see are:

1) I think you are proposing to create all of the sync points at the start of 
the
traversal, rather than on an as-needed basis. This is probably a good idea. I
didn't consider it because of the way my prototype evolved, but there's now
no reason I can see not to do this.
If we could move the data to the Resource table itself then we could even
get it for free from an efficiency point of view.


+1. But we will need engine_id to be stored somewhere for recovery purposes
(in an easily queried format).


Yeah, so I'm starting to think you're right, maybe the/a Lock table is 
the right thing to use there. We could probably do it within the 
resource table using the same select-for-update to set the engine_id, 
but I agree that we might be starting to jam too much into that one table.



Sync points are created as needed. A single resource is enough to restart that
entire stream.
I think there is a disconnect in our understanding. I will detail it as well in 
the etherpad.


OK, that would be good.


2) You're using a single list from which items are removed, rather than two
lists (one static, and one to which items are added) that get compared.
Assuming (1) then this is probably a good idea too.


Yeah. We have a single list per active stream, which works by removing
completed/satisfied resources from it.


I went to change this and then remembered why I did it this way: the 
sync point is also storing data about the resources that are triggering 
it. Part of this is the RefID and attributes, and we could replace that 
by storing that data in the Resource itself and querying it rather than 
having it passed in via the notification. But the other part is the 
ID/key of those resources, which we _need_ to know in order to update 

[openstack-dev] [Cross Project][Ops][Log] Logging Working Group: need to establish our regular meeting(s)

2014-12-11 Thread Rochelle Grober
Hi guys,

I apologize for taking so long to get to this, but I think once we start 
meeting, our momentum will build.

I'm cross posting this to dev and operators so anyone who is interested can 
participate.  I've set up a Doodle to vote on the first set of times (these 
happen to work for me and for a dev in Europe, but if we don't get enough 
positive votes, we'll try again).  I will also cross post the meeting schedule 
and add it to the  meetings wiki page when we have decided on the meeting.  
I've stuck with Monday, Wednesday and Thursday to keep the meetings during the 
work week.  I know we are close to the holidays, so I'll give this a week to 
get votes unless I get heavy turnout early.

The doodle poll: https://doodle.com/7tkwu65s8b7vt5ex

I'm working on summarizing the etherpads from the summit and will be posting 
that as a document either on a wiki page or a google doc.

The first session came out with some possible Kilo actions that would help 
logging.  I think our first meeting should focus on:

Agenda:

* Logging bugs against logging - how to and how to advertise to the 
rest of Operators group

* Working with devs:  Kilo possible dev help, getting info from devs on 
what/when to review specs/code

* Where and what form we document our progress, information, etc

* Where to focus efforts on Standards (docs, specs, review lists, 
project liaisons, etc)

* Review progress (bugs, specs, docs, whatever)

So, please vote on the doodle and please let's start the discussion.  I will 
post this separately to dev and operators so that the operators can discuss 
this without spamming the developers until we have something they would want to 
comment on.

--Rocky Grober




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Proposal to add IPMI meters from Ceilometer in Horizon

2014-12-11 Thread Xin, Xiaohui
Got it. Thanks! We will soon add the blueprint for IPMI meters in Horizon.

Thanks
Xiaohui

From: David Lyle [mailto:dkly...@gmail.com]
Sent: Friday, December 12, 2014 1:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] Proposal to add IPMI meters from 
Ceilometer in Horizon

Please submit the blueprint and set the target for the milestone you are 
targeting. That will add it the the blueprint review process for Horizon.

Seems like a minor change, so at this time, I don't foresee any issues with 
approving it.

David

On Thu, Dec 11, 2014 at 12:34 AM, Xin, Xiaohui 
xiaohui@intel.com wrote:
Hi,
In Juno Release, the IPMI meters in Ceilometer have been implemented.
We know that most of the meters implemented in Ceilometer can be observed on
the Horizon side.
User admin can use the “Admin” dashboard -> “System” Panel Group -> “Resource
Usage” Panel to show the “Resources Usage Overview”.
There are a lot of Ceilometer Metrics there now, each metric can be metered.
Since the IPMI meters are already there, we’d like to add the corresponding metric items
in Horizon to show their metered information.

Is there anyone who opposes this proposal? If not, we’d like to add a blueprint
in Horizon for it soon.

Thanks
Xiaohui

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:
 [joehuang] The major challenge for VDF use case is cross OpenStack 
 networking for tenants. The tenant's VM/Volume may be allocated in 
 different data centers geographically, but virtual network
 (L2/L3/FW/VPN/LB) should be built for each tenant automatically and 
 isolated between tenants. Keystone federation can help authorization 
 automation, but the cross OpenStack network automation challenge is 
 still there. Using prosperity orchestration layer can solve the 
 automation issue, but VDF don't like prosperity API in the 
 north-bound, because no ecosystem is available. And other issues, for 
 example, how to distribute image, also cannot be solved by Keystone 
 federation.

What is prosperity orchestration layer and prosperity API?

[joehuang] suppose that there are two OpenStack instances in the cloud, and 
vendor A developed an orchestration layer called CMPa (cloud management 
platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM 
interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define boot 
VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, 
networkID). After the customer asked more and more function to the cloud, the 
API set of CMPa will be quite different from that of CMPb, and different from 
OpenStack API. Now, all apps which consume OpenStack API like Heat, will not be 
able to run above the prosperity software CMPa/CMPb. All OpenStack API APPs 
ecosystem will be lost in the customer's cloud.  

 [joehuang] This is the ETSI requirement and use cases specification 
 for NFV. ETSI is the home of the Industry Specification Group for NFV. 
 In Figure 14 (virtualization of EPC) of this document, you can see 
 that the operator's  cloud including many data centers to provide 
 connection service to end user by inter-connected VNFs. The 
 requirements listed in
 (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about 
 the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW
 etc) to run over cloud, eg. migrate the traditional telco. APP from 
 prosperity hardware to cloud. Not all NFV requirements have been 
 covered yet. Forgive me there are so many telco terms here.

What is prosperity hardware?

[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware, 
even you bought Nokia ATCA, the IMS from Huawei will not be able to work over 
Nokia ATCA. The telco APP is sold with hardware together. (More comments on 
ETSI: ETSI is also the standard organization for GSM, 3G, 4G.)
 
Thanks,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Joshua Harlow

So I think u mean 'proprietary'?

http://www.merriam-webster.com/dictionary/proprietary

-Josh

joehuang wrote:

Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for VDF use case is cross OpenStack
networking for tenants. The tenant's VM/Volume may be allocated in
different data centers geographically, but virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help authorization
automation, but the cross OpenStack network automation challenge is
still there. Using prosperity orchestration layer can solve the
automation issue, but VDF don't like prosperity API in the
north-bound, because no ecosystem is available. And other issues, for
example, how to distribute image, also cannot be solved by Keystone
federation.



What is prosperity orchestration layer and prosperity API?


[joehuang] suppose that there are two OpenStack instances in the cloud, and 
vendor A developed an orchestration layer called CMPa (cloud management 
platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM 
interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define boot 
VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, 
networkID). After the customer asked more and more function to the cloud, the 
API set of CMPa will be quite different from that of CMPb, and different from 
OpenStack API. Now, all apps which consume OpenStack API like Heat, will not be 
able to run above the prosperity software CMPa/CMPb. All OpenStack API APPs 
ecosystem will be lost in the customer's cloud.


[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's  cloud including many data centers to provide
connection service to end user by inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about
the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW
etc) to run over cloud, eg. migrate the traditional telco. APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me there are so many telco terms here.



What is prosperity hardware?


[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware, 
even you bought Nokia ATCA, the IMS from Huawei will not be able to work over 
Nokia ATCA. The telco APP is sold with hardware together. (More comments on 
ETSI: ETSI is also the standard organization for GSM, 3G, 4G.)

Thanks,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-11 Thread Tripp, Travis S
Tihomir,

Your comments in the patch were actually the clearest to me about the ease of
customizing without requiring upstream changes, and they really made me think more
about your points.

Here are a couple of bullet points for consideration.


  *   Will we take on auto-discovery of API extensions in two spots (python for 
legacy and JS for new)?
  *   As teams move towards deprecating / modifying APIs who will be 
responsible for ensuring the JS libraries stay up to date and keep tabs on 
every single project?  Right now in Glance, for example, they are working on 
some fixes to v2 API (soon to become v2.3) that will allow them to deprecate v1 
so that Nova can migrate from v1.  Part of this includes making simultaneous 
improvements to the client library so that the switch can happen more 
transparently to client users. This is testing and maintenance the service team
already takes on.
  *   The service API documentation almost always lags (helped by specs now) 
and the service team takes on the burden of exposing a programmatic way to 
access the API, which is tested and easily consumable via the python clients and
removes some guesswork from using the service.
  *   This is going to be an incremental approach with legacy support 
requirements anyway, I think.  So, incorporating python side changes won’t just 
go away.
  *   A tangent that needs to be considered IMO since I’m working on some 
elastic search things right now. Which of these would be better if we introduce 
a server side caching mechanism or a new source of data such as elastic search 
to improve performance?
 *   Would the client just be able to handle changing whether or not it 
used cache with a header and in either case the server side appropriately uses 
the cache? (e.g. Cache-Control: no-cache)

I’m not sure I fully understood your example about Cinder.  Was it the 
cinderclient that held up delivery of that horizon support or the cinder API
or both?  If the API isn’t in, then it would hold up delivery of the feature in 
any case. If it is just about delivering new functionality, all that would be 
required in Richard’s approach is to drop in a new file of decorated classes / 
functions from his utility with the APIs you want? None of the API calls have
anything to do with how your view actually replaces the upstream view.  These 
are all just about accessing the data.

Finally, I mentioned the following in the patch related to your example below 
about the client making two calls to do an update, but wanted to mention here 
to see if it is an approach that was purposeful (I don’t know the history):

Do we really need the lines:

 project = api.keystone.tenant_get(request, id)
 kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
I agree that if you already have all the data it is really bad to have to do 
another call. I do think there is room for discussing the reasoning, though.
As far as I can tell, they do this so that if you are updating an entity, you 
have to be very specific about the fields you are changing. I actually see this 
as potentially a protectionary measure against data loss and a sometimes very 
nice to have feature. It perhaps was intended to *help* guard against race 
conditions *sometimes*.

Here's an example: Admin user Joe has a Domain open and stares at it for 15
minutes while he updates the description. Admin user Bob is asked to go ahead 
and enable it. He opens the record, edits it, and then saves it. Joe finished 
perfecting the description and saves it. Doing this action would mean that the 
Domain is enabled and the description gets updated. Last man in still wins if 
he updates the same fields, but if they update different fields then both of 
their changes will take affect without them stomping on each other. Whether 
that is good or bad may depend on the situation…
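
A small sketch of the "send only the fields you intend to change" behaviour
described above (invented helper name, not the actual Horizon code):

def build_update_kwargs(request_data, allowed_fields):
    """Return only the fields the client explicitly supplied."""
    return {field: request_data[field]
            for field in allowed_fields
            if field in request_data}


# Bob's request only toggles 'enabled', so Joe's in-flight 'description'
# edit is not overwritten by Bob's update.
bob_patch = build_update_kwargs({'enabled': True},
                                ['name', 'description', 'enabled'])
assert bob_patch == {'enabled': True}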

From: Tihomir Trifonov t.trifo...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, December 11, 2014 at 7:53 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [horizon] REST and Django

Client just needs to know which URL to hit in order to invoke a certain API, 
and does not need to know the procedure name or parameters ordering.


That's where the difference is. I think the client has to know the procedure
name and parameters. Otherwise we have a translation factory pattern, that
converts one naming convention to another. And you won't be able to call any 
service API if there is no code in the middleware to translate it to the 
service API procedure name and parameters. To avoid this - we can use a 
transparent proxy model - direct mapping of a client call to service API 
naming, which can be done if the client invokes the methods with the names in 
the service API, so that the middleware will just pass parameters, and will not 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Andrew,

Thanks for your confirmation. See inline comments, pls.

-Original Message-
From: Andrew Laski [mailto:andrew.la...@rackspace.com] 
Sent: Friday, December 12, 2014 3:56 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward


On 12/11/2014 04:02 AM, joehuang wrote:
 Hello, Russell,

 Many thanks for your reply. See inline comments.

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Thursday, December 11, 2014 5:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – 
 summit recap and move forward

 On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:
 Dear all  TC  PTL,

 In the 40 minutes cross-project summit session “Approaches for 
 scaling out”[1], almost 100 peoples attended the meeting, and the 
 conclusion is that cells can not cover the use cases and 
 requirements which the OpenStack cascading solution[2] aim to 
 address, the background including use cases and requirements is 
 also described in the mail.
 I must admit that this was not the reaction I came away with the discussion 
 with.
 There was a lot of confusion, and as we started looking closer, many 
 (or perhaps most) people speaking up in the room did not agree that 
 the requirements being stated are things we want to try to satisfy.
 [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the 
 use cases and requirements which the OpenStack cascading solution aim to 
 address. 2) Need further discussion whether to satisfy the use cases and 
 requirements.

Correct, cells does not cover all of the use cases that cascading aims to 
address.  But it was expressed that the use cases that are not covered may not 
be cases that we want addressed.

[joehuang] Ok, Need further discussion to address the cases or not.

 On 12/05/2014 06:47 PM, joehuang wrote:
 Hello, Davanum,

 Thanks for your reply.

 Cells can't meet the demand for the use cases and requirements described 
 in the mail.
 You're right that cells doesn't solve all of the requirements you're 
 discussing.
 Cells addresses scale in a region.  My impression from the summit 
 session and other discussions is that the scale issues addressed by 
 cells are considered a priority, while the global API bits are not.
 [joehuang] Agree cells is in the first class priority.

 1. Use cases
 a). Vodafone use case[4](OpenStack summit speech video from 9'02
 to 12'30 ), establishing globally addressable tenants which result 
 in efficient services deployment.
 Keystone has been working on federated identity.
 That part makes sense, and is already well under way.
 [joehuang] The major challenge for VDF use case is cross OpenStack networking 
 for tenants. The tenant's VM/Volume may be allocated in different data 
 centers geographically, but virtual network (L2/L3/FW/VPN/LB) should be built 
 for each tenant automatically and isolated between tenants. Keystone 
 federation can help authorization automation, but the cross OpenStack network 
 automation challenge is still there.
 Using prosperity orchestration layer can solve the automation issue, but VDF 
 don't like prosperity API in the north-bound, because no ecosystem is 
 available. And other issues, for example, how to distribute image, also 
 cannot be solved by Keystone federation.

 b). Telefonica use case[5], create virtual DC( data center) cross 
 multiple physical DCs with seamless experience.
 If we're talking about multiple DCs that are effectively local to 
 each other with high bandwidth and low latency, that's one conversation.
 My impression is that you want to provide a single OpenStack API on 
 top of globally distributed DCs.  I honestly don't see that as a 
 problem we should be trying to tackle.  I'd rather continue to focus 
 on making OpenStack work
 *really* well split into regions.
 I think some people are trying to use cells in a geographically 
 distributed way, as well.  I'm not sure that's a well understood or 
 supported thing, though.
 Perhaps the folks working on the new version of cells can comment further.
 [joehuang] 1) Splited region way cannot provide cross OpenStack networking 
 automation for tenant. 2) exactly, the motivation for cascading is single 
 OpenStack API on top of globally distributed DCs. Of course, cascading can 
 also be used for DCs close to each other with high bandwidth and low 
 latency. 3) Folks comment from cells are welcome.
 .

Cells can handle a single API on top of globally distributed DCs.  I have 
spoken with a group that is doing exactly that.  But it requires that the API 
is a trusted part of the OpenStack deployments in those distributed DCs.

[joehuang] Could you pls. make it more clear for the deployment mode of cells 
when used for globally distributed DCs with single API. Do you mean 
cinder/neutron/glance/ceilometer will be shared by all 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread joehuang
Hello, Joe

Thank you for your good question.

Question:
How would something like flavors work across multiple vendors. The OpenStack 
API doesn't have any hard coded names and sizes for flavors. So a flavor such 
as m1.tiny may actually be very different vendor to vendor.

Answer:
The flavor is defined by the cloud operator in the cascading OpenStack. 
Nova-proxy (which is the driver for Nova as hypervisor) will sync the 
flavor to the cascaded OpenStack when it is first used there. If the flavor 
is changed before a new VM is booted, the changed flavor will also be pushed 
to the cascaded OpenStack just before the new VM boot request. Through this 
synchronization mechanism, all flavors used in the multi-vendor cascaded 
OpenStacks are kept identical to those defined at the cascading level, 
providing a consistent view of flavors.
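
A minimal sketch of that synchronization logic, assuming python-novaclient-style 
calls and hypothetical helper names (illustrative only, not the actual cascading code):

    # Illustrative only: ensure the cascaded OpenStack has the same flavor
    # definition as the cascading layer before a VM is booted with it.
    from novaclient import exceptions as nova_exc

    def ensure_flavor_synced(cascading_nova, cascaded_nova, flavor_id):
        src = cascading_nova.flavors.get(flavor_id)
        try:
            dst = cascaded_nova.flavors.get(flavor_id)
        except nova_exc.NotFound:
            dst = None

        if dst is not None and _differs(src, dst):
            # flavors cannot be updated in place, so delete and re-create
            cascaded_nova.flavors.delete(dst)
            dst = None

        if dst is None:
            cascaded_nova.flavors.create(src.name, src.ram, src.vcpus,
                                         src.disk, flavorid=src.id)

    def _differs(a, b):
        return (a.name, a.ram, a.vcpus, a.disk) != (b.name, b.ram, b.vcpus, b.disk)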

Best Regards

Chaoyi Huang ( joehuang )

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Friday, December 12, 2014 8:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward



On Thu, Dec 11, 2014 at 1:02 AM, joehuang 
joehu...@huawei.com wrote:
Hello, Russell,

Many thanks for your reply. See inline comments.

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward

 On Fri, Dec 5, 2014 at 8:23 AM, joehuang 
 joehu...@huawei.com wrote:
 Dear all  TC  PTL,

 In the 40 minutes cross-project summit session Approaches for
 scaling out[1], almost 100 peoples attended the meeting, and the
 conclusion is that cells can not cover the use cases and
 requirements which the OpenStack cascading solution[2] aim to
 address, the background including use cases and requirements is also
 described in the mail.

I must admit that this was not the reaction I came away with the discussion 
with.
There was a lot of confusion, and as we started looking closer, many (or 
perhaps most)
people speaking up in the room did not agree that the requirements being 
stated are
things we want to try to satisfy.

[joehuang] Could you pls. confirm your opinion: 1) cells can not cover the use 
cases and requirements which the OpenStack cascading solution aim to address. 
2) Need further discussion whether to satisfy the use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:
 Hello, Davanum,

 Thanks for your reply.

 Cells can't meet the demand for the use cases and requirements described in 
 the mail.

You're right that cells doesn't solve all of the requirements you're 
discussing.
Cells addresses scale in a region.  My impression from the summit session
 and other discussions is that the scale issues addressed by cells are 
 considered
 a priority, while the global API bits are not.

[joehuang] Agree cells is in the first class priority.

 1. Use cases
 a). Vodafone use case[4](OpenStack summit speech video from 9'02
 to 12'30 ), establishing globally addressable tenants which result
 in efficient services deployment.

 Keystone has been working on federated identity.
That part makes sense, and is already well under way.

[joehuang] The major challenge for VDF use case is cross OpenStack networking 
for tenants. The tenant's VM/Volume may be allocated in different data centers 
geographically, but virtual network (L2/L3/FW/VPN/LB) should be built for each 
tenant automatically and isolated between tenants. Keystone federation can help 
authorization automation, but the cross OpenStack network automation challenge 
is still there.
Using prosperity orchestration layer can solve the automation issue, but VDF 
don't like prosperity API in the north-bound, because no ecosystem is 
available. And other issues, for example, how to distribute image, also cannot 
be solved by Keystone federation.

 b). Telefonica use case[5], create virtual DC( data center) cross
 multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each other
with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on top of
globally distributed DCs.  I honestly don't see that as a problem we should
be trying to tackle.  I'd rather continue to focus on making OpenStack work
*really* well split into regions.
 I think some people are trying to use cells in a geographically distributed 
 way,
 as well.  I'm not sure that's a well understood or supported thing, though.
 Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) Splited region way cannot provide cross OpenStack networking 
automation for tenant. 2) exactly, the 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Joshua,

Sorry, my fault. You are right. I owe you two dollars.

Best regards

Chaoyi Huang ( joehuang )

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com] 
Sent: Friday, December 12, 2014 9:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

So I think u mean 'proprietary'?

http://www.merriam-webster.com/dictionary/proprietary

-Josh

joehuang wrote:
 Hi, Jay,

 Good question, see inline comments, pls.

 Best Regards
 Chaoyi Huang ( Joe Huang )

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Friday, December 12, 2014 1:58 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – 
 summit recap and move forward

 On 12/11/2014 04:02 AM, joehuang wrote:
 [joehuang] The major challenge for VDF use case is cross OpenStack 
 networking for tenants. The tenant's VM/Volume may be allocated in 
 different data centers geographically, but virtual network
 (L2/L3/FW/VPN/LB) should be built for each tenant automatically and 
 isolated between tenants. Keystone federation can help authorization 
 automation, but the cross OpenStack network automation challenge is 
 still there. Using prosperity orchestration layer can solve the 
 automation issue, but VDF don't like prosperity API in the 
 north-bound, because no ecosystem is available. And other issues, 
 for example, how to distribute image, also cannot be solved by 
 Keystone federation.

 What is prosperity orchestration layer and prosperity API?

 [joehuang] suppose that there are two OpenStack instances in the cloud, and 
 vendor A developed an orchestration layer called CMPa (cloud management 
 platform a), vendor B's orchestration layer CMPb. CMPa will define boot VM 
 interface as CreateVM( Num, NameList, VMTemplate), CMPb may like to define 
 boot VM interface as bootVM( Name, projectID, flavorID, volumeSize, location, 
 networkID). After the customer asked more and more function to the cloud, the 
 API set of CMPa will be quite different from that of CMPb, and different from 
 OpenStack API. Now, all apps which consume OpenStack API like Heat, will not 
 be able to run above the prosperity software CMPa/CMPb. All OpenStack API 
 APPs ecosystem will be lost in the customer's cloud.

 [joehuang] This is the ETSI requirement and use cases specification 
 for NFV. ETSI is the home of the Industry Specification Group for NFV.
 In Figure 14 (virtualization of EPC) of this document, you can see 
 that the operator's  cloud including many data centers to provide 
 connection service to end user by inter-connected VNFs. The 
 requirements listed in
 (https://wiki.openstack.org/wiki/TelcoWorkingGroup) is mainly about 
 the requirements from specific VNF(like IMS, SBC, MME, HSS, S/P GW
 etc) to run over cloud, eg. migrate the traditional telco. APP from 
 prosperity hardware to cloud. Not all NFV requirements have been 
 covered yet. Forgive me there are so many telco terms here.

 What is prosperity hardware?

 [joehuang] For example, Huawei's IMS can only run over Huawei's ATCA 
 hardware, even you bought Nokia ATCA, the IMS from Huawei will not be 
 able to work over Nokia ATCA. The telco APP is sold with hardware 
 together. (More comments on ETSI: ETSI is also the standard 
 organization for GSM, 3G, 4G.)

 Thanks,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread henry hly
+100!

So, for the vif-type-vhostuser, a generic script path could replace the
vif-detail vhost_user_ovs_plug, because it's not the responsibility
of nova to understand it.

On Thu, Dec 11, 2014 at 11:24 PM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Thu, Dec 11, 2014 at 04:15:00PM +0100, Maxime Leroy wrote:
 On Thu, Dec 11, 2014 at 11:41 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Thu, Dec 11, 2014 at 09:37:31AM +0800, henry hly wrote:
  On Thu, Dec 11, 2014 at 3:48 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
   On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
   wrote:
  
  
 [..]
  The question is, do we really need such flexibility for so many nova vif 
  types?
 
  I also think that VIF_TYPE_TAP and VIF_TYPE_VHOSTUSER is good example,
  nova shouldn't known too much details about switch backend, it should
  only care about the VIF itself, how the VIF is plugged to switch
  belongs to Neutron half.
 
  However I'm not saying to move existing vif driver out, those open
  backend have been used widely. But from now on the tap and vhostuser
  mode should be encouraged: one common vif driver to many long-tail
  backend.
 
  Yes, I really think this is a key point. When we introduced the VIF type
  mechanism we never intended for there to be soo many different VIF types
  created. There is a very small, finite number of possible ways to configure
  the libvirt guest XML and it was intended that the VIF types pretty much
  mirror that. This would have given us about 8 distinct VIF type maximum.
 
  I think the reason for the larger than expected number of VIF types, is
  that the drivers are being written to require some arbitrary tools to
  be invoked in the plug  unplug methods. It would really be better if
  those could be accomplished in the Neutron code than the Nova code, via
  a host agent run  provided by the Neutron mechanism.  This would let
  us have a very small number of VIF types and so avoid the entire problem
  that this thread is bringing up.
 
  Failing that though, I could see a way to accomplish a similar thing
  without a Neutron launched agent. If one of the VIF type binding
  parameters were the name of a script, we could run that script on
  plug  unplug. So we'd have a finite number of VIF types, and each
  new Neutron mechanism would merely have to provide a script to invoke
 
  eg consider the existing midonet  iovisor VIF types as an example.
  Both of them use the libvirt ethernet config, but have different
  things running in their plug methods. If we had a mechanism for
  associating a plug script with a vif type, we could use a single
  VIF type for both.
 
  eg iovisor port binding info would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
  while midonet would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-midonet-vif-plug
 

 Having less VIF types, then using scripts to plug/unplug the vif in
 nova is a good idea. So, +1 for the idea.

 If you want, I can propose a new spec for this. Do you think we have
 enough time to approve this new spec before the 18th December?

 Anyway I think we still need to have a vif_driver plugin mechanism:
 For example, if your external l2/ml2 plugin needs a specific type of
 nic (i.e. a new method get_config to provide specific parameters to
 libvirt for the nic) that is not supported in the nova tree.

 As I said above, there's a really small finite set of libvirt configs
 we need to care about. We don't need to have a plugin system for that.
 It is no real burden to support them in tree


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-12-11 Thread Richard Jones
On Fri Dec 12 2014 at 1:06:08 PM Tripp, Travis S travis.tr...@hp.com
wrote:

 Do we really need the lines:

  project = api.keystone.tenant_get(request, id)
  kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
 ​
 I agree that if you already have all the data it is really bad to have to
 do another call. I do think there is room for discussing the reasoning,
 though.
 As far as I can tell, they do this so that if you are updating an entity,
 you have to be very specific about the fields you are changing. I actually
 see this as potentially a protective measure against data loss and
 sometimes a very nice feature to have. It perhaps was intended to *help*
 guard against race conditions *sometimes*.


Yep, it looks like I broke this API by implementing it the way I did, and
I'll alter the API so that you pass both the current object (according to
the client) and the parameters to alter.
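
A rough sketch of what that could look like in the Django REST layer (the payload
keys and handler shape are assumptions for illustration, not the final API):

    # Hypothetical PATCH handler: the client sends both the object as it last
    # saw it and only the fields it wants to change, e.g.
    # request.DATA == {'project': {...current...}, 'updated': {'enabled': False}}
    def patch(self, request, id):
        current = request.DATA['project']   # object as the client last saw it
        changes = request.DATA['updated']   # only the fields being altered
        kwargs = dict(current, **changes)
        # api.keystone as used elsewhere in the Horizon REST views
        api.keystone.tenant_update(request, id, **kwargs)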

Thanks everyone for the great reviewing!


 Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat-docker]Does the heat-docker supports auto-scaling and monitoring the Docker container?

2014-12-11 Thread Chenliang (L)
Hi,
We can now deploy Docker containers in an OpenStack environment using Heat, 
but I feel confused. 
Could someone tell me whether heat-docker supports monitoring the Docker 
containers in a stack, and if so, how to monitor them?
Does it support auto-scaling the Docker containers?


Best Regards,
-- Liang Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread Joe Gordon
On Thu, Dec 11, 2014 at 6:25 PM, joehuang joehu...@huawei.com wrote:

  Hello, Joe



 Thank you for your good question.



 Question:

 How would something like flavors work across multiple vendors. The
 OpenStack API doesn't have any hard coded names and sizes for flavors. So a
 flavor such as m1.tiny may actually be very different vendor to vendor.



 Answer:

 The flavor is defined by Cloud Operator from the cascading OpenStack. And
 Nova-proxy ( which is the driver for “Nova as hypervisor” ) will sync the
 flavor to the cascaded OpenStack when it was first used in the cascaded
 OpenStack. If flavor was changed before a new VM is booted, the changed
 flavor will also be updated to the cascaded OpenStack just before the new
 VM booted request. Through this synchronization mechanism, all flavor used
 in multi-vendor’s cascaded OpenStack will be kept the same as what used in
 the cascading level, provide a consistent view for flavor.


I don't think this is sufficient. If the underlying hardware between
multiple vendors is different, setting the same values for a flavor will
result in different performance characteristics.  For example, nova allows
for setting VCPUs, but nova doesn't provide an easy way to define how
powerful a VCPU is.   Also, flavors are commonly hardware dependent; take
what rackspace offers:

http://www.rackspace.com/cloud/public-pricing#cloud-servers

Rackspace has I/O Optimized flavors

* High-performance, RAID 10-protected SSD storage
* Option of booting from Cloud Block Storage (additional charges apply for
Cloud Block Storage)
* Redundant 10-Gigabit networking
* Disk I/O scales with the number of data disks up to ~80,000 4K random
read IOPS and ~70,000 4K random write IOPS.*

How would cascading support something like this?




 Best Regards



 Chaoyi Huang ( joehuang )



 *From:* Joe Gordon [mailto:joe.gord...@gmail.com]
 *Sent:* Friday, December 12, 2014 8:17 AM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells -
 summit recap and move forward







 On Thu, Dec 11, 2014 at 1:02 AM, joehuang joehu...@huawei.com wrote:

 Hello, Russell,

 Many thanks for your reply. See inline comments.

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Thursday, December 11, 2014 5:22 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit
 recap and move forward

  On Fri, Dec 5, 2014 at 8:23 AM, joehuang joehu...@huawei.com wrote:
  Dear all  TC  PTL,
 
  In the 40 minutes cross-project summit session “Approaches for
  scaling out”[1], almost 100 peoples attended the meeting, and the
  conclusion is that cells can not cover the use cases and
  requirements which the OpenStack cascading solution[2] aim to
  address, the background including use cases and requirements is also
  described in the mail.

 I must admit that this was not the reaction I came away with the
 discussion with.
 There was a lot of confusion, and as we started looking closer, many (or
 perhaps most)
 people speaking up in the room did not agree that the requirements being
 stated are
 things we want to try to satisfy.

 [joehuang] Could you pls. confirm your opinion: 1) cells can not cover the
 use cases and requirements which the OpenStack cascading solution aim to
 address. 2) Need further discussion whether to satisfy the use cases and
 requirements.

 On 12/05/2014 06:47 PM, joehuang wrote:
  Hello, Davanum,
 
  Thanks for your reply.
 
  Cells can't meet the demand for the use cases and requirements
 described in the mail.

 You're right that cells doesn't solve all of the requirements you're
 discussing.
 Cells addresses scale in a region.  My impression from the summit session
  and other discussions is that the scale issues addressed by cells are
 considered
  a priority, while the global API bits are not.

 [joehuang] Agree cells is in the first class priority.

  1. Use cases
  a). Vodafone use case[4](OpenStack summit speech video from 9'02
  to 12'30 ), establishing globally addressable tenants which result
  in efficient services deployment.

  Keystone has been working on federated identity.
 That part makes sense, and is already well under way.

 [joehuang] The major challenge for VDF use case is cross OpenStack
 networking for tenants. The tenant's VM/Volume may be allocated in
 different data centers geographically, but virtual network
 (L2/L3/FW/VPN/LB) should be built for each tenant automatically and
 isolated between tenants. Keystone federation can help authorization
 automation, but the cross OpenStack network automation challenge is still
 there.
 Using prosperity orchestration layer can solve the automation issue, but
 VDF don't like prosperity API in the north-bound, because no ecosystem is
 available. And other issues, for example, how to distribute image, also
 cannot be solved by Keystone federation.

  

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread Dan Smith
 [joehuang] Could you pls. make it more clear for the deployment mode
 of cells when used for globally distributed DCs with single API. Do
 you mean cinder/neutron/glance/ceilometer will be shared by all
 cells, and use RPC for inter-dc communication, and only support one
 vendor's OpenStack distribution? How to do the cross data center
 integration and troubleshooting with RPC if the
 driver/agent/backend(storage/network/sever) from different vendor.

Correct, cells only applies to single-vendor distributed deployments. In
both its current and future forms, it uses private APIs for
communication between the components, and thus isn't suited for a
multi-vendor environment.

Just MHO, but building functionality into existing or new components to
allow deployments from multiple vendors to appear as a single API
endpoint isn't something I have much interest in.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat-docker]Does the heat-docker supports auto-scaling and monitoring the Docker container?

2014-12-11 Thread Jay Lau
So you are using the heat docker driver but not the nova docker driver, right?

If you are using the nova docker driver, then the container is treated as a VM
and you can do monitoring and auto-scaling with heat.

But with the heat docker driver, heat talks to the docker host directly, which
you need to define in the HEAT template, and there is no monitoring for that
case. For manual scaling you can use heat resource-signal to scale up your
stack yourself; for auto-scaling, IMHO, you may want to integrate with some
3rd-party monitor and do some development work to reach this.

Thanks.

2014-12-12 11:19 GMT+08:00 Chenliang (L) hs.c...@huawei.com:

 Hi,
 Now We can deploying Docker containers in an OpenStack environment using
 Heat. But I feel confused.
 Could someone can tell me does the heat-docker supports monitoring the
 Docker container in a stack and how to monitor it?
 Does it supports auto-scaling the Docker container?


 Best Regards,
 -- Liang Chen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-11 Thread Ryu Ishimoto
On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange berra...@redhat.com
wrote:


 Yes, I really think this is a key point. When we introduced the VIF type
 mechanism we never intended for there to be soo many different VIF types
 created. There is a very small, finite number of possible ways to configure
 the libvirt guest XML and it was intended that the VIF types pretty much
 mirror that. This would have given us about 8 distinct VIF type maximum.

 I think the reason for the larger than expected number of VIF types, is
 that the drivers are being written to require some arbitrary tools to
 be invoked in the plug  unplug methods. It would really be better if
 those could be accomplished in the Neutron code than the Nova code, via
 a host agent run  provided by the Neutron mechanism.  This would let
 us have a very small number of VIF types and so avoid the entire problem
 that this thread is bringing up.

 Failing that though, I could see a way to accomplish a similar thing
 without a Neutron launched agent. If one of the VIF type binding
 parameters were the name of a script, we could run that script on
 plug  unplug. So we'd have a finite number of VIF types, and each
 new Neutron mechanism would merely have to provide a script to invoke

 eg consider the existing midonet  iovisor VIF types as an example.
 Both of them use the libvirt ethernet config, but have different
 things running in their plug methods. If we had a mechanism for
 associating a plug script with a vif type, we could use a single
 VIF type for both.

 eg iovisor port binding info would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-iovisor-vif-plug

 while midonet would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-midonet-vif-plug


 And so you see implementing a new Neutron mechanism in this way would
 not require *any* changes in Nova whatsoever. The work would be entirely
 self-contained within the scope of Neutron. It is simply a packaging
 task to get the vif script installed on the compute hosts, so that Nova
 can execute it.

 This is essentially providing a flexible VIF plugin system for Nova,
 without having to have it plug directly into the Nova codebase with
 the API  RPC stability constraints that implies.


+1

Port binding mechanism could vary among different networking technologies,
which is not nova's concern, so this proposal makes sense.  Note that some
vendors already provide port binding scripts that are currently executed
directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor
are two such examples), and this proposal makes it unnecessary to have
these hard-coded in nova.  The only question I have is, how would nova
figure out the arguments for these scripts?  Should nova dictate what they
are?
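
One purely hypothetical convention, just to illustrate the question (nothing like
this exists in Nova today, and the argument order is invented): Nova could document
a fixed argument list and have a generic VIF driver simply execute the script with it.

    # Illustrative sketch only: 'vif_plug_script' is the proposed binding
    # detail, and the (action, devname, port id, mac) argument order is an
    # assumption that Nova and Neutron would have to agree on.
    from nova import utils

    def plug_with_script(vif):
        script = vif['details']['vif_plug_script']
        utils.execute(script, 'plug', vif['devname'], vif['id'],
                      vif['address'], run_as_root=True)

    def unplug_with_script(vif):
        script = vif['details']['vif_plug_script']
        utils.execute(script, 'unplug', vif['devname'], vif['id'],
                      vif['address'], run_as_root=True)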

Ryu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-cinderclient] Return request ID to caller

2014-12-11 Thread Malawade, Abhijeet
Hi,

I want your thoughts on blueprint 'Log Request ID Mappings' for cross projects.
BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings
It will enable operators to get request ID mappings easily and will be useful 
in analysing logs effectively.

For logging 'Request ID Mappings', the client needs to return 
'x-openstack-request-id' to the caller.
Currently python-cinderclient does not return 'x-openstack-request-id' back to 
the caller.

As of now, I can think of the two solutions below to return the 'request-id' 
back from cinder-client to the caller.

1. Return tuple containing response header and response body from all 
cinder-client methods.
  (response header contains 'x-openstack-request-id').

Advantages:

A.  In future, if the response headers are modified then it will be 
available to the caller without making any changes to the python-cinderclient 
code.


Disadvantages:
A. Affects all services using python-cinderclient library as the  return 
type of each method is changed to tuple.
 B. Need to refactor all methods exposed by the python-cinderclient 
library. Also requires changes in the cross projects wherever 
python-cinderclient calls are being made.

Ex. :-
 From Nova, you will need to call cinder-client 'get' method like below  :-
   resp_header, volume = cinderclient(context).volumes.get(volume_id)

x_openstack_request_id = resp_header.get('x-openstack-request-id', None)

Here cinder-client will return both response header and volume. From response 
header, you can get 'x-openstack-request-id'.

2. The optional parameter 'return_req_id' of type list will be passed to each 
of the cinder-client method. If this parameter is passed then cinder-client 
will append ''x-openstack-request-id' received from cinder api to this list.

This is already implemented in glance-client (for V1 api only)
Blueprint : 
https://blueprints.launchpad.net/python-glanceclient/+spec/return-req-id
Review link : https://review.openstack.org/#/c/68524/7

Advantages:

A.  Requires changes in the cross projects only at places wherever 
python-cinderclient calls are being made requiring 'x-openstack-request-id'.


Disadvantages:

A.  Need to refactor all methods exposed by the python-cinderclient library.


Ex. :-
From Nova, you will need to pass  return_req_id parameter as a list.
kwargs['return_req_id'] = []
item = cinderclient(context).volumes.get(volume_id, **kwargs)

if kwargs.get('return_req_id'):
    x_openstack_request_id = kwargs['return_req_id'].pop()

python-cinderclient will add 'x-openstack-request-id' to the 'return_req_id' 
list if it is provided in kwargs.
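
A rough sketch of how that could look inside the client (illustrative only; the
actual implementation is in the review above and may differ):

    # Hypothetical helper in the cinderclient manager code: append the request
    # id from the response headers to the caller-supplied list, if present.
    def _save_request_id(resp, **kwargs):
        req_ids = kwargs.get('return_req_id')
        if req_ids is not None:
            req_ids.append(resp.headers.get('x-openstack-request-id'))

    # e.g. inside VolumeManager.get(), after the HTTP call returns
    # (resp, body), call _save_request_id(resp, **kwargs) before building
    # the Volume object to return.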

IMO, solution #2 is better than #1 for the reasons quoted above.
Takashi NATSUME has already proposed a patch for solution #2.  Please review 
patch https://review.openstack.org/#/c/104482/.
I would appreciate it if you can think of any better solution than #2.

Thank you.
Abhijeet

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-11 Thread Renat Akhmerov

 Maybe put all these different context under a kwarg called context?
 
 For example, 
 
 ctx = {
 env: {...},
 global: {...},
 runtime: {
 execution_id: ...,
 task_id: ...,
 ...
 }
 }
 
 action = SomeMistralAction(context=ctx)

IMO, that is a nice idea. I like it and would go further with it unless someone 
else has any other thoughts.
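
Just to make the shape concrete, a minimal sketch of an action consuming such a
kwarg (names are illustrative only, not the actual Mistral base class):

    # Illustrative only.
    class SomeMistralAction(object):
        def __init__(self, context=None, **params):
            self.context = context or {}
            self.params = params

        def run(self):
            runtime = self.context.get('runtime', {})
            return 'task %s of execution %s (env: %s)' % (
                runtime.get('task_id'),
                runtime.get('execution_id'),
                self.context.get('env', {}))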

Renat
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] deprecation 'pattern' library??

2014-12-11 Thread Joshua Harlow

Filed spec @ https://review.openstack.org/#/c/141220/

Comments welcome :-)

-Josh

Joshua Harlow wrote:

Ya,

I too was surprised by the general lack of this kind of library on pypi.

One would think u know that people deprecate stuff, but maybe this isn't
the norm for python... Why deprecate when u can just make v2.0 ;)

-Josh

Davanum Srinivas wrote:

Surprisingly deprecator is still available on pypi

On Thu, Dec 11, 2014 at 2:04 AM, Julien Danjoujul...@danjou.info wrote:

On Wed, Dec 10 2014, Joshua Harlow wrote:


[…]


Or in general any other comments/ideas about providing such a
deprecation
pattern library?

+1


* debtcollector

made me think of loanshark :)

--
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread henry hly
On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith d...@danplanet.com wrote:
 [joehuang] Could you pls. make it more clear for the deployment mode
 of cells when used for globally distributed DCs with single API. Do
 you mean cinder/neutron/glance/ceilometer will be shared by all
 cells, and use RPC for inter-dc communication, and only support one
 vendor's OpenStack distribution? How to do the cross data center
 integration and troubleshooting with RPC if the
 driver/agent/backend(storage/network/sever) from different vendor.

 Correct, cells only applies to single-vendor distributed deployments. In
 both its current and future forms, it uses private APIs for
 communication between the components, and thus isn't suited for a
 multi-vendor environment.

 Just MHO, but building functionality into existing or new components to
 allow deployments from multiple vendors to appear as a single API
 endpoint isn't something I have much interest in.

 --Dan


Even with the same distribution, cells still face many challenges
across multiple DCs connected over a WAN. Considering OAM, it's easier to
manage autonomous systems connected through an external northbound interface
across remote sites than a single monolithic system connected through
internal RPC messages.

Although cells did some separation and modularization (not to mention it's
still internal RPC across the WAN), they leave out cinder, neutron and
ceilometer. Shall we wait for all of these projects to refactor into a
cell-like hierarchical structure, or adopt a more loosely coupled way and
distribute them into autonomous units at the granularity of a whole
OpenStack (except Keystone, which can handle multiple regions
naturally)?

As we can see, compared with cells, much less work is needed to build a
cascading solution. No patch is needed except for Neutron (waiting for some
upcoming features not yet landed in Juno); nearly all the work lies in the
proxy, which is in fact another kind of driver/agent.

Best Regards
Henry



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] XenAPI questions

2014-12-11 Thread YAMAMOTO Takashi
hi,

good to hear.
do you have any estimate when it will be available?
will it cover dom0 side of the code found in
neutron/plugins/openvswitch/agent/xenapi?

YAMAMOTO Takashi

 Hi Yamamoto,
 
 XenAPI and Neutron do work well together, and we have an private CI that is 
 running Neutron jobs.  As it's not currently the public CI it's harder to 
 access logs.
 We're working on trying to move the existing XenServer CI from a nova-network 
 base to a neutron base, at which point the logs will of course be publically 
 accessible and tested against any changes, thus making it easy to answer 
 questions such as the below.
 
 Bob
 
 -Original Message-
 From: YAMAMOTO Takashi [mailto:yamam...@valinux.co.jp]
 Sent: 11 December 2014 03:17
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] XenAPI questions
 
 hi,
 
 i have questions for XenAPI folks:
 
 - what's the status of XenAPI support in neutron?
 - is there any CI covering it?  i want to look at logs.
 - is it possible to write a small program which runs with the xen
   rootwrap and proxies OpenFlow channel between domains?
   (cf. https://review.openstack.org/#/c/138980/)
 
 thank you.
 
 YAMAMOTO Takashi
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-11 Thread joehuang
Hi Joe,

Thank you for leading us into a deep dive on cascading. My answer is listed below 
your question.

 I don't think this is sufficient. If the underlying hardware
 between multiple vendors is different, setting the same values for
 a flavor will result in different performance characteristics.
 For example, nova allows for setting VCPUs, but nova doesn't provide
 an easy way to define how powerful a VCPU is.   Also, flavors are commonly
 hardware dependent; take what rackspace offers:

 http://www.rackspace.com/cloud/public-pricing#cloud-servers

 Rackspace has I/O Optimized flavors

 * High-performance, RAID 10-protected SSD storage
 * Option of booting from Cloud Block Storage (additional charges apply
 for Cloud Block Storage)
 * Redundant 10-Gigabit networking
 * Disk I/O scales with the number of data disks up to ~80,000 4K random
 read IOPS and ~70,000 4K random write IOPS.*

 How would cascading support something like this?

[joehuang] Just a reminder that cascading works like a normal OpenStack: if 
something can be solved by one OpenStack instance, it should also be feasible 
in cascading through the self-similar mechanism used (just treat the cascaded 
OpenStack as one huge compute node). The only difference between a cascading 
OpenStack and a normal OpenStack lies in the agent/driver processes running on 
the compute node / cinder-volume node / L2/L3 agent.

Let me give an example of how the issues you mentioned can be solved in cascading.

Suppose that we have one cascading OpenStack (OpenStack0) and two cascaded 
OpenStacks (OpenStack1, OpenStack2).

For OpenStack1: there are 5 compute nodes in OpenStack1 with “High-performance, 
RAID 10-protected SSD storage”; we can add these 5 nodes to host aggregate 
“SSD” with extra spec (Storage:SSD). There are another 5 nodes booting from 
Cloud Block Storage; we can add these 5 nodes to host aggregate “cloudstorage” 
with extra spec (Storage:cloud). All these 10 nodes belong to AZ1 
(availability zone 1).

For OpenStack2: there are 5 compute nodes in OpenStack2 with “Redundant 
10-Gigabit networking”; we can add these 5 nodes to host aggregate “SSD” with 
extra spec (Storage:SSD). There are another 5 nodes with random access to 
volumes with a QoS requirement; we can add these 5 nodes to host aggregate 
“randomio” with extra spec (IO:random). All these 10 nodes belong to AZ2 
(availability zone 2). We can define volume QoS associated with the volume 
type vol-type-random-qos.

In the cascading OpenStack, add compute-node1 as the proxy node (proxy-node1) 
for the cascaded OpenStack1 and compute-node2 as the proxy node (proxy-node2) 
for the cascaded OpenStack2. Based on the information described for the cascaded 
OpenStacks, add proxy-node1 to AZ1 and to host aggregates “SSD” and “cloudstorage”, 
and add proxy-node2 to AZ2 and to host aggregates “SSD” and “randomio”; the 
cinder-proxy running on proxy-node2 will retrieve the volume type with its QoS 
information from the cascaded OpenStack2. After that, the tenant user or the 
cloud admin can define a flavor with extra specs that will be matched against 
the host-aggregate specs.

In the cascading layer, you need to configure the corresponding scheduler filters.

Now: if you boot a VM in AZ1 with a flavor carrying (Storage:SSD), the request 
will be scheduled to proxy-node1 and reassembled as a RESTful request to the 
cascaded OpenStack1, and the nodes that were added to the SSD host aggregate 
will be scheduled just as a normal OpenStack would do.
If you boot a VM in AZ2 with a flavor carrying (Storage:SSD), the request will 
be scheduled to proxy-node2 and reassembled as a RESTful request to the 
cascaded OpenStack2, and the nodes that were added to the SSD host aggregate 
will be scheduled just as a normal OpenStack would do.
But if you boot a VM in AZ2 with a flavor carrying (IO:random), the request 
will be scheduled to proxy-node2 and reassembled as a RESTful request to the 
cascaded OpenStack2, and the nodes that were added to the randomio host 
aggregate will be scheduled just as a normal OpenStack would do. If you attach 
a volume created with volume type “vol-type-random-qos” in AZ2 to that VM, the 
QoS for the VM's access to the volume will also take effect.
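
To make the setup above concrete, a rough sketch with python-novaclient / 
python-cinderclient style calls (nova and cinder are assumed to be authenticated 
clients; the host names, flavor name and QoS values are made up for illustration):

    # Cascaded OpenStack1: SSD host aggregate in AZ1 plus a flavor whose extra
    # spec matches it (relies on AggregateInstanceExtraSpecsFilter).
    ssd = nova.aggregates.create('SSD', 'AZ1')
    nova.aggregates.set_metadata(ssd, {'Storage': 'SSD'})
    for host in ['node1', 'node2', 'node3', 'node4', 'node5']:
        nova.aggregates.add_host(ssd, host)

    flavor = nova.flavors.create('ssd.medium', 4096, 2, 40)
    flavor.set_keys({'Storage': 'SSD'})

    # Cascaded OpenStack2: volume type with associated QoS specs.
    qos = cinder.qos_specs.create('random-qos', {'read_iops_sec': '80000'})
    vtype = cinder.volume_types.create('vol-type-random-qos')
    cinder.qos_specs.associate(qos, vtype.id)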

I have just given a relatively easy-to-understand example; more complicated 
use cases can also be handled using cascading's amazing self-similar mechanism, 
which I call the FRACTAL (fantastic math) pattern 
(https://www.linkedin.com/pulse/20140729022031-23841540-openstack-cascading-and-fractal?trk=prof-post).

Best regards

Chaoyi Huang ( joehuang )

From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: Friday, December 12, 2014 11:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit 
recap and move forward



On Thu, Dec 11, 2014 at 6:25 PM, joehuang 
joehu...@huawei.com wrote:
Hello, Joe

Thank you for your good question.

Question:
How would something like flavors 

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-11 Thread joehuang
Hello, Dan,

 Correct, cells only applies to single-vendor distributed deployments. 
 In both its current and future forms, it uses private APIs for 
 communication between the components, and thus isn't suited for a 
 multi-vendor environment.

Thank you for your confirmation. My question is: what are the private APIs, 
and which components are included in the "communication between the components"?

Best Regards

Chaoyi Huang ( joehuang )

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Friday, December 12, 2014 11:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

 [joehuang] Could you pls. make it more clear for the deployment mode 
 of cells when used for globally distributed DCs with single API. Do 
 you mean cinder/neutron/glance/ceilometer will be shared by all cells, 
 and use RPC for inter-dc communication, and only support one vendor's 
 OpenStack distribution? How to do the cross data center integration 
 and troubleshooting with RPC if the
 driver/agent/backend(storage/network/sever) from different vendor.

Correct, cells only applies to single-vendor distributed deployments. In both 
its current and future forms, it uses private APIs for communication between 
the components, and thus isn't suited for a multi-vendor environment.

Just MHO, but building functionality into existing or new components to allow 
deployments from multiple vendors to appear as a single API endpoint isn't 
something I have much interest in.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev