[openstack-dev] Networks Generating In Fuel

2014-08-01 Thread zh...@certusnet.com.cn
Hi! I'm now developing in Fuel and I want to add a virtual IP address to the
NIC of br-ex in the same IP range as the public addresses. So I want to know
where the IPs of node interfaces such as br-ex and br-storage are generated.
Can you tell me?




Mail:zh...@certusnet.com.cn

 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Eoghan Glynn


 Disclaimer: I'm not fully vested on ceilometer internals, so bear with me.
 
 For consumers wanting to leverage ceilometer as a telemetry service atop
 non-OpenStack Clouds or infrastructure they don't own, some edge cases
 crop up. Most notably the consumer may not have access to the hypervisor
 host and therefore cannot leverage the ceilometer compute agent on a per
 host basis.

Yes, currently such access to the hypervisor host is required, at least in
the case of the libvirt-based inspector.
 
 In such scenarios it's my understanding the main option is to employ the
 central agent to poll measurements from the monitored resources (VMs,
 etc.). 

Well, the ceilometer central agent is not generally concerned with
polling related *directly* to VMs - rather it handles acquiring
data from RESTful APIs (glance, neutron, etc.) that are not otherwise
available in the form of notifications, and also from host-level
interfaces such as SNMP.
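
To make that distinction concrete, a central-agent-style pollster is
essentially a loop over a RESTful API. The sketch below is purely
illustrative - the endpoint, field names, and sample shape are all
hypothetical, not ceilometer's actual pollster plugin interface:

```python
# Illustrative sketch only: a central-agent-style pollster that acquires
# data from a RESTful API rather than from the hypervisor. The endpoint,
# JSON fields, and (name, value, resource_id) sample shape are hypothetical.
import json
import urllib.request


def poll_samples(endpoint, fetch=None):
    """Fetch a JSON list of resources and turn each into a sample tuple.

    `fetch` can be injected for testing; by default it does a real HTTP GET.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.loads(resp.read().decode("utf-8"))
    for resource in fetch(endpoint):
        yield ("image.size", resource["size"], resource["id"])


# Usage with a stubbed fetch, standing in for e.g. the glance images API:
fake_api = lambda url: [{"id": "img-1", "size": 512}, {"id": "img-2", "size": 1024}]
samples = list(poll_samples("http://glance.example/v2/images", fetch=fake_api))
print(samples)  # [('image.size', 512, 'img-1'), ('image.size', 1024, 'img-2')]
```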

 However this approach requires Cloud APIs (or other mechanisms)
 which allow the polling impl to obtain the desired measurements (VM
 memory, CPU, net stats, etc.) and moreover the polling approach has its
 own set of pros / cons from an arch / topology perspective.

Indeed.

 The other potential option is to setup the ceilometer compute agent
 within the VM and have each VM publish measurements to the collector --
 a local VM agent / inspector if you will. With respect to this local VM
 agent approach:
 (a) I haven't seen this documented to date; is there any desire / reqs
 to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for this
 approach?

So in a sense this is similar to the Heat cfn-push-stats utility[1]
and seems to suffer from the same fundamental problem, i.e. the need
for injection of credentials (user/passwds, keys, whatever) into the
VM in order to allow the metric datapoints be pushed up to the
infrastructure layer (e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?

Cheers,
Eoghan

[1] https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats
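
As a footnote on why the credentialing issue is fundamental: whichever
transport is used, the guest has to hold *some* secret in order to
authenticate the datapoints it pushes up. The toy sketch below is pure
illustration - the config-drive-injected HMAC key and the payload shape are
assumptions, not an actual ceilometer or Heat mechanism:

```python
# Toy illustration of the credential-injection problem: to push a metric
# to the infrastructure layer, the guest must hold a secret (here a
# per-instance HMAC key, hypothetically provisioned via config-drive),
# and the collector side must be able to verify it.
import hashlib
import hmac
import json

INJECTED_KEY = b"per-instance-secret"  # would be injected into the VM


def build_signed_sample(name, value, resource_id, key=INJECTED_KEY):
    # Guest side: serialize the sample and sign it with the injected key.
    sample = {"name": name, "volume": value, "resource_id": resource_id}
    payload = json.dumps(sample, sort_keys=True).encode("utf-8")
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, sig


def verify(payload, sig, key=INJECTED_KEY):
    # Collector side: re-compute the signature before accepting the data.
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)


payload, sig = build_signed_sample("cpu_util", 42.0, "vm-1234")
print(verify(payload, sig))  # True
```

The scheme only pushes the problem around: the key still has to reach the
guest somehow, which is exactly the issue raised above.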



[openstack-dev] [Neutron] tox -e pep8 fails at requirements.txt

2014-08-01 Thread trinath.soman...@freescale.com
Hi -

When I run tox to check pep8 on the newly cloned neutron from the openstack
git, I get the following error.

pep8 create: /root/fw_code_base/neutron_01082014/.tox/pep8
pep8 installdeps: -r/root/fw_code_base/neutron_01082014/requirements.txt, 
-r/root/fw_code_base/neutron_01082014/test-requirements.txt
ERROR: invocation failed, logfile: 
/root/fw_code_base/neutron_01082014/.tox/pep8/log/pep8-1.log
ERROR: actionid=pep8
msg=getenv
cmdargs=[local('/root/fw_code_base/neutron_01082014/.tox/pep8/bin/pip'), 
'install', '-U', '-r/root/fw_code_base/neutron_01082014/requirements.txt', 
'-r/root/fw_code_base/neutron_01082014/test-requirements.txt']
env={'PYTHONIOENCODING': 'utf_8', 'XDG_RUNTIME_DIR': '/run/user/0', 
'VIRTUAL_ENV': '/root/fw_code_base/neutron_01082014/.tox/pep8', 'LESSOPEN': '| 
/usr/bin/lesspipe %s', 'SSH_CLIENT': '10.232.84.77 61736 22', 'LOGNAME': 
'root', 'USER': 'root', 'HOME': 
'/root/fw_code_base/neutron_01082014/.tox/pep8/tmp/pseudo-home', 'PATH': 
'/root/fw_code_base/neutron_01082014/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'XDG_SESSION_ID': '19', '_': '/usr/bin/tox', 'SSH_CONNECTION': '10.232.84.77 
61736 10.232.90.26 22', 'LANG': 'en_IN', 'TERM': 'xterm', 'SHELL': '/bin/bash', 
'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE': 'en_IN:en', 'SHLVL': '1', 
'SSH_TTY': '/dev/pts/0', 'OLDPWD': '/root/fw_code_base', 'PWD': 
'/root/fw_code_base/neutron_01082014', 'PYTHONHASHSEED': '0', 'MAIL': 
'/var/mail/root', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
Downloading/unpacking pbr>=0.6,!=0.7,<1.0 (from -r 
/root/fw_code_base/neutron_01082014/requirements.txt (line 1))
  Cannot fetch index base URL https://pypi.python.org/simple/
  Could not find any downloads that satisfy the requirement pbr>=0.6,!=0.7,<1.0 
(from -r /root/fw_code_base/neutron_01082014/requirements.txt (line 1))
Cleaning up...
No distributions at all found for pbr>=0.6,!=0.7,<1.0 (from -r 
/root/fw_code_base/neutron_01082014/requirements.txt (line 1))
Storing debug log for failure in 
/root/fw_code_base/neutron_01082014/.tox/pep8/tmp/pseudo-home/.pip/pip.log

ERROR: could not install deps 
[-r/root/fw_code_base/neutron_01082014/requirements.txt, 
-r/root/fw_code_base/neutron_01082014/test-requirements.txt]
______________________ summary ______________________
ERROR:   pep8: could not install deps 
[-r/root/fw_code_base/neutron_01082014/requirements.txt, 
-r/root/fw_code_base/neutron_01082014/test-requirements.txt]

Can anyone help me resolve this issue?

Kindly please help me in this regard.

Thanks in advance.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048



Re: [openstack-dev] [Marconi] All-hands documentation day

2014-08-01 Thread Flavio Percoco
On 07/31/2014 09:57 PM, Victoria Martínez de la Cruz wrote:
 Hi everyone,
 
 Earlier today I went through the documentation requirements for
 graduation [0] and it looks like there is some work to do.
 
 The structure we should follow is detailed
 in https://etherpad.openstack.org/p/marconi-graduation.
 
 It would be nice to do an all-hands documentation day next week to make
 this happen.
 
 Can you join us? When is it better for you?

Hey Vicky,

Awesome work, thanks for putting this together.

I'd propose doing it on Thursday since, hopefully, some other patches
will land during that week that will require documentation too.

Flavio,

 
 My best,
 
 Victoria
 
 [0] 
 https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements.rst#documentation--user-support-1


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Rao Dingyuan
Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
between VM and the central agent(or collector). But in the cloud, some
networks are isolated from infrastructure layer network, because of security
reasons. Some of our customers even explicitly require such security
protection. Does it mean those isolated VMs cannot be monitored by this
proposed-VM-agent?

I really wish we could figure out how it could work for all VMs with no
security issues.

I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


Best regards!
Kurt

-----Original Message-----
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: August 1, 2014 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
potential enhancement



 Disclaimer: I'm not fully vested on ceilometer internals, so bear with me.
 
 For consumers wanting to leverage ceilometer as a telemetry service 
 atop non-OpenStack Clouds or infrastructure they don't own, some edge 
 cases crop up. Most notably the consumer may not have access to the 
 hypervisor host and therefore cannot leverage the ceilometer compute 
 agent on a per host basis.

Yes, currently such access to the hypervisor host is required, at least in the
case of the libvirt-based inspector.
 
 In such scenarios it's my understanding the main option is to employ 
 the central agent to poll measurements from the monitored resources 
 (VMs, etc.).

Well, the ceilometer central agent is not generally concerned with
polling related *directly* to VMs - rather it handles acquiring data from
RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
form of notifications, and also from host-level interfaces such as SNMP.

 However this approach requires Cloud APIs (or other mechanisms) which 
 allow the polling impl to obtain the desired measurements (VM memory, 
 CPU, net stats, etc.) and moreover the polling approach has its own 
 set of pros / cons from an arch / topology perspective.

Indeed.

 The other potential option is to setup the ceilometer compute agent 
 within the VM and have each VM publish measurements to the collector 
 -- a local VM agent / inspector if you will. With respect to this 
 local VM agent approach:
 (a) I haven't seen this documented to date; is there any desire / reqs 
 to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for 
 this approach?

So in a sense this is similar to the Heat cfn-push-stats utility[1] and
seems to suffer from the same fundamental problem, i.e. the need for
injection of credentials (user/passwds, keys, whatever) into the VM in order
to allow the metric datapoints be pushed up to the infrastructure layer
(e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?

Cheers,
Eoghan

[1]
https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats



Re: [openstack-dev] [Neutron] tox -e pep8 fails at requirements.txt

2014-08-01 Thread Ajaya Agrawal
Hi,

It seems pypi was not reachable at that moment, or you could be behind a
proxy. Try again at a later time; it should work.

Cheers,
Ajaya


On Fri, Aug 1, 2014 at 12:29 PM, trinath.soman...@freescale.com 
trinath.soman...@freescale.com wrote:

  Hi –



 When I run tox to check pep8 on the newly cloned neutron from openstack
 git , I get the following error.



 pep8 create: /root/fw_code_base/neutron_01082014/.tox/pep8

 pep8 installdeps: -r/root/fw_code_base/neutron_01082014/requirements.txt,
 -r/root/fw_code_base/neutron_01082014/test-requirements.txt

 ERROR: invocation failed, logfile:
 /root/fw_code_base/neutron_01082014/.tox/pep8/log/pep8-1.log

 ERROR: actionid=pep8

 msg=getenv

 cmdargs=[local('/root/fw_code_base/neutron_01082014/.tox/pep8/bin/pip'),
 'install', '-U', '-r/root/fw_code_base/neutron_01082014/requirements.txt',
 '-r/root/fw_code_base/neutron_01082014/test-requirements.txt']

 env={'PYTHONIOENCODING': 'utf_8', 'XDG_RUNTIME_DIR': '/run/user/0',
 'VIRTUAL_ENV': '/root/fw_code_base/neutron_01082014/.tox/pep8', 'LESSOPEN':
 '| /usr/bin/lesspipe %s', 'SSH_CLIENT': '10.232.84.77 61736 22', 'LOGNAME':
 'root', 'USER': 'root', 'HOME':
 '/root/fw_code_base/neutron_01082014/.tox/pep8/tmp/pseudo-home', 'PATH':
 '/root/fw_code_base/neutron_01082014/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'XDG_SESSION_ID': '19', '_': '/usr/bin/tox', 'SSH_CONNECTION':
 '10.232.84.77 61736 10.232.90.26 22', 'LANG': 'en_IN', 'TERM': 'xterm',
 'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE':
 'en_IN:en', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/0', 'OLDPWD':
 '/root/fw_code_base', 'PWD': '/root/fw_code_base/neutron_01082014',
 'PYTHONHASHSEED': '0', 'MAIL': '/var/mail/root', 'LS_COLORS':
 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}

 Downloading/unpacking pbr>=0.6,!=0.7,<1.0 (from -r
 /root/fw_code_base/neutron_01082014/requirements.txt (line 1))

   Cannot fetch index base URL https://pypi.python.org/simple/

   Could not find any downloads that satisfy the requirement
 pbr>=0.6,!=0.7,<1.0 (from -r
 /root/fw_code_base/neutron_01082014/requirements.txt (line 1))

 Cleaning up...

 No distributions at all found for pbr>=0.6,!=0.7,<1.0 (from -r
 /root/fw_code_base/neutron_01082014/requirements.txt (line 1))

 Storing debug log for failure in
 /root/fw_code_base/neutron_01082014/.tox/pep8/tmp/pseudo-home/.pip/pip.log



 ERROR: could not install deps
 [-r/root/fw_code_base/neutron_01082014/requirements.txt,
 -r/root/fw_code_base/neutron_01082014/test-requirements.txt]

 ______________________ summary ______________________

 ERROR:   pep8: could not install deps
 [-r/root/fw_code_base/neutron_01082014/requirements.txt,
 -r/root/fw_code_base/neutron_01082014/test-requirements.txt]



 Can anyone help me on resolving this issue.



 Kindly please help me in this regard.



 Thanks in advance.



 --

 Trinath Somanchi - B39208

 trinath.soman...@freescale.com | extn: 4048





Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-01 Thread Ladislav Smola

Hello,

yes, this is a much-needed change, so I'd vote for -2'ing everything unless
it's rebased on top of 105347.


Thank you for the patches.

Ladislav

On 08/01/2014 02:19 AM, Steve Baker wrote:
The changes to port tripleo-heat-templates to HOT have been rebased to 
the current state and are ready to review. They are the next steps in 
blueprint tripleo-juno-remove-mergepy.


However there is coordination needed to merge since every existing 
tripleo-heat-templates change will need to be rebased and changed 
after the port lands (lucky you!).


Here is a summary of the important changes in the series:

https://review.openstack.org/#/c/105327/
Low risk and plenty of +2s, just needs enough validation from CI for 
an approve


https://review.openstack.org/#/c/105328/
Scripted conversion to HOT. Converts everything except Fn::Select

https://review.openstack.org/#/c/105347/
Manual conversion of Fn::Select to extended get_attr

I'd like to suggest the following approach for getting these to land:
* Any changes which really should land before the above 3 get listed 
in this mail thread (vlan?)

* Reviews of the above 3 changes, and local testing of change 105347
* All other tripleo-heat-templates need to be rebased/reworked to be 
after 105347 (and maybe -2 until they are?)


I'm available for any questions on porting your changes to HOT.

cheers




Re: [openstack-dev] [Congress] data-source renovation

2014-08-01 Thread Rajdeep Dua
Option 2 looks like a better idea keeping in mind the data model
consistency with Neutron/Nova.
Could we write something similar to a view which becomes a layer on top of
this data model?


On Wed, Jul 30, 2014 at 3:33 AM, Tim Hinrichs thinri...@vmware.com wrote:

 Hi all,

 As I mentioned in a previous IRC, when writing our first few policies I
 had trouble using the tables we currently use to represent external data
 sources like Nova/Neutron.

 The main problem is that wide tables (those with many columns) are hard to
 use.  (a) it is hard to remember what all the columns are, (b) it is easy
 to mistakenly use the same variable in two different tables in the body of
 the rule, i.e. to create an accidental join, (c) changes to the datasource
 drivers can require tedious/error-prone modifications to policy.

 I see several options.  Once we choose something, I’ll write up a spec and
 include the other options as alternatives.


 1) Add a preprocessor to the policy engine that makes it easier to deal
 with large tables via named-argument references.

 Instead of writing a rule like

 p(port_id, name) :-
 neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts,
 binding_cap, status, name, admin_state_up, network_id, tenant_id,
 binding_vif, device_owner, mac_address, fixed_ips, router_id, binding_host)

 we would write

 p(id, nme) :-
 neutron:ports(port_id=id, name=nme)

 The preprocessor would fill in all the missing variables and hand the
 original rule off to the Datalog engine.

 Pros: (i) leveraging vanilla database technology under the hood
   (ii) policy is robust to changes in the fields of the original data
 b/c the Congress data model is different than the Nova/Neutron data models
 Cons: (i) we will need to invert the preprocessor when showing
 rules/traces/etc. to the user
   (ii) a layer of translation makes debugging difficult
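
As an illustration of how option (1) could work, here is a minimal sketch of
the preprocessor step. The schema table is abbreviated and hypothetical (not
Congress's real datasource schema), and fresh-variable naming is just one
possible convention:

```python
# Hypothetical sketch of the option-1 preprocessor: expand a named-argument
# atom like neutron:ports(port_id=id, name=nme) into the full positional
# atom, filling unused columns with fresh variables.
import itertools

# Abbreviated, illustrative column list for the table in the example above.
SCHEMAS = {
    "neutron:ports": ["port_id", "addr_pairs", "security_groups", "name",
                      "network_id", "tenant_id"],
}

_fresh = itertools.count()  # source of fresh variable names


def expand(table, named_args):
    columns = SCHEMAS[table]
    args = []
    for col in columns:
        # Use the caller's variable if named, otherwise invent a fresh one.
        args.append(named_args.get(col, "_V%d" % next(_fresh)))
    return "%s(%s)" % (table, ", ".join(args))


print(expand("neutron:ports", {"port_id": "id", "name": "nme"}))
# neutron:ports(id, _V0, _V1, nme, _V2, _V3)
```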

 2) Be disciplined about writing narrow tables and write
 tutorials/recommendations demonstrating how.

 Instead of a table like...
 neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts,
 binding_cap, status, name, admin_state_up, network_id, tenant_id,
 binding_vif, device_owner, mac_address, fixed_ips, router_id, binding_host)

 we would have many tables...
 neutron:ports(port_id)
 neutron:ports.addr_pairs(port_id, addr_pairs)
 neutron:ports.security_groups(port_id, security_groups)
 neutron:ports.extra_dhcp_opts(port_id, extra_dhcp_opts)
 neutron:ports.name(port_id, name)
 ...

 People writing policy would write rules such as ...

 p(x) :- neutron:ports.name(port, name), ...

 [Here, the period e.g. in ports.name is not an operator--just a
 convenient way to spell the tablename.]

 To do this, Congress would need to know which columns in a table are
 sufficient to uniquely identify a row, which in most cases is just the ID.

 Pros: (i) this requires only changes in the datasource drivers; everything
 else remains the same
   (ii) still leveraging database technology under the hood
   (iii) policy is robust to changes in fields of original data
 Cons: (i) datasource driver can force policy writer to use wide tables
   (ii) this data model is much different than the original data models
   (iii) we need primary-key information about tables
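
For illustration, the driver-side transformation behind option (2) could be
sketched as follows. The function name and driver interface are hypothetical;
only the table-naming convention follows the example above:

```python
# Sketch of the option-2 idea on the datasource-driver side: explode one
# wide row into several narrow tables, each keyed by the row's primary key.
def to_narrow_tables(wide_row, pk="port_id", prefix="neutron:ports"):
    # The base table holds just the primary key.
    tables = {prefix: [(wide_row[pk],)]}
    # Every other field becomes its own (pk, value) table.
    for field, value in wide_row.items():
        if field == pk:
            continue
        tables["%s.%s" % (prefix, field)] = [(wide_row[pk], value)]
    return tables


row = {"port_id": "p1", "name": "web", "network_id": "n1"}
print(to_narrow_tables(row))
# {'neutron:ports': [('p1',)],
#  'neutron:ports.name': [('p1', 'web')],
#  'neutron:ports.network_id': [('p1', 'n1')]}
```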

 3) Enhance the Congress policy language to handle objects natively.

 Instead of writing a rule like the following ...

 p(port_id, name, group) :-
 neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts,
 binding_cap, status, name, admin_state_up, network_id, tenant_id,
 binding_vif, device_owner, mac_address, fixed_ips, router_id, binding_host),
 neutron:ports.security_groups(security_group, group)

 we would write a rule such as
 p(port_id, name, group) :-
 neutron:ports(port),
 port.name(name),
 port.id(port_id),
 port.security_groups(group)

 The big difference here is that the period (.) is an operator in the
 language, just as in C++/Java.

 Pros:
 (i) The data model we use in Congress is almost exactly the same as the
 data model we use in Neutron/Nova.

 (ii) Policy is robust to changes in the Neutron/Nova data model as long as
 those changes only ADD fields.

 (iii) Programmers may be slightly more comfortable with this language.

 Cons:

 (i) The obvious implementation (changing the engine to implement the (.)
 operator directly) is quite a change from traditional database technology.
 At this point, that seems risky.

 (ii) It is unclear how to implement this via a preprocessor (thereby
 leveraging database technology).  The key problem I see is that we would
 need to translate port.name(...) into something like option (2) above.
  The difficulty is that TABLE could sometimes be a port, sometimes be a
 network, sometimes be a subnet, etc.

 (iii) Requires some extra syntactic restrictions to ensure we don't lose
 decidability.

 (iv) Because the Congress and Nova/Neutron models are the same, changes to
 the Nova/Neutron model can 

[openstack-dev] how and which tempest tests to run

2014-08-01 Thread Nikesh Kumar Mahalka
I deployed a single node devstack on Ubuntu 14.04.
This devstack belongs to Juno.
I have written a cinder-volume driver for my client backend.
I want to contribute this driver in Juno release.
As I understand the contribution process, it requires running Tempest tests
for Continuous Integration.

Could anyone tell me how and which Tempest tests to run on this devstack
deployment for a cinder volume driver?
Also, Tempest has many test cases. Do I have to pass all of them to
contribute my driver?

And am I missing anything in the local.conf below?

*Below are steps for my devstack deployment:*

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh


Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Eoghan Glynn


 Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
 between VM and the central agent(or collector). But in the cloud, some
 networks are isolated from infrastructure layer network, because of security
 reasons. Some of our customers even explicitly require such security
 protection. Does it mean those isolated VMs cannot be monitored by this
 proposed-VM-agent?

Yes, that sounds plausible to me.

Cheers,
Eoghan
 
 I really wish we could figure out how it could work for all VMs with no
 security issues.
 
 I'm not familiar with heat-cfntools, so, correct me if I am wrong :)
 
 
 Best regards!
 Kurt
 
 -----Original Message-----
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: August 1, 2014 14:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
 potential enhancement
 
 
 
  Disclaimer: I'm not fully vested on ceilometer internals, so bear with me.
  
  For consumers wanting to leverage ceilometer as a telemetry service
  atop non-OpenStack Clouds or infrastructure they don't own, some edge
  cases crop up. Most notably the consumer may not have access to the
  hypervisor host and therefore cannot leverage the ceilometer compute
  agent on a per host basis.
 
 Yes, currently such access to the hypervisor host is required, at least in the
 case of the libvirt-based inspector.
  
  In such scenarios it's my understanding the main option is to employ
  the central agent to poll measurements from the monitored resources
  (VMs, etc.).
 
 Well, the ceilometer central agent is not generally concerned with
 polling related *directly* to VMs - rather it handles acquiring data from
 RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
 form of notifications, and also from host-level interfaces such as SNMP.
 
  However this approach requires Cloud APIs (or other mechanisms) which
  allow the polling impl to obtain the desired measurements (VM memory,
  CPU, net stats, etc.) and moreover the polling approach has its own
  set of pros / cons from an arch / topology perspective.
 
 Indeed.
 
  The other potential option is to setup the ceilometer compute agent
  within the VM and have each VM publish measurements to the collector
  -- a local VM agent / inspector if you will. With respect to this
  local VM agent approach:
  (a) I haven't seen this documented to date; is there any desire / reqs
  to support this topology?
  (b) If yes to #a, I whipped up a crude PoC here:
  http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for
  this approach?
 
 So in a sense this is similar to the Heat cfn-push-stats utility[1] and
 seems to suffer from the same fundamental problem, i.e. the need for
 injection of credentials (user/passwds, keys, whatever) into the VM in order
 to allow the metric datapoints be pushed up to the infrastructure layer
 (e.g. onto the AMQP bus, or to a REST API endpoint).
 
 How would you propose to solve that credentialing issue?
 
 Cheers,
 Eoghan
 
 [1]
 https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats
 


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-08-01 Thread Mandeep Dhami
Hi Armando:

  If a core-reviewer puts a -2, there must be a good reason for it

I agree. The problem is that after the issue identified in the initial -2
review has been fixed and the patch updated, it (sometimes) happens that we
cannot get the original reviewer to re-review that update for weeks -
creating the type of issues identified in this thread.

I would agree that if this were a one-off scenario, we should handle it
as a special case as you suggest. Unfortunately, this is not a one-off
instance, and hence my request for clearer guidelines from the PTL for such
cases.

Regards,
Mandeep



On Thu, Jul 31, 2014 at 3:54 PM, Armando M. arma...@gmail.com wrote:

 It is not my intention debating, pointing fingers and finding culprits,
 these issues can be addressed in some other context.

 I am gonna say three things:

 1) If a core-reviewer puts a -2, there must be a good reason for it. If
 other reviewers blindly move on as some people seem to imply here, then
 those reviewers should probably not review the code at all! My policy is to
 review all the code I am interested in/I can, regardless of the score. My
 -1 may be someone's +1 (or vice versa), so 'trusting' someone else's vote
 is the wrong way to go about this.

 2) If we all feel that this feature is important (which I am not sure it
 was being marked as 'low' in oslo, not sure how it was tracked in Neutron),
 there is the weekly IRC Neutron meeting to raise awareness, since all cores
 participate; to the best of my knowledge we never spoke (or barely) of the
 rootwrap work.

 3) If people do want this work in Juno (Carl being one of them), we can
 figure out how to make one final push, and assess potential regression. We
 'rushed' other features late in cycle in the past (like nova/neutron event
 notifications) and if we keep this disabled by default in Juno, I don't
 think it's really that risky. I can work with Carl to give the patches some
 more love.

 Armando



 On 31 July 2014 15:40, Rudra Rugge ru...@contrailsystems.com wrote:

 Hi Kyle,

 I also agree with Mandeep's suggestion of putting a time frame on a
 lingering -2 once the concerns raised have been addressed. In my
 experience, a sticky -2 also deters other reviewers from reviewing an
 updated patch.

 Either a time-frame or a possible override by PTL (move to -1) would help
 make progress on the review.

 Regards,
 Rudra


 On Thu, Jul 31, 2014 at 2:29 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:

 Hi Kyle:

 As -2 is sticky, and as there is a possibility that the original
 core might not find time to get back to re-reviewing it, do you think that
 there should be clearer guidelines on its usage (to avoid what you
 identified as dropping of the balls)?

 Salvatore had a good guidance in a related thread [0], do you agree with
 something like that?


 I try to avoid -2s as much as possible. I put a -2 only when I reckon your
 patch should never be merged because it'll make the software unstable or
 tries to solve a problem that does not exist. -2s stick across patches and
 tend to put off other reviewers.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041339.html


 Or do you think that 3-5 days after an update that addresses the issues
 identified in the original -2, we should automatically remove that -2? If
 this does not happen often, this process does not have to be automated,
 just an exception that the PTL can exercise to address issues where the
 original reason for -2 has been addressed and nothing new has been
 identified?



 On Thu, Jul 31, 2014 at 11:25 AM, Kyle Mestery mest...@mestery.com
 wrote:

 On Thu, Jul 31, 2014 at 7:11 AM, Yuriy Taraday yorik@gmail.com
 wrote:
  On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery mest...@mestery.com
 wrote:
  and even less
  possibly rootwrap [3] if the security implications can be worked out.
 
  Can you please provide some input on those security implications that
 are
  not worked out yet?
  I'm really surprised to see such comments in some ML thread not
 directly
  related to the BP. Why is my spec blocked? Neither spec [1] nor code
 (which
  is available for a really long time now [2] [3]) can get enough
 reviewers'
  attention because of those groundless -2's. Should I abandon these
 change
  requests and file new ones to get some eyes on my code and proposals?
 It's
 just getting ridiculous. Let's take a look at the timeline, shall we?
 
 I share your concerns here as well, and I'm sorry you've had a bad
 experience working with the community here.

  Mar, 25 - first version of the first part of Neutron code is
 published at
  [2]
  Mar, 28 - first reviewers come and it gets -1'd by Mark because of lack
 of a BP (thankfully it wasn't a -2 yet, so reviews continued)
  Apr, 1 - Both Oslo [5] and Neutron [6] BPs are created;
  Apr, 2 - first version of the second part of Neutron code is
 published at
  [3];
  May, 16 - first version of Neutron spec is published at [1];
  May, 19 

[openstack-dev] [nova] libvirtError: XML error: Missing CPU model name on 2nd level vm

2014-08-01 Thread Chen CH Ji

Hi
  I don't have a spare physical machine, so I created a 2nd-level test
env (a KVM virtual machine on top of a physical host, then ran devstack
on the VM).
  I am not sure whether this setup is workable, because I saw the
following error when starting the nova-compute service. Is it a bug, or
do I need to update my configuration instead? Thanks


2014-08-01 17:04:51.532 DEBUG nova.virt.libvirt.config [-] Generated XML
('<cpu>\n  <arch>x86_64</arch>\n  <topology sockets="1" cores="1"
threads="1"/>\n</cpu>\n',)  from (pid=16956)
to_xml /opt/stack/nova/nova/virt/libvirt/config.py:79
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 346,
in fire_timers
timer()
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 56,
in __call__
cb(*args, **kw)
  File /usr/lib/python2.7/dist-packages/eventlet/event.py, line 163, in
_do_send
waiter.switch(result)
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line
194, in main
result = function(*args, **kwargs)
  File /opt/stack/nova/nova/openstack/common/service.py, line 490, in
run_service
service.start()
  File /opt/stack/nova/nova/service.py, line 164, in start
self.manager.init_host()
  File /opt/stack/nova/nova/compute/manager.py, line 1055, in init_host
self.driver.init_host(host=self.host)
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 633, in
init_host
self._do_quality_warnings()
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 616, in
_do_quality_warnings
caps = self._get_host_capabilities()
  File /opt/stack/nova/nova/virt/libvirt/driver.py, line 2942, in
_get_host_capabilities
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in
doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in
proxy_call
rv = execute(f,*args,**kwargs)
  File /usr/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in
tworker
rv = meth(*args,**kwargs)
  File /usr/lib/python2.7/dist-packages/libvirt.py, line 3127, in
baselineCPU
if ret is None: raise libvirtError ('virConnectBaselineCPU() failed',
conn=self)
libvirtError: XML error: Missing CPU model name
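
[Editorial note: this "Missing CPU model name" failure is commonly reported
for nested KVM, where the second-level host's capabilities XML lacks a named
CPU model for libvirt's baselineCPU call. One frequently suggested
workaround - an assumption based on similar reports, not a verified fix for
this exact trace - is to pin the CPU mode in nova.conf on the nested
hypervisor:]

```ini
# nova.conf on the second-level (nested) devstack host
[libvirt]
# "none" avoids host CPU comparison/baselining, which requires a named
# CPU model in the host capabilities XML.
cpu_mode = none
```

Alternatively, giving the first-level VM a concrete CPU definition (e.g.
`<cpu mode='host-passthrough'/>` in its libvirt domain XML) should make the
nested host expose a model name.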

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Cinder tempest api volume tests failed

2014-08-01 Thread Nikesh Kumar Mahalka
Hi Mike,

The test that failed for me is:
tempest.api.volume.admin.test_volume_types.VolumeTypesTest

I am getting an error in the following call inside that test:
self.volumes_client.wait_for_volume_status(volume['id'], 'available')

That call is made in this test method:

@test.attr(type='smoke')
def test_create_get_delete_volume_with_volume_type_and_extra_specs(self)
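
For reference, wait_for_volume_status in tempest is essentially a
poll-until-status loop with a timeout. A simplified stdlib-only sketch (the
real tempest client differs; the get_volume callable and the interval and
timeout defaults here are assumptions):

```python
import time

class VolumeTimeoutError(Exception):
    pass

def wait_for_volume_status(get_volume, volume_id, status,
                           interval=1, timeout=60,
                           sleep=time.sleep, clock=time.monotonic):
    """Poll get_volume(volume_id) until its 'status' field matches.

    Raises ValueError if the volume lands in the 'error' state and
    VolumeTimeoutError if the deadline passes first.
    """
    deadline = clock() + timeout
    while True:
        volume = get_volume(volume_id)
        if volume['status'] == status:
            return volume
        if volume['status'] == 'error':
            raise ValueError('volume %s went into error state' % volume_id)
        if clock() >= deadline:
            raise VolumeTimeoutError(
                'volume %s never reached %s' % (volume_id, status))
        sleep(interval)
```

In this failure the loop times out because the scheduler never places the
volume at all, which is why the c-sch log below is the interesting one.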


In the c-sch log I found this major issue:

2014-08-01 14:08:05.773 11853 ERROR cinder.scheduler.flows.create_volume
[req-ceafd00c-30b1-4846-a555-6116556efb3b 43af88811b2243238d3d9fc732731565
a39922e8e5284729b07fcd045cfd5a88 - - -] Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
No valid host was found. No weighed hosts available

By analyzing the test I found that it:
1) creates a volume type with extra_specs
2) creates a volume with that volume type; this is where it fails.


Below is my new local.conf file. Am I missing anything in this?

[[local|localrc]]
ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]
[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip=192.168.2.192
san_login=some_name
san_password=some_password
client_iscsi_ips = 192.168.2.193


Below is my cinder.conf:
[keystone_authtoken]
auth_uri = http://192.168.2.64:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = some_password
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.64:35357

[DEFAULT]
rabbit_password = some_password
rabbit_hosts = 192.168.2.64
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
default_volume_type = client_driver
enabled_backends = client_driver
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:some_password@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.64
verbose = True
debug = True
auth_strategy = keystone

[client_driver]
client_iscsi_ips = 192.168.2.193
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver =
cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver



Regards
Nikesh









On Fri, Aug 1, 2014 at 1:56 AM, Mike Perez thin...@gmail.com wrote:

 On 11:30 Thu 31 Jul , Nikesh Kumar Mahalka wrote:
  I deployed a single node devstack on Ubuntu 14.04.
  This devstack belongs to Juno.
 
  When i am running tempest api volume test, i am getting some tests
 failed.

 Hi Nikesh,

 To further figure out what's wrong, take a look at the c-vol, c-api and
 c-sch
 tabs in the stack screen session. If you're unsure where to go from there
 after
 looking at the output, set the `SCREEN_LOGDIR` setting in your local.conf
 [1]
 and copy the logs from those tabs to paste.openstack.org for us to see.

 [1] - http://devstack.org/configuration.html

 --
 Mike Perez

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-01 Thread Tomas Sedovic
On 01/08/14 02:19, Steve Baker wrote:
 The changes to port tripleo-heat-templates to HOT have been rebased to
 the current state and are ready to review. They are the next steps in
 blueprint tripleo-juno-remove-mergepy.
 
 However there is coordination needed to merge since every existing
 tripleo-heat-templates change will need to be rebased and changed after
 the port lands (lucky you!).
 
 Here is a summary of the important changes in the series:
 
 https://review.openstack.org/#/c/105327/
 Low risk and plenty of +2s, just needs enough validation from CI for an
 approve
 
 https://review.openstack.org/#/c/105328/
 Scripted conversion to HOT. Converts everything except Fn::Select
 
 https://review.openstack.org/#/c/105347/
 Manual conversion of Fn::Select to extended get_attr
 
 I'd like to suggest the following approach for getting these to land:
 * Any changes which really should land before the above 3 get listed in
 this mail thread (vlan?)
 * Reviews of the above 3 changes, and local testing of change 105347
 * All other tripleo-heat-templates need to be rebased/reworked to be
 after 105347 (and maybe -2 until they are?)

Agreed to all this. We've done a similar thing for the software config work.

I'll try to do a local run of 105347 today.

 
 I'm available for any questions on porting your changes to HOT.
 
 cheers
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Backup/restore namespace config move has leftovers

2014-08-01 Thread Denis Makogon
On Fri, Aug 1, 2014 at 2:30 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
 wrote:

 In my latest devstack pull I notice that

 backup_namespace
 restore_namespace

 have moved from the default conf group to per datastore (commit 61935d3).
 However they still appear in the common_opts section of


 trove/common/cfg.py



Correct, they are still there, see
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L177-L182
.

These options should be dropped from the DEFAULT section, or at least
marked as DEPRECATED.



 This seems like an oversight - or is there something I'm missing?


You're not missing anything; you are right. I'd suggest filing a bug
report and fixing the issue.


Best regards,
Denis Makogon


 Cheers

 Mark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Creating database model and agent

2014-08-01 Thread Maciej Nabożny

Hello,
some days ago I asked you about creating an extension and service 
plugin for Neutron. Now I am trying to find out how to create the 
database model for this plugin, and the agent :)


Could you check whether I am understanding these issues properly?

The database model for a new plugin should be created in the neutron/db 
directory. The model classes should inherit from:

1. neutron.db.model_base.BASEV2, which is related to the NeutronBaseV2 class
2. if the model should contain an id or a relation to a tenant, it should 
also inherit HasTenant and HasId from the module neutron.db.models_v2

3. All other fields should be defined according to the sqlalchemy ORM.
I also have a question: how does Neutron (or sqlalchemy) know which 
models/tables should be created in the database? At the moment I cannot 
find any code that initializes the database. The only thing I found is 
declarative.declarative_base in db.model_base
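
[Editorial note: declarative_base returns a Base class whose metaclass
registers every model subclass's table in Base.metadata; neutron's db layer
then effectively calls metadata.create_all(engine) once the plugin's models
have been imported. The registration pattern can be illustrated with a
stdlib-only analogy - this is illustrative only, not neutron or sqlalchemy
code:]

```python
class ModelMeta(type):
    """Collect every declared model, as declarative_base's metaclass does."""
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        tablename = attrs.get('__tablename__')
        if tablename is not None:
            # Mirrors registration into Base.metadata.tables[...].
            cls.metadata[tablename] = cls

class Base(metaclass=ModelMeta):
    metadata = {}  # stands in for sqlalchemy's MetaData object

class MyServicePort(Base):
    # Merely defining the class is enough to register its table.
    __tablename__ = 'my_service_ports'

def create_all(metadata):
    """Walk the registry, as MetaData.create_all walks its tables."""
    return sorted(metadata)  # real code would emit CREATE TABLE per entry
```

So the answer to "how does it know" is: importing the model module is what
registers the table; no explicit list of tables is kept anywhere else.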



And one question about creating new agents: is an agent just a 
thread/process managed by Neutron? I was analysing the lbaas code and it is 
just a python script in /usr/bin/ which executes a class. Do I have to 
create anything additional, as with the database model or service plugin, 
to create a new agent? Or are scripts like LBaaS's enough, plus an init 
script?



I also found a great diagram which describes in a nutshell how Neutron 
service plugins are organised internally:

https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture
Maybe it would be a good idea to put a diagram like this on the official 
dev wiki? Of course after some modifications. Right now the wiki points to 
the SecurityGroups code, which is not very helpful for beginners like me :)


regards!
Maciej

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-08-01 Thread Osanai, Hisashi

I would like to follow this discussion so I picked up points.

- There are two ways to collect info from swift: one is the pollster and 
  the other is notifications. We discussed how to solve the 
  performance degradation of swift_middleware here. 
  pollster:
   - storage.objects
   - storage.objects.size
   - storage.objects.containers
   - storage.containers.objects
   - storage.containers.objects.size
  notification:
   - storage.objects.incoming.bytes
   - storage.objects.outgoing.bytes
   - storage.api.request

- storage.objects.incoming.bytes, storage.objects.outgoing.bytes and 
  storage.api.request are handled with swift_middleware because ceilometer 
  needs to have the info on a per-user and per-tenant basis.
- swift has statsd, but there is no per-user and per-tenant info, 
  because providing it would require keystone-isms in core swift code.
- improve swift_middleware by stopping the 1:1 mapping between API calls 
  and notifications
- swift may handle 10s of thousands of events per second, and this case is 
  fairly unique so far.

I would like to think about this performance problem from the following 
points of view:
- the need to handle 10s of thousands of events per second
- the possibility of losing events (i.e. a swift proxy goes down while 
events are queued in a swift process)

With the notification style there are restrictions on both points. 
Therefore I propose changing the collection of 
storage.objects.incoming.bytes, storage.objects.outgoing.bytes and 
storage.api.request from notification style to pollster style.
Here I hit the problem pointed out by Mr. Merritt: swift would then have a 
dependency on keystone.
But I would prefer to solve that problem rather than the problems of the 
notification style. What do you think?

My rough idea to solve the dependency problem is:
- enable statsd (or a similar function) in swift
- put a middleware in the swift proxy
- this middleware does not communicate with ceilometer at all, but 
  puts a mark for the following middleware or the swift proxy
- store metrics keyed by tenant and user via statsd if the mark is present; 
  store unkeyed metrics via statsd if there is no mark
- ceilometer (the central agent) calls APIs to get the metrics
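
The middleware half of this idea can be sketched as a plain WSGI filter
that only bumps an in-memory per-tenant counter on the request path,
leaving a pollster to read the totals. A stdlib-only illustration - the
header name, counter store, and snapshot API are assumptions, not swift's
actual middleware interface:

```python
import threading
from collections import Counter

class TenantStatsMiddleware:
    """Accumulate per-tenant request counts for a pollster to collect.

    Unlike a notify-per-request middleware, the request path only
    touches an in-memory counter, so there is no AMQP round trip; if
    the proxy dies, only the uncollected window of counts is lost.
    """
    def __init__(self, app):
        self.app = app
        self._lock = threading.Lock()
        self.requests = Counter()

    def __call__(self, environ, start_response):
        # Assumed header, set by the auth middleware earlier in the pipeline.
        tenant = environ.get('HTTP_X_TENANT_ID', 'unknown')
        with self._lock:
            self.requests[tenant] += 1
        return self.app(environ, start_response)

    def snapshot(self):
        """What a central-agent pollster would fetch periodically."""
        with self._lock:
            stats = dict(self.requests)
            self.requests.clear()
        return stats
```

The trade-off is exactly the one described above: a proxy crash loses the
counts accumulated since the last snapshot, rather than individual queued
notifications.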

Is there any way to solve the dependency problem?

Best Regards,
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] plan for moving to using oslo.db

2014-08-01 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 13/05/14 17:55, Matt Riedemann wrote:
 
 
 On 5/13/2014 7:35 AM, Doug Hellmann wrote:
 On Mon, May 12, 2014 at 3:25 PM, Roman Podoliaka 
 rpodoly...@mirantis.com wrote:
 Hi all,
 
 Yes, once the oslo.db initial release is cut, we expect the
 migration from using of its oslo-incubator version to a library
 one to be as simple as following the steps you've mentioned.
 Though, we still need to finish the setup of oslo.db repo
 (AFAIK, this is currently blocked by the fact we don't run gate
 tests for oslo.db patches. Doug, Victor, please correct me, if
 I'm wrong).
 
 Yes, we need to work out the best way to test pre-releases of
 the libraries with apps before we have anything depending on
 those libraries so we can avoid breaking anything in a way that
 is hard to find or fix. We have a summit session scheduled for
 Thursday morning [1].
 
 Doug
 
 1 - 
 http://junodesignsummit.sched.org/event/4f92763857fbe0686fe0436fecae8fbc



 
Thanks,
 Roman
 
 On Mon, May 5, 2014 at 7:47 AM, Matt Riedemann 
 mrie...@linux.vnet.ibm.com wrote:
 Just wanted to get some thoughts down while they are in my
 head this morning.
 
 Oslo DB is now a library [1].  I'm trying to figure out what
 the steps are to getting Nova to using that so we can rip out
 the sync'ed common db code.
 
 1. Looks like it's not in global-requirements yet [2], so
 that's probably a first step.
 
 2. We'll want to cut a sqlalchemy-migrate release once this
 patch is merged [3]. This moves a decent chunk of unique
 constraint patch code out of oslo and into sqlalchemy-migrate
 where it belongs so we can run unit tests with sqlite to drop
 unique constraints.
 
 3. Rip this [4] out of oslo.db once migrate is updated and
 released.
 
 4. Replace nova.openstack.common.db with oslo.db.
 
 5. ???
 
 6. Profit!
 
 Did I miss anything?
 
 [1] http://git.openstack.org/cgit/openstack/oslo.db/ [2] 
 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt


 
[3] https://review.openstack.org/#/c/87773/
 [4] https://review.openstack.org/#/c/31016/
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
___
 OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
___
 OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 Just an update on progress, step 2 is complete, the review for step
 3 is here:
 
 https://review.openstack.org/#/c/92393/
 

Any updates? I'm interested in making nova use oslo.db; this is
needed for the ongoing effort to make openstack services mysql-connector
aware [we have some code for this in oslo.db but not in oslo-incubator].

Cheers,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCgAGBQJT24EPAAoJEC5aWaUY1u57LGcIANJAGPVk61sIyY77Me1Xn3Sx
Z/lx6C4/zdXi6PmBUgVMzdnh/r7cjw0Twe7B1D495qDD0rA/OwrP229WaM4wHBnz
bqtih+Bl9rwP6Dij57O6D9cHmcW1gmAxEiqX6iva9RomIWMyB8cAJoEsnD95Dw+q
3PIeZyikDy42p1BTnVlXRLNJhC9RaZyw8wVQ9aUVe6ydYkerbGBALQTOTirlUa8y
cq1k3GrKczB53zFpZT1i07wIP4cl9J0xXWcQTjH5cA4Sw/5kxGplT18DHX2/rrs9
YGSYIlpu1UDKyofVB69DMu6/Ryov3bBEf91NGZ3wJwqkdNu7f9qzFOM4mrT7Qq8=
=LYQX
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] V1 Client improvements

2014-08-01 Thread Denis Makogon
Hello, Stackers.

I’d like to raise a question about a list of API calls that are
implemented on the Trove side but not used as part of the V1 client.

Ignored V1 Client APIs:

https://github.com/openstack/python-troveclient/blob/master/troveclient/v1/client.py#L72-L79

The problem is that the listed API endpoints are available in
Trove but can’t be used via the V1 client (see the link above); they
can be used only via the compat client.

So, as I see it, we have two options:

   - Clean up the V1 client: remove the ignored APIs and develop plans for
   V2

   - Make the ignored APIs available through the V1 client (this will take
   certain effort to add CLI representations of these calls; likewise on
   the Trove side we will need to add integration tests to verify that the
   given endpoints are accessible via the V1 client)


Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes July 31

2014-08-01 Thread Sergey Lukjanov
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-31-17.59.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-31-17.59.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread boden

On 8/1/2014 4:37 AM, Eoghan Glynn wrote:




Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
between VM and the central agent(or collector). But in the cloud, some
networks are isolated from infrastructure layer network, because of security
reasons. Some of our customers even explicitly require such security
protection. Does it mean those isolated VMs cannot be monitored by this
proposed-VM-agent?


Yes, that sounds plausible to me.


My understanding is that this VM agent for ceilometer would need 
connectivity to nova API as well as to the AMQP broker. IMHO the 
infrastructure requirements from a network topology POV will differ from 
provider to provider and based on customer reqs / env.




Cheers,
Eoghan


I really wish we could figure out how it could work for all VMs with no
security issues.

I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


Best regards!
Kurt

-----Original Message-----
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: 1 August 2014 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
主题: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
potential enhancement




Disclaimer: I'm not fully vested on ceilometer internals, so bear with me.

For consumers wanting to leverage ceilometer as a telemetry service
atop non-OpenStack Clouds or infrastructure they don't own, some edge
cases crop up. Most notably the consumer may not have access to the
hypervisor host and therefore cannot leverage the ceilometer compute
agent on a per host basis.


Yes, currently such access to the hypervisor host is required, least in the
case of the libvirt-based inspector.


In such scenarios it's my understanding the main option is to employ
the central agent to poll measurements from the monitored resources
(VMs, etc.).


Well, the ceilometer central agent is not generally concerned with with
polling related *directly* to VMs - rather it handles acquiring data from
RESTful API (glance, neutron etc.) that are not otherwise available in the
form of notifications, and also from host-level interfaces such as SNMP.



Thanks for the additional clarity. Perhaps this proposed local VM agent 
fills additional use cases where ceilometer is being used without 
openstack proper (e.g. not a full set of openstack-compliant services 
like neutron, glance, etc.).



However this approach requires Cloud APIs (or other mechanisms) which
allow the polling impl to obtain the desired measurements (VM memory,
CPU, net stats, etc.) and moreover the polling approach has it's own
set of pros / cons from a arch / topology perspective.


Indeed.


The other potential option is to setup the ceilometer compute agent
within the VM and have each VM publish measurements to the collector
-- a local VM agent / inspector if you will. With respect to this
local VM agent approach:
(a) I haven't seen this documented to date; is there any desire / reqs
to support this topology?
(b) If yes to #a, I whipped up a crude PoC here:
http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for
this approach?


So in a sense this is similar to the Heat cfn-push-stats utility[1] and
seems to suffer from the same fundamental problem, i.e. the need for
injection of credentials (user/passwds, keys, whatever) into the VM in order
to allow the metric datapoints be pushed up to the infrastructure layer
(e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?



My initial approximation would be to target use cases where end users do 
not have direct guest access or have limited guest access such that 
their UID / GID cannot access the conf file. For example instances which 
only provide app access provisioned using heat SoftwareDeployments 
(http://tinyurl.com/qxmh2of) or trove database instances.


In general I don't see this approach from a security POV much different 
than whats done with the trove guest agent (http://tinyurl.com/ohvtmtz).


Longer term perhaps credentials could be mitigated using Barbican as 
suggested here: https://bugs.launchpad.net/nova/+bug/1158328
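
For the in-guest half of such an agent, the collection side can stay tiny
and credential-free; only the publish step needs the credentials discussed
above. A stdlib-only sketch of gathering guest memory counters - the metric
names and the /proc/meminfo parsing are illustrative assumptions, not
ceilometer code:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:   123 kB' lines into {key: kB}."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(':')
        fields = value.split()
        if sep and fields and fields[0].isdigit():
            stats[key.strip()] = int(fields[0])
    return stats

def memory_sample(meminfo):
    """Shape a couple of ceilometer-like datapoints from the counters."""
    return {
        'memory.total.kb': meminfo.get('MemTotal'),
        'memory.free.kb': meminfo.get('MemFree'),
    }
```

In a real guest agent the text would come from open('/proc/meminfo').read()
and the sample would be pushed over whatever transport the deployer trusts
(a REST endpoint, the AMQP bus, etc.), which is where the credentialing
question above comes in.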



Cheers,
Eoghan

[1]
https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Ken Giusti
On Wed, 30 Jul 2014 15:04:41 -0700, Matt Riedemann wrote:
On 7/30/2014 11:59 AM, Ken Giusti wrote:
 On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:
 On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
 Greetings,
snip
 At this point, there are no integration tests that exercise the
 driver.  However, the new unit tests include a test 'broker', which
 allow the unit tests to fully exercise the new driver, right down to
 the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
 messaging.

 So its the new unit tests that have the 'hard' requirement of the
 proton libraries.And mocking-out the proton libraries really
 doesn't allow us to do any meaningful tests of the driver.

snip

If your unit tests are dependent on a specific dependent library aren't
they no longer unit tests but integration tests anyway?


Good point - yes, they are certainly more than just unit tests.  I'd
consider them more functional tests than integration tests, tho:
they only test from the new driver API down to the wire (and back up
again via the fake loopback broker).  For integration testing, I'd
want to put a real broker in there, and run real subprojects over
oslo.messaging using the new driver (neutron, etc).

I'd really like to avoid the classic unit test approach of mocking out
the underlying messaging client api if possible.  Even though that
would avoid the dependency, I think it could result in the same issues
we've had with the existing impl_qpid tests passing in mock, but
failing when run against qpidd.

Just wondering, not trying to put up road-blocks because I'd like to see
how this code performs but haven't had time yet to play with it.


np, a good question, thanks!  When you do get a chance to kick the tires,
feel free to ping me with any questions/issues you have.  Thanks!

--

Thanks,

Matt Riedemann
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Kevin Benton
It seems like this is precisely what the functional test setup was designed
to handle. Is there a reason you don't want to run them as functional tests
instead of unit tests?

As functional tests, nobody would need new prereqs just to make it through
unit tests, and anyone who wants to do the full tests can install them and
run 'tox -e functional'.

This is how neutron is starting to test the behavior of OVS and it seems to
work well.
On Aug 1, 2014 6:01 AM, Ken Giusti kgiu...@gmail.com wrote:


 On Wed, 30 Jul 2014 15:04:41 -0700, Matt Riedemann wrote:
 On 7/30/2014 11:59 AM, Ken Giusti wrote:
  On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:
  On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:
  Greetings,
 snip
  At this point, there are no integration tests that exercise the
  driver.  However, the new unit tests include a test 'broker', which
  allow the unit tests to fully exercise the new driver, right down to
  the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
  messaging.
 
  So its the new unit tests that have the 'hard' requirement of the
  proton libraries.And mocking-out the proton libraries really
  doesn't allow us to do any meaningful tests of the driver.
 
 snip
 
 If your unit tests are dependent on a specific dependent library aren't
 they no longer unit tests but integration tests anyway?
 

 Good point - yes, they are certainly more than just unit tests.  I'd
 consider them more functional tests than integration tests, tho:
 they only test from the new driver API down to the wire (and back up
 again via the fake loopback broker).  For integration testing, I'd
 want to put a real broker in there, and run real subprojects over
 oslo.messaging using the new driver (neutron, etc).

 I'd really like to avoid the classic unit test approach of mocking out
 the underlying messaging client api if possible.  Even though that
 would avoid the dependency, I think it could result in the same issues
 we've had with the existing impl_qpid tests passing in mock, but
 failing when run against qpidd.

 Just wondering, not trying to put up road-blocks because I'd like to see
 how this code performs but haven't had time yet to play with it.
 

 np, a good question, thanks!  When you do get a chance to kick the tires,
 feel free to ping me with any questions/issues you have.  Thanks!

 --
 
 Thanks,
 
 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Ken Giusti
On Wed, 30 Jul 2014 22:14:51 +, Jeremy Stanley wrote:
On 2014-07-30 14:59:09 -0400 (-0400), Ken Giusti wrote:
 Thanks Daniel.  It was my understanding - which may be wrong - that
 having devstack install the 'out of band' packages would only help in
 the case of the devstack-based integration tests, not in the case of
 CI running the unit tests.  Is that indeed the case?
[...]
 I'm open to any thoughts on how best to solve this, thanks.

Since they're in EPEL and we run Python 2.6 unit tests today on
CentOS 6 servers, if the proton libraries install successfully there
perhaps we could opportunistically exercise it only under Python 2.6
for now? Not ideal, but it does get it enforced upstream with
minimal fuss. I'd really rather not start accumulating arbitrary PPA
sources on our job workers... I know we've done it for a handful of
multi-project efforts where we needed select backports from non-LTS
releases, but we've so far limited that to only PPAs maintained by
the same package teams as the mainline distro packages themselves.


Yeah, it's becoming pretty clear that adding this PPA to infra is not
The Right Thing To Do.  How does this sound as an alternative:

1) _for_ _now_, make the dependent unit tests optional for
oslo.messaging.  Specifically, by default tox will not run them, but
I'll add a new testenv that adds a requirement for the dependent
packages and runs all the unit tests (default tests + new amqp1.0
tests).  Eg, do 'tox -e amqp1' to pull in the python packages that
require the libraries, and run all unit tests.  This allows those
developers that have installed the proton libraries to run the tests,
and avoid making life hard for those devs who don't have the libraries
installed.
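As a rough illustration, the optional testenv could look like this in
tox.ini (a sketch only: the env name comes from the proposal above, and the
extra dependency shown is an assumption about which AMQP 1.0 binding is
required):

```ini
[testenv:amqp1]
# pulls in the python binding that needs the proton libraries installed
deps = {[testenv]deps}
       pyngus
commands = python setup.py testr --slowest --testr-args='{posargs}'
```

Developers with the proton libraries installed would then run
'tox -e amqp1'; everyone else keeps the default envs.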

2) Propose a new optional configuration flag in devstack that enables
the AMQP 1.0 messaging protocol (default off).  Something like
$RPC_MESSAGING_PROTOCOL == AMQP1.  When this is set in the devstack
config, rpc_backend will install the AMQP 1.0 libraries, adding the
Qpid PPA in the case of ubuntu for now.
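For illustration, opting in would then be a one-line local.conf change
(the flag name is taken from the proposal above; it is not an existing
devstack option):

```ini
[[local|localrc]]
# hypothetical flag from the proposal -- not an existing devstack option
RPC_MESSAGING_PROTOCOL=AMQP1
```

With that set, rpc_backend would take care of installing the AMQP 1.0
libraries, pulling in the Qpid PPA on Ubuntu until the packages land in
the main archives.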

3) Create a non-voting oslo.messaging gate test [0] that merely
runs the 'tox -e amqp1' tests.  Of course, additional integration
tests are a Good Thing, but at the very least we should start with
this. This would give us a heads up should new patches break the amqp
1.0 driver.  This test could eventually become gating once the driver
matures and the packages find their way into all the proper repos.

4) Profit (couldn't resist :)

Opinions?

[0] I honestly have no idea how to do this, or if it's even feasible
btw - I've never written a gating test before.  I'd appreciate any
pointers to get me started, thanks!


Longer term, I'd suggest getting it sponsored into Debian
unstable/testing ASAP, interesting the Ubuntu OpenStack team in
importing it into the development tree for the next Ubuntu release,
and then incorporating it into the Trusty Ubuntu Cloud Archive.
We're not using UCA yet, but on Trusty we probably should consider
adding it sooner rather than later since when we tried to tack on
the Precise UCA in the last couple cycles we had too many headaches
from trying to jump ahead substantially on fundamental bits like
libvirt. Breaking sooner and more often means those incremental
issues are easier to identify and address, usually.

Ah - I didn't know that, thanks!  I know one of the Qpid devs is
currently engaged in getting these packages into Debian.  I'll reach
out to him and see if he can work on getting it into UCA next.

Thanks again - very valuable info!

--
Jeremy Stanley


-- 
Ken Giusti  (kgiu...@gmail.com)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-08-01 Thread Flavio Percoco
On 08/01/2014 03:29 PM, Ken Giusti wrote:
 On Wed, 30 Jul 2014 22:14:51 +, Jeremy Stanley wrote:
On 2014-07-30 14:59:09 -0400 (-0400), Ken Giusti wrote:
 Thanks Daniel.  It was my understanding - which may be wrong - that
 having devstack install the 'out of band' packages would only help in
 the case of the devstack-based integration tests, not in the case of
 CI running the unit tests.  Is that indeed the case?
[...]
 I'm open to any thoughts on how best to solve this, thanks.

Since they're in EPEL and we run Python 2.6 unit tests today on
CentOS 6 servers, if the proton libraries install successfully there
perhaps we could opportunistically exercise it only under Python 2.6
for now? Not ideal, but it does get it enforced upstream with
minimal fuss. I'd really rather not start accumulating arbitrary PPA
sources on our job workers... I know we've done it for a handful of
multi-project efforts where we needed select backports from non-LTS
releases, but we've so far limited that to only PPAs maintained by
the same package teams as the mainline distro packages themselves.

 
 Yeah, it's becoming pretty clear that adding this PPA to infra is not
 The Right Thing To Do.  How does this sound as an alternative:
 
 1) _for_ _now_, make the dependent unit tests optional for
 oslo.messaging.  Specifically, by default tox will not run them, but
 I'll add a new testenv that adds a requirement for the dependent
 packages and runs all the unit tests (default tests + new amqp1.0
 tests).  Eg, do 'tox -e amqp1' to pull in the python packages that
 require the libraries, and run all unit tests.  This allows those
 developers that have installed the proton libraries to run the tests,
 and avoid making life hard for those devs who don't have the libraries
 installed.
 
 2) Propose a new optional configuration flag in devstack that enables
 the AMQP 1.0 messaging protocol (default off).  Something like
 $RPC_MESSAGING_PROTOCOL == AMQP1.  When this is set in the devstack
 config, rpc_backend will install the AMQP 1.0 libraries, adding the
 Qpid PPA in the case of ubuntu for now.
 
 3) Create a non-voting oslo.messaging gate test [0] that merely
 runs the 'tox -e amqp1' tests.  Of course, additional integration
 tests are a Good Thing, but at the very least we should start with
 this. This would give us a heads up should new patches break the amqp
 1.0 driver.  This test could eventually become gating once the driver
 matures and the packages find their way into all the proper repos.
 
 4) Profit (couldn't resist :)


+1 As long as we get the tests running, I'm happy. This sounds like
something more acceptable for the infrastructure - at least based on the
discussions on this thread. The plan sounds good to me.

I think it's also possible to run amqp10 gate *just* for the changes
happening in the *amqp* package but it's probably worth it to just make
it non-voting and run it for every patch, as you mentioned.


Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi][Zaqar] Adding Victoria Martínez de la Cruz (vkmc) to Zaqar's core team

2014-08-01 Thread Flavio Percoco
Greetings,

I'd like to propose adding Victoria Martínez de la Cruz (vkmc) to
zaqar's core team. Victoria has been part of the community for several
months now. During this time, she's been contributing to many different
areas:

- Docs
- Code
- Community Support
- Research (amazing work on the amqp side)

She's a great asset of Zaqar's community.

If no one objects, I'll proceed and add her in a week from now.

Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][Zaqar] Adding Victoria Martínez de la Cruz (vkmc) to Zaqar's core team

2014-08-01 Thread Malini Kamalambal
A HUGE +1 !



On 8/1/14 10:11 AM, Flavio Percoco fla...@redhat.com wrote:

Greetings,

I'd like to propose adding Victoria Martínez de la Cruz (vkmc) to
zaqar's core team. Victoria has been part of the community for several
months now. During this time, she's been contributing to many different
areas:

- Docs
- Code
- Community Support
- Research (amazing work on the amqp side)

She's a great asset of Zaqar's community.

If no one objects, I'll proceed and add her in a week from now.

Flavio

-- 
@flaper87
Flavio Percoco



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] plan for moving to using oslo.db

2014-08-01 Thread Andrey Kurilin

 Any updates?


Eugeniya Kudryashova is working hard on using oslo.db in nova.
Please, look at https://review.openstack.org/#/c/101901/


On Fri, Aug 1, 2014 at 2:59 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 13/05/14 17:55, Matt Riedemann wrote:
 
 
  On 5/13/2014 7:35 AM, Doug Hellmann wrote:
  On Mon, May 12, 2014 at 3:25 PM, Roman Podoliaka
  rpodoly...@mirantis.com wrote:
  Hi all,
 
  Yes, once the oslo.db initial release is cut, we expect the
  migration from using of its oslo-incubator version to a library
  one to be as simple as following the steps you've mentioned.
  Though, we still need to finish the setup of oslo.db repo
  (AFAIK, this is currently blocked by the fact we don't run gate
  tests for oslo.db patches. Doug, Victor, please correct me, if
  I'm wrong).
 
  Yes, we need to work out the best way to test pre-releases of
  the libraries with apps before we have anything depending on
  those libraries so we can avoid breaking anything in a way that
  is hard to find or fix. We have a summit session scheduled for
  Thursday morning [1].
 
  Doug
 
  1 -
 
 http://junodesignsummit.sched.org/event/4f92763857fbe0686fe0436fecae8fbc
 
 
 
 
 Thanks,
  Roman
 
  On Mon, May 5, 2014 at 7:47 AM, Matt Riedemann
  mrie...@linux.vnet.ibm.com wrote:
  Just wanted to get some thoughts down while they are in my
  head this morning.
 
  Oslo DB is now a library [1].  I'm trying to figure out what
  the steps are to getting Nova to using that so we can rip out
  the sync'ed common db code.
 
  1. Looks like it's not in global-requirements yet [2], so
  that's probably a first step.
 
  2. We'll want to cut a sqlalchemy-migrate release once this
  patch is merged [3]. This moves a decent chunk of unique
  constraint patch code out of oslo and into sqlalchemy-migrate
  where it belongs so we can run unit tests with sqlite to drop
  unique constraints.
 
  3. Rip this [4] out of oslo.db once migrate is updated and
  released.
 
  4. Replace nova.openstack.common.db with oslo.db.
 
  5. ???
 
  6. Profit!
 
  Did I miss anything?
 
  [1] http://git.openstack.org/cgit/openstack/oslo.db/ [2]
 
 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt
 
 
 
 [3] https://review.openstack.org/#/c/87773/
  [4] https://review.openstack.org/#/c/31016/
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 
  ___ OpenStack-dev
  mailing list OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
  Just an update on progress, step 2 is complete, the review for step
  3 is here:
 
  https://review.openstack.org/#/c/92393/
 

 Any updates? I'm interested in making nova use oslo.db; this is
 needed for the ongoing effort to make openstack services mysql-connector
 aware [we have some code for this in oslo.db but not in oslo-incubator].

 Cheers,
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

 iQEcBAEBCgAGBQJT24EPAAoJEC5aWaUY1u57LGcIANJAGPVk61sIyY77Me1Xn3Sx
 Z/lx6C4/zdXi6PmBUgVMzdnh/r7cjw0Twe7B1D495qDD0rA/OwrP229WaM4wHBnz
 bqtih+Bl9rwP6Dij57O6D9cHmcW1gmAxEiqX6iva9RomIWMyB8cAJoEsnD95Dw+q
 3PIeZyikDy42p1BTnVlXRLNJhC9RaZyw8wVQ9aUVe6ydYkerbGBALQTOTirlUa8y
 cq1k3GrKczB53zFpZT1i07wIP4cl9J0xXWcQTjH5cA4Sw/5kxGplT18DHX2/rrs9
 YGSYIlpu1UDKyofVB69DMu6/Ryov3bBEf91NGZ3wJwqkdNu7f9qzFOM4mrT7Qq8=
 =LYQX
 -END PGP SIGNATURE-

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Status of A/A HA for neutron-metadata-agent?

2014-08-01 Thread mar...@redhat.com
Hi all,

I have been asked by a colleague about the status of A/A HA for
neutron-* processes. From the 'HA guide' [1], l3-agent and
metadata-agent are the only neutron components that can't be deployed in
A/A HA (corosync/pacemaker for a/p is documented as available 'out of
the box' for both).

The l3-agent work is approved for J3 [4], but I am unaware of any work on
the metadata-agent and can't see any mention of it in [2][3]. Is this
something someone has looked at, or is planning to look at (though
ultimately K would be the earliest, right)?

thanks! marios

[1] http://docs.openstack.org/high-availability-guide/content/index.html
[2] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
[3] https://launchpad.net/neutron/+milestone/juno-3
[4]
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/l3-high-availability.rst

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Networks Generating In Fuel

2014-08-01 Thread Aleksey Kasatkin
Hi!

Request to assign vips is here:
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py#L376

Most requests to assign other ips are here:
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/objects/node.py#L679-684

You can track assign_ips() and assign_vip() usage to see all calls to them.
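For intuition only, here is a tiny self-contained toy (not Fuel code) of
the kind of allocation those helpers ultimately perform: handing out the
next free address from a network group's IP range. The function name and
CIDR are made up for illustration.

```python
import ipaddress


def assign_next_ip(cidr, used):
    """Toy allocator: return the first host address in `cidr` not in `used`.

    `used` is a set of already-assigned addresses (as strings); the chosen
    address is recorded in it so repeated calls walk through the range.
    """
    for host in ipaddress.ip_network(cidr).hosts():
        addr = str(host)
        if addr not in used:
            used.add(addr)
            return addr
    raise RuntimeError('IP range %s exhausted' % cidr)
```

The real Nailgun code additionally persists assignments in the database
and reserves VIPs separately from per-node IPs.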


Aleksey Kasatkin




 -- Forwarded message --
 From: zh...@certusnet.com.cn zh...@certusnet.com.cn
 Date: Fri, Aug 1, 2014 at 10:36 AM
 Subject: [openstack-dev] Networks Generating In Fuel
 To: openstack-dev openstack-dev@lists.openstack.org


 Hi! I'm now developing in Fuel and I want to add a virtual ip address to
 the NIC of br-ex with the same ip range of public address. So I want to
 know where IPs of node's such as br-ex, br-storage are generated. Can you
 tell me ?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][Zaqar] Adding Victoria Martínez de la Cruz (vkmc) to Zaqar's core team

2014-08-01 Thread Sriram Madapusi Vasudevan
+1 for sure! :D
Sriram Madapusi Vasudevan



On Aug 1, 2014, at 10:21 AM, Malini Kamalambal 
malini.kamalam...@rackspace.com wrote:

 A HUGE +1 !
 
 
 
 On 8/1/14 10:11 AM, Flavio Percoco fla...@redhat.com wrote:
 
 Greetings,
 
 I'd like to propose adding Victoria Martínez de la Cruz (vkmc) to
 zaqar's core team. Victoria has been part of the community for several
 months now. During this time, she's been contributing to many different
 areas:
 
 - Docs
 - Code
 - Community Support
 - Research (amazing work on the amqp side)
 
 She's a great asset of Zaqar's community.
 
 If no one objects, I'll proceed and add her in a week from now.
 
 Flavio
 
 -- 
 @flaper87
 Flavio Percoco
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-01 Thread Pádraig Brady
+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] DevStack program change

2014-08-01 Thread Dean Troyer
I propose we de-program DevStack and consolidate it into the QA program.
Some of my concerns about doing this in the beginning have proven to be a
non-issue in practice.  Also, I believe a program's focus can and should be
wider than we have implemented up to now, and this is a step toward
consolidating narrowly defined programs.

I read the QA mission statement to already include DevStack's purpose so no
change should be required there.  I'll propose the governance changes
following a few days of discussion.

This is purely a program-level change, I do not anticipate changes to the
DevStack project itself.

dt
(soon-to-be-former?) DevStack PTL

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Cinder tempest api volume tests failed

2014-08-01 Thread Matt Riedemann



On 8/1/2014 4:16 AM, Nikesh Kumar Mahalka wrote:

Hi Mike, the test that fails for me is:
tempest.api.volume.admin.test_volume_types.VolumeTypesTest

I am getting an error in the following call within that test:
  self.volumes_client.wait_for_volume_status(volume['id'], 'available')

The call is made in this test method:
  @test.attr(type='smoke')
  def test_create_get_delete_volume_with_volume_type_and_extra_specs(self)


Looking at the c-sch log, I found this major issue:
2014-08-01 14:08:05.773 11853 ERROR cinder.scheduler.flows.create_volume
[req-ceafd00c-30b1-4846-a555-6116556efb3b
43af88811b2243238d3d9fc732731565 a39922e8e5284729b07fcd045cfd5a88 - - -]
Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
No valid host was found. No weighed hosts available

By analyzing the test I found that it:
1) creates a volume type with extra_specs
2) creates a volume with that volume type, and this is where it fails.


Below is my new local.conf file. Am I missing anything in it?

[[local|localrc]]
ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]
[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip=192.168.2.192
san_login=some_name
san_password=some_password
client_iscsi_ips = 192.168.2.193


Below is my cinder.conf:
[keystone_authtoken]
auth_uri = http://192.168.2.64:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = some_password
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.64:35357

[DEFAULT]
rabbit_password = some_password
rabbit_hosts = 192.168.2.64
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
default_volume_type = client_driver
enabled_backends = client_driver
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:some_password@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.64
verbose = True
debug = True
auth_strategy = keystone

[client_driver]
client_iscsi_ips = 192.168.2.193
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver =
cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
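A hedged guess about the "No valid host was found" error above: the
scheduler's filters match the volume type's extra specs against the
capabilities the driver reports via get_volume_stats(), so the usual causes
are a backend that is down (check `cinder service-list`) or reported
capabilities (e.g. vendor_name, storage_protocol, volume_backend_name)
that don't match what the test sets. If the type carries a
volume_backend_name extra spec, the backend section needs to advertise a
matching name; a typical cinder.conf addition looks like this (the name is
illustrative):

```ini
[client_driver]
# advertise a backend name that a volume type's extra spec can match:
#   cinder type-key <volume-type> set volume_backend_name=client_driver
volume_backend_name = client_driver
```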



Regards
Nikesh









On Fri, Aug 1, 2014 at 1:56 AM, Mike Perez thin...@gmail.com wrote:

On 11:30 Thu 31 Jul , Nikesh Kumar Mahalka wrote:
  I deployed a single node devstack on Ubuntu 14.04.
  This devstack belongs to Juno.
 
  When i am running tempest api volume test, i am getting some
tests failed.

Hi Nikesh,

To further figure out what's wrong, take a look at the c-vol, c-api
and c-sch
tabs in the stack screen session. If you're unsure where to go from
there after
looking at the output, set the `SCREEN_LOGDIR` setting in your
local.conf [1]
and copy the logs from those tabs to paste.openstack.org for us to see.

[1] - http://devstack.org/configuration.html

--
Mike Perez

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
mailto:openst...@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Can you please start tagging your threads on an out-of-tree cinder 
driver with [cinder] in the subject line so this gets filtered into the 
cinder channel at least.


Generally when people come to the openstack-dev list asking for help 
with a deployment they get sent to ask.openstack.org or the general 
openstack mailing list.


This sort of falls in between since it sounds like you're doing 
development on a new driver and trying to get tempest working, but if 
this is going to be an openstack-dev list discussion, please isolate it 
to [cinder], or go to the #openstack-cinder channel in IRC.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing 

Re: [openstack-dev] how and which tempest tests to run

2014-08-01 Thread Matt Riedemann



On 8/1/2014 3:32 AM, Nikesh Kumar Mahalka wrote:

I deployed a single-node devstack on Ubuntu 14.04.
This devstack tracks Juno.
I have written a cinder-volume driver for my client backend.
I want to contribute this driver in the Juno release.
From my reading of the contribution process, I need to run tempest
tests for Continuous Integration.

Could anyone tell me how, and which, tempest tests to run on this
devstack deployment for a cinder volume driver?
Also, tempest has many test cases. Do I have to pass all of them to
contribute my driver?

Also, am I missing anything in the local.conf below?

Below are the steps for my devstack deployment:

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please tag your cinder-specific driver test questions with [cinder] so 
these threads are filtered appropriately in people's mail clients.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-01 Thread Mitsuhiro Tanino
+1

Regards,
Mitsuhiro Tanino mitsuhiro.tan...@hds.com
 HITACHI DATA SYSTEMS
 c/o Red Hat, 314 Littleton Road, Westford, MA 01886


 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Thursday, July 31, 2014 7:58 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core
 
 On 07/30/2014 05:10 PM, Russell Bryant wrote:
  On 07/30/2014 05:02 PM, Michael Still wrote:
  Greetings,
 
  I would like to nominate Jay Pipes for the nova-core team.
 
  Jay has been involved with nova for a long time now.  He's previously
  been a nova core, as well as a glance core (and PTL). He's been
  around so long that there are probably other types of core status I
  have missed.
 
  Please respond with +1s or any concerns.
 
  +1
 
 
 Further, I'd like to propose that we treat all of existing +1 reviews as
 +2 (once he's officially added to the team).  Does anyone have a problem
 with doing that?  I think some folks would have done that anyway, but I 
 wanted to clarify that
 it's OK.
 
 --
 Russell Bryant
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] a small experiment with Ansible in TripleO

2014-08-01 Thread Allison Randal
A few of us have been independently experimenting with Ansible as a
backend for TripleO, and have just decided to try experimenting
together. I've chatted with Robert, and he says that TripleO was always
intended to have pluggable backends (CM layer), and just never had
anyone interested in working on them. (I see it now, even in the early
docs and talks, I guess I just couldn't see the forest for the trees.)
So, the work is in line with the overall goals of the TripleO project.

We're starting with a tiny scope, focused only on updating a running
TripleO deployment, so our first work is in:

- Create an Ansible Dynamic Inventory plugin to extract metadata from Heat
- Improve/extend the Ansible nova_compute Cloud Module (or create a new
one), for Nova rebuild
- Develop a minimal handoff from Heat to Ansible, particularly focused
on the interactions between os-collect-config and Ansible
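For the first item, the shape of an Ansible dynamic inventory source is
just an executable that prints JSON for --list. Here is a minimal sketch
with hard-coded data standing in for the Heat metadata query; the group
names, addresses, and hostvars are made up for illustration.

```python
#!/usr/bin/env python
# Minimal Ansible dynamic-inventory shape: emit a JSON inventory on --list.
# A real plugin would build this dict by querying Heat stack metadata.
import json
import sys


def build_inventory():
    # Group name -> hosts, plus per-host variables under "_meta".
    return {
        'controller': {'hosts': ['192.0.2.10']},
        'compute': {'hosts': ['192.0.2.20', '192.0.2.21']},
        '_meta': {'hostvars': {'192.0.2.10': {'role': 'controller'}}},
    }


if __name__ == '__main__':
    # Ansible invokes the script with --list (all hosts) or --host <name>.
    if '--list' in sys.argv:
        print(json.dumps(build_inventory()))
    else:
        print(json.dumps({}))
```

Swapping the hard-coded dict for a Heat API call is essentially what the
planned inventory plugin needs to do.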

We're merging our work in this repo, until we figure out where it should
live:

https://github.com/allisonrandal/tripleo-ansible

We've set ourselves one week as the first sanity-check to see whether
this idea is going anywhere, and we may scrap it all at that point. But,
it seems best to be totally transparent about the idea from the start,
so no-one is surprised later.

Cheers,
Allison

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-08-01 Thread Carl Baldwin
Armando's point #2 is a good one.  I see that we should have raised
awareness of this more than we did.  The bulk of the discussion and
the development work moved over to the oslo team and I focused energy
on other things.  What I didn't realize was that the importance of
this work to Neutron did not transfer along with it and that simply
delivering the new functionality in oslo by Juno was not sufficient as
Neutron would need time to incorporate it.

I am at a point now where I have some time to work on this.  If
reconsideration for Juno is still an option at this time then I think
what we need to do is to resolve the concerns that are still
outstanding.  I'll admit that I really don't understand what the
concerns are.  I believe that the security concerns have been
addressed.  If you still have concerns around the design of this
feature please bring them up specifically.

Thanks,
Carl

On Thu, Jul 31, 2014 at 4:54 PM, Armando M. arma...@gmail.com wrote:
 It is not my intention to debate, point fingers, or find culprits;
 these issues can be addressed in some other context.

 I am gonna say three things:

 1) If a core-reviewer puts a -2, there must be a good reason for it. If
 other reviewers blindly move on as some people seem to imply here, then
 those reviewers should probably not review the code at all! My policy is to
 review all the code I am interested in/I can, regardless of the score. My -1
 may be someone's +1 (or vice versa), so 'trusting' someone else's vote is
 the wrong way to go about this.

 2) If we all feel that this feature is important (which I am not sure it was
 being marked as 'low' in oslo, not sure how it was tracked in Neutron),
 there is the weekly IRC Neutron meeting to raise awareness, since all cores
 participate; to the best of my knowledge we never spoke (or barely) of the
 rootwrap work.

 3) If people do want this work in Juno (Carl being one of them), we can
 figure out how to make one final push, and assess potential regression. We
 'rushed' other features late in cycle in the past (like nova/neutron event
 notifications) and if we keep this disabled by default in Juno, I don't
 think it's really that risky. I can work with Carl to give the patches some
 more love.

 Armando



 On 31 July 2014 15:40, Rudra Rugge ru...@contrailsystems.com wrote:

 Hi Kyle,

 I also agree with Mandeep's suggestion of putting a time frame on the
 lingering -2 if the addressed concerns have been taken care of. In my
 experience also a sticky -2 detracts other reviewers from reviewing an
 updated patch.

 Either a time-frame or a possible override by PTL (move to -1) would help
 make progress on the review.

 Regards,
 Rudra


 On Thu, Jul 31, 2014 at 2:29 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:

 Hi Kyle:

 As -2 is sticky, and as there is a possibility that the original core
 might not get time to get back to re-reviewing this, do you think that there
 should be clearer guidelines on its usage (to avoid what you identified as
 dropping of the balls)?

 Salvatore had a good guidance in a related thread [0], do you agree with
 something like that?

 I try to avoid -2s as much as possible. I put a -2 only when I reckon
 your
 patch should never be merged because it'll make the software unstable or
 tries to solve a problem that does not exist. -2s stick across patches
 and
 tend to put off other reviewers.

 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041339.html


 Or do you think that 3-5 days after an update that addresses the issues
 identified in the original -2, we should automatically remove that -2? If
 this does not happen often, this process does not have to be automated, just
 an exception that the PTL can exercise to address issues where the
 original reason for -2 has been addressed and nothing new has been
 identified?



 On Thu, Jul 31, 2014 at 11:25 AM, Kyle Mestery mest...@mestery.com
 wrote:

 On Thu, Jul 31, 2014 at 7:11 AM, Yuriy Taraday yorik@gmail.com
 wrote:
  On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery mest...@mestery.com
  wrote:
  and even less
  possibly rootwrap [3] if the security implications can be worked out.
 
  Can you please provide some input on those security implications that
  are
  not worked out yet?
  I'm really surprised to see such comments in some ML thread not
  directly
  related to the BP. Why is my spec blocked? Neither spec [1] nor code
  (which
  is available for a really long time now [2] [3]) can get enough
  reviewers'
  attention because of those groundless -2's. Should I abandon these
  change
  requests and file new ones to get some eyes on my code and proposals?
  It's
  just getting ridiculous. Let's take a look at timeline, shall we?
 
 I share your concerns here as well, and I'm sorry you've had a bad
 experience working with the community here.

  Mar, 25 - first version of the first part of Neutron code is published
  at
  [2]
  Mar, 28 - first reviewers come and it gets -1'd by 

[openstack-dev] [Neutron] Crash Issue: OVS-Agent status needs to be fully represented/processed

2014-08-01 Thread Robin Wang
Recently we encountered some ovs-agent crash issues.  [1][2][3]

[Root cause]
1. Currently only a 'restarted' flag is used in rpc_loop() to identify OVS
status:
    ovs_restarted = self.check_ovs_restart()

True: OVS is running, but a restart happened before this loop; rpc_loop()
resets bridges and re-processes ports.
False: OVS has been running since the last loop; rpc_loop() continues to
process in the normal way.

But if OVS is dead, or is not up yet during a restart, check_ovs_restart()
will incorrectly return True. rpc_loop() then continues to reset bridges
and apply other OVS operations, eventually causing exceptions or a crash.
Related bugs: [1] [2]

2. Also, during agent boot-up, the OVS status is not checked at all. The
agent crashes with no useful log info when OVS is dead. Related bug: [3]

[Proposal]
1. Add const {NORMAL, DEAD, RESTARTED} to represent OVS status.
NORMAL - OVS has been running since the last loop; rpc_loop() continues to
process in the normal way.
RESTARTED - OVS is running, but a restart happened before this loop;
rpc_loop() resets bridges and re-processes ports.
DEAD - keep the agent running, but rpc_loop() applies no OVS operations, to
prevent unnecessary exceptions/crashes. When OVS comes back up, it enters
the RESTARTED state.

2. Check ovs status during agent boot-up; if it's DEAD, exit gracefully
(since subsequent operations would cause a crash) and log that a dead ovs
caused the agent termination.
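For illustration, a minimal sketch of how the proposed three-state check could behave (constant, class and method names here are hypothetical, not the actual patch; the real agent would probe ovsdb, while this sketch takes liveness as an argument):

```python
# Hypothetical sketch of the proposed three-state OVS status check.
OVS_NORMAL, OVS_DEAD, OVS_RESTARTED = range(3)


class OvsStatusTracker(object):
    def __init__(self):
        self.ovs_was_alive = True  # assume ovs is up when the agent starts

    def check_ovs_status(self, ovs_alive):
        """Map (previous, current) liveness into one of the three states."""
        if not ovs_alive:
            status = OVS_DEAD          # skip ovs operations this loop
        elif not self.ovs_was_alive:
            status = OVS_RESTARTED     # ovs came back: reset bridges
        else:
            status = OVS_NORMAL        # business as usual
        self.ovs_was_alive = ovs_alive
        return status


tracker = OvsStatusTracker()
print(tracker.check_ovs_status(False) == OVS_DEAD)       # True
print(tracker.check_ovs_status(True) == OVS_RESTARTED)   # True
print(tracker.check_ovs_status(True) == OVS_NORMAL)      # True
```

The key point is that DEAD is distinguished from RESTARTED by remembering the previous loop's liveness, so a dead ovs never triggers bridge resets.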

*[Code Review]* https://review.openstack.org/#/c/110538/   It would be
appreciated if you could share some thoughts or do a quick code review.
Thanks.

Best,
Robin

[1] https://bugs.launchpad.net/neutron/+bug/1296202
[2] https://bugs.launchpad.net/neutron/+bug/1350179
[3] https://bugs.launchpad.net/neutron/+bug/1351135
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

2014-08-01 Thread Kurt Griffiths
Hi crew, I’d like to propose Vicky (vkmc) be added to Marconi’s core reviewer 
team. She is a regular contributor in terms of both code and reviews, is an 
insightful and regular participant in team discussions, and leads by example in 
terms of quality of work, treating others as friends and family, and being 
honest and constructive in her community participation.

Marconi core reviewers, please respond with +1 or -1 per your vote on adding 
Vicky.

--
Kurt G. (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Creating database model and agent

2014-08-01 Thread Doug Wiegley
If it helps as a reference, these three reviews by Brandon Logan are
adding exactly what you're talking about for the new LBaaS api.  Models,
migration, extension, plugin, unit tests:

* New extension for version 2 of LBaaS API -
https://review.openstack.org/#/c/105331
* Plugin/DB additions for version 2 of LBaaS API -
https://review.openstack.org/#/c/105609
* Tests for extension, db and plugin for LBaaS V2 -
https://review.openstack.org/#/c/105610


And I don't believe there's anything extra in neutron for a new agent,
though I'm not 100% certain about that.

Thanks,

Doug


On 8/1/14, 3:14 AM, Maciej Nabożny m...@mnabozny.pl wrote:

Hello,
a few days ago I asked you about creating an extension and service plugin
for Neutron. Now I am trying to figure out how to create the database
model for this plugin, and an agent :)

Could you check whether I am understanding these issues properly?

The database model for a new plugin should be created in the neutron/db
directory. The model classes should inherit:
1. neutron.db.model_base.BASEV2, which is related to the NeutronBaseV2 class
2. if the model should contain an id or a relation to a tenant, it should
also inherit HasTenant and HasId from the module neutron.db.models_v2
3. All other fields should be defined using the sqlalchemy ORM.
I also have a question - how does Neutron (or sqlalchemy) know which
models/tables should be created in the database? At the moment I cannot
find any code that initializes the database. The only thing I found is
declarative.declarative_base in db.model_base
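A self-contained sketch of the pattern being described (with stand-in mixins mirroring neutron.db.model_base.BASEV2 and models_v2.HasId/HasTenant; the class, table and column names are made up). It also illustrates the table-creation question: declarative_base() registers each subclass's table in the base's metadata, which is what sqlalchemy uses to create the schema — Neutron drives the actual creation/migration from that same metadata through its own db management code:

```python
# Stand-in sketch of the neutron.db pattern (not the real neutron classes).
import uuid

import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext import declarative

# neutron keeps its declarative base in neutron.db.model_base.BASEV2
BASEV2 = declarative.declarative_base()


class HasId(object):
    """Mixin adding a UUID primary key (cf. neutron.db.models_v2.HasId)."""
    id = sa.Column(sa.String(36), primary_key=True,
                   default=lambda: str(uuid.uuid4()))


class HasTenant(object):
    """Mixin adding a tenant_id column (cf. models_v2.HasTenant)."""
    tenant_id = sa.Column(sa.String(255))


class MyServiceEntry(BASEV2, HasId, HasTenant):
    """Example plugin model; other fields are plain sqlalchemy Columns."""
    __tablename__ = 'myserviceentries'
    name = sa.Column(sa.String(255), nullable=False)


# declarative_base() collected the table above into BASEV2.metadata;
# create_all() materialises every registered table.
engine = sa.create_engine('sqlite://')
BASEV2.metadata.create_all(engine)

session = orm.sessionmaker(bind=engine)()
session.add(MyServiceEntry(name='example', tenant_id='tenant-1'))
session.commit()
entry = session.query(MyServiceEntry).one()
print(entry.name, len(entry.id))  # example 36
```

So there is no per-plugin "create my tables" code to find: inheriting from the shared declarative base is what registers the model.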


And one question about creating new agents - is an agent just a
thread/process managed by Neutron? I was analysing the lbaas code and it is
just a python script in /usr/bin/ which executes a class. Do I have to
create anything additional, as with the database or service plugin, to
create a new agent? Or is a script like the LBaaS one, plus an init script,
enough?


I also found a great diagram which describes in a nutshell how Neutron
service plugins are organised internally:
https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture
Maybe it would be a good idea to put a diagram like this on the official dev
wiki? Of course after some modifications. Right now the wiki points to the
SecurityGroups code, which is not very helpful for beginners like me :)

regards!
Maciej



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] API confusion

2014-08-01 Thread Mike Spreitzer
http://developer.openstack.org/api-ref-networking-v2.html and 
http://docs.openstack.org/api/openstack-network/2.0/content/GET_listMembers__v2.0_pools__pool_id__members_lbaas_ext_ops_member.html
 
say that to list LB pool members, the URL to GET is 
/v2.0/pools/{pool_id}/members

When I use the CLI (`neutron -v lb-member-list`) I see a GET on 
/v2.0/lb/members.json

What's going on here?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DevStack program change

2014-08-01 Thread Jay Pipes

On 08/01/2014 11:48 AM, Dean Troyer wrote:

I propose we de-program DevStack and consolidate it into the QA
program. Some of my concerns about doing this in the beginning have
proven to be a non-issue in practice.  Also, I believe a program's
focus can and should be wider than we have implemented up to now and
this is a step toward consolidating narrowly defined programs.

I read the QA mission statement to already include DevStack's purpose
so no change should be required there.  I'll propose the governance
changes following a few days of discussion.

This is purely a program-level change, I do not anticipate changes to
 the DevStack project itself.

dt (soon-to-be-former?) DevStack PTL


+1

Thanks for bringing up this topic and being mature enough to recognize 
that there may not be good reasons any more for having a separate program 
for DevStack.


It's important, as a mature community, that we have the ability to 
change course and revisit prior decisions based on real-world experience.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz as a core reviewer

2014-08-01 Thread Malini Kamalambal
There is another thread going on for the same proposal:
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg30897.html
We sure need vkmc in the core :)

From: Kurt Griffiths kurt.griffi...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, August 1, 2014 1:17 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [marconi] Proposal to add Victoria Martínez de la Cruz 
as a core reviewer

Hi crew, I’d like to propose Vicky (vkmc) be added to Marconi’s core reviewer 
team. She is a regular contributor in terms of both code and reviews, is an 
insightful and regular participant in team discussions, and leads by example in 
terms of quality of work, treating others as friends and family, and being 
honest and constructive in her community participation.

Marconi core reviewers, please respond with +1 or -1 per your vote on adding 
Vicky.

--
Kurt G. (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-08-01 Thread Jay Pipes

On 07/31/2014 10:49 PM, Sean Dague wrote:

On 07/31/2014 06:26 PM, Michael Still wrote:

On Thu, Jul 31, 2014 at 9:57 PM, Russell Bryant rbry...@redhat.com wrote:


Further, I'd like to propose that we treat all of existing +1 reviews as
+2 (once he's officially added to the team).  Does anyone have a problem
with doing that?  I think some folks would have done that anyway, but I
wanted to clarify that it's OK.


As a core I sometimes +1 something to indicate a weak acceptance of
the code instead of a strong acceptance (perhaps it's not my area of
expertise). Do we think it would be better to ask Jay to scan through
his recent +1s and promote those he is comfortable with to +2s? I
don't think that would take very long, and would keep the intent of
the reviews clear.


Agreed. That's more typical and means that you don't need to parse
intent on +1s. Let jay upgrade the votes he feels comfortable with
holding a full +2 on.


Of course. I have no problem doing that. Frankly, I kind of revisit 
reviews often just in the course of my review work each day, so it's not 
a big deal at all.


Oh, and thanks very much for the nomination.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] All-hands documentation day

2014-08-01 Thread Kurt Griffiths
I’m game for thursday. Love to help out.

On 8/1/14, 2:26 AM, Flavio Percoco fla...@redhat.com wrote:

On 07/31/2014 09:57 PM, Victoria Martínez de la Cruz wrote:
 Hi everyone,
 
 Earlier today I went through the documentation requirements for
 graduation [0] and it looks like there is some work to do.
 
 The structure we should follow is detailed
 in https://etherpad.openstack.org/p/marconi-graduation.
 
 It would be nice to do an all-hands documentation day next week to make
 this happen.
 
 Can you join us? When is it better for you?

Hey Vicky,

Awesome work, thanks for putting this together.

I'd propose doing it on Thursday since, hopefully, some other patches
will land during that week that will require documentation too.

Flavio,

 
 My best,
 
 Victoria
 
 [0] 
https://github.com/openstack/governance/blob/master/reference/incubation-
integration-requirements.rst#documentation--user-support-1


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a small experiment with Ansible in TripleO

2014-08-01 Thread Galbraith, Patrick
Hi all,

I have been working on Ansible nova_compute, a new “nova_facts” module as well 
as the nova dynamic inventory plugin, so please do feel free to collaborate 
with me on this.

Regards,

Patrick

On Aug 1, 2014, at 12:07 PM, Allison Randal alli...@lohutok.net wrote:

 A few of us have been independently experimenting with Ansible as a
 backend for TripleO, and have just decided to try experimenting
 together. I've chatted with Robert, and he says that TripleO was always
 intended to have pluggable backends (CM layer), and just never had
 anyone interested in working on them. (I see it now, even in the early
 docs and talks, I guess I just couldn't see the forest for the trees.)
 So, the work is in line with the overall goals of the TripleO project.
 
 We're starting with a tiny scope, focused only on updating a running
 TripleO deployment, so our first work is in:
 
 - Create an Ansible Dynamic Inventory plugin to extract metadata from Heat
 - Improve/extend the Ansible nova_compute Cloud Module (or create a new
 one), for Nova rebuild
 - Develop a minimal handoff from Heat to Ansible, particularly focused
 on the interactions between os-collect-config and Ansible
 
 We're merging our work in this repo, until we figure out where it should
 live:
 
 https://github.com/allisonrandal/tripleo-ansible
 
 We've set ourselves one week as the first sanity-check to see whether
 this idea is going anywhere, and we may scrap it all at that point. But,
 it seems best to be totally transparent about the idea from the start,
 so no-one is surprised later.
 
 Cheers,
 Allison
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a small experiment with Ansible in TripleO

2014-08-01 Thread Allison Randal
On 08/01/2014 12:06 PM, Galbraith, Patrick wrote:
 I have been working on Ansible nova_compute, a new “nova_facts”
 module as well as the nova dynamic inventory plugin, so please do
 feel free to collaborate with me on this.

Great, are you on #tripleo? I'm 'wendar', and I'm working on this bit.

Allison

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] [Cinder] running tempest tests against your custom backend

2014-08-01 Thread John Griffith
Seems there's a number of new folks trying to run devstack/tempest tests
against third party backends.  Please note the Cinder Wiki page [1]
includes documentation to help you get this working and point out what
needs modified in the Tempest settings.

Thanks,
John

[1]: https://wiki.openstack.org/wiki/Cinder  (See the section: Configuring
devstack to use your driver and backend.)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Cinder tempest api volume tests failed

2014-08-01 Thread Mike Perez
On 14:46 Fri 01 Aug , Nikesh Kumar Mahalka wrote:
 Hi Mike, the test which failed for me is:
 
     tempest.api.volume.admin.test_volume_types.VolumeTypesTest
 
 I am getting an error in the following call in that test:
 
     self.volumes_client.wait_for_volume_status(volume['id'], 'available')
 
 This call is in the following function:
 
     @test.attr(type='smoke')
     def test_create_get_delete_volume_with_volume_type_and_extra_specs(self)

This is due to the extra spec test by default setting the vendor name
capability to 'Open Source'. Since your driver probably has a different vendor
name, the scheduler is not able to find a suitable host to fulfill the volume
create request with that volume type. There is a wiki page [1] that covers how
to test your driver in devstack with Tempest, which will avoid this problem.

[1] - 
https://wiki.openstack.org/wiki/Cinder#Configuring_devstack_to_use_your_driver_and_backend
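For reference, the override the wiki describes amounts to pointing Tempest's volume options at your own backend instead of the defaults. A hedged illustration of the tempest.conf change (exact option names may vary by Tempest version, and the values must match what your Cinder driver actually reports in its capabilities):

```ini
[volume]
# Must match the vendor_name and storage_protocol your driver reports,
# otherwise the scheduler cannot place volumes created with the
# extra-spec test's volume type.
vendor_name = MyVendor
storage_protocol = iSCSI
```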

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HEAT] Qestions on adding a new Software Config element for Opscode Chef

2014-08-01 Thread Tao Tao
Hi, All:

We are trying to leverage the Heat software config model to support Chef-based
software installation. Currently a chef-based software config is not in place
with Heat version 0.2.9. Therefore, we have a number of questions about
implementing it ourselves:

1. Should we create new software config child resource types (e.g.
OS::Heat::SoftwareConfig::Chef and OS::Heat::SoftwareDeployment::Chef, as
proposed in https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec),
or should we reuse the existing software config resource type (e.g.
OS::Heat::SoftwareConfig, leveraging the group attribute) like the following
example with Puppet? What are the pros and cons of either approach?

  config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      inputs:
      - name: foo
      - name: bar
      outputs:
      - name: result
      config:
        get_file: config-scripts/example-puppet-manifest.pp
  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config:
        get_resource: config
      server:
        get_resource: server
      input_values:
        foo: fo
        bar: ba

2. Regarding Opscode Chef and Heat integration, should our software config
support chef-solo only, or should it also support Chef server? In other words,
should we let Heat do the orchestration for the chef-based software install,
or should we continue to use chef-server for it?

3. In the current implementation of the software config hook for puppet:

  heat-templates/hot/software-config/elements/heat-config-puppet/install.d/50-heat-config-hook-puppet

3.1 Why do we need a 50-* prefix for the heat-config hook name?

3.2 In the script below, what is the "install-packages" script? Where does it
load the puppet package from? How would we change the script to install the
chef package?

  #!/bin/bash
  set -x
  SCRIPTDIR=$(dirname $0)
  install-packages puppet
  install -D -g root -o root -m 0755 ${SCRIPTDIR}/hook-puppet.py /var/lib/heat-config/hooks/puppet

4. With diskimage-builder, we can build images with many software config
elements (chef, puppet, script, salt), which means there will be many hooks in
the image. However, by reading the source code of os-refresh-config, it seems
it will execute only the hooks which have a corresponding "group" defined in
the software config, is that right?

  def invoke_hook(c, log):
      # sanitise the group to get an alphanumeric hook file name
      hook = "".join(x for x in c['group'] if x == '-' or x == '_' or x.isalnum())
      hook_path = os.path.join(HOOKS_DIR, hook)
      signal_data = None
      if not os.path.exists(hook_path):
          log.warn('Skipping group %s with no hook script %s'
                   % (c['group'], hook_path))
      else:
          ...

Thanks a lot for your kind assistance!

Thanks,
Tao Tao, Ph.D.
IBM T. J. Watson Research Center
1101 Kitchawan Road
Yorktown Heights, NY 10598
Phone: (914) 945-4541
Email: t...@us.ibm.com
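On the last question above: that reading matches the quoted code — the hook file name is derived from each config's group, and the hook is run only if a matching script exists under the hooks directory. A self-contained sketch of that dispatch logic (directory path and sample groups are illustrative):

```python
import os

HOOKS_DIR = '/var/lib/heat-config/hooks'  # where the elements install hooks


def hook_name_for_group(group):
    # Same sanitisation as invoke_hook(): keep only '-', '_' and
    # alphanumerics so the group maps to a safe file name.
    return "".join(x for x in group if x == '-' or x == '_' or x.isalnum())


def should_invoke(group, hooks_dir=HOOKS_DIR):
    """True only when a hook script matching the group is installed."""
    return os.path.exists(os.path.join(hooks_dir, hook_name_for_group(group)))


print(hook_name_for_group('puppet'))             # puppet
print(hook_name_for_group('heat config/chef!'))  # heatconfigchef
```

So a config whose group has no installed hook script is simply skipped with a warning, which is why building an image with extra hook elements is harmless.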


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Store Future

2014-08-01 Thread Jay Pipes

cc'ing ML since it's an important discussion, IMO...

On 07/31/2014 11:54 AM, Arnaud Legendre wrote:

Hi Jay,

I would be interested if you could share your point of view on this
item: we want to make the glance stores a standalone library
(glance.stores) which would be consumed directly by Nova and Cinder.


Yes, I have been enthusiastic about this effort for a long time now :) 
In fact, I have been pushing a series of patches (most merged at this 
point) in Nova to clean up the (very) messy nova.image.glance module and 
standardize the image API in Nova.


The messiest part of the current image API in Nova, by far, is the 
nova.image.glance.GlanceImageService.download() method, which you 
highlight below. The reason it is so messy is that the method does 
different things (and returns different things!) depending on how you 
call it and what arguments you provide. :(



I think it would be nice to get your pov since you worked a lot on
the Nova image interface recently. To give you an example:

Here
https://github.com/openstack/nova/blob/master/nova/image/glance.py#L333,
 we would do:

1. location = get_image_location(image_id),
2. get(location) from the
glance.stores library like for example rbd
(https://github.com/openstack/glance/blob/master/glance/store/rbd.py#L206)


Yup. Though I'd love for this code to live in oslo, not glance...

Plus, I'd almost prefer to see an interface that hides the location URIs 
entirely and makes the discovery of those location URIs entirely 
encapsulated within glance.store. So, for instance, instead of getting 
the image location using a call to glanceclient.show(), parsing the 
locations collection from the v2 API response, and passing that URI to 
the glance.store.get() function, I'd prefer to see an interface more 
like this:


```python
# This code would go in a new nova.image.API.copy() method:
import io

from oslo.image import move
from oslo.image.move import exception as mexc

from nova import exception as exc

...
def copy(image_id_or_uri, stream_writer):
    try:
        config = {
            # Some Nova CONF options...
        }
        mover = move.Mover(image_id_or_uri, config)
        success, bytes_written = mover.copy(stream_writer)
        if success:
            if bytes_written == 0:
                LOG.info("Copied image %s using zero-copy "
                         "transfer.", image_id_or_uri)
            else:
                LOG.info("Copied image %s using standard "
                         "filesystem copy. Copied %d bytes.",
                         image_id_or_uri, bytes_written)
        return success
    except mexc.ImageNotFound:
        raise exc.NotFound(...)
    except mexc.ImageInvalidApi:
        # Fall back to pull image from Glance
        # API server via HTTP and write to disk
        # via the stream_writer argument's write()
        # interface... and return True or False
        # depending on whether write()s succeeded
        pass
```

And then, the caller of such an nova.image.API.copy() function would be 
in the existing various virt utils and imagebackends, and would call the 
API function like so:


```python
# This code would go in something like nova.virt.libvirt.utils:

import io

from nova import image

IMAGE_API = image.API()

write_file = io.FileIO(dst_path, mode='wb')
writer = io.BufferedWriter(write_file)

image_id_or_uri = "https://images.example.com/images/123"

result = IMAGE_API.copy(image_id_or_uri, writer)
# Test result if needed...
```

Notice that the caller never needs to know about the locations 
collection of the image -- and thus we correct the leaked implementation 
details that currently ooze out of the download() method in 
nova.image.glance.GlanceImageService.download.


Also note that we no longer pass a variety of file descriptors, file 
writers, file destination paths to the download method. Instead, we 
always just pass the image ID or URI and a writeable bytestream 
iterator. And we always return either True or False, instead of None or 
a file iterator depending on the supplied arguments to download().



 The same kind of logic could be added in Cinder.


Sure.


We see that as a benefit for Nova, which would be able to directly
consume the stores instead of going through the glance api.


Exactly.


We had a vote today to figure out if we continue the effort on the
glance.stores library. We had a majority of +1 but there were a couple
of -1s due to the fact that we don’t have enough concrete examples of
whether this will be useful or not.


It will definitely be useful in the following:

1) Making the copy/zero-copy/transfer/download methods consistent 
between all the various places in the Nova virt drivers that do similar 
things.


2) Allowing a single place to innovate for the transfer of image bits 
between sources and destinations


Hopefully, the above sample code and interfaces will spark some renewed 
interest in this. I'd love 

Re: [openstack-dev] DevStack program change

2014-08-01 Thread Anne Gentle
On Fri, Aug 1, 2014 at 10:48 AM, Dean Troyer dtro...@gmail.com wrote:

 I propose we de-program DevStack and consolidate it into the QA program.
  Some of my concerns about doing this in the beginning have proven to be a
 non-issue in practice.  Also, I believe a program's focus can and should be
 wider than we have implemented up to now and this is a step toward
 consolidating narrowly defined programs.


Sounds like a good idea to me, as long as QA PTL Matt is good with it.
Thanks Dean for your service!

Anne


 I read the QA mission statement to already include DevStack's purpose so
 no change should be required there.  I'll propose the governance changes
 following a few days of discussion.

 This is purely a program-level change, I do not anticipate changes to the
 DevStack project itself.

 dt
 (soon-to-be-former?) DevStack PTL

 --

 Dean Troyer
 dtro...@gmail.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-08-01 Thread Sridar Kandaswamy (skandasw)
Hi All:


There is no doubt the cores are quite stretched, and it does take a finite 
amount of time to absorb the context and the content of the multitude of 
patches in any given core reviewer's queue. Life happens for everyone and 
things slip through the cracks, but this suggestion of a timeline for 
reassessing a sticky -2 after a response from the patch owner seems very 
reasonable to adopt.


It certainly helps the submitter to make forward progress rather than exit the 
project in frustration (I know of at least one instance with a contributor 
expressing this as a reason to move on) and establishes a process so that cores 
can rely on an automatic throttle mechanism if they suddenly find themselves 
dealing with other things that are a higher priority for them.


Thanks


Sridar

From: Mandeep Dhami dh...@noironetworks.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Friday, August 1, 2014 at 4:53 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is 
August 21

Hi Armando:

  If a core-reviewer puts a -2, there must be a good reason for it

I agree. The problem is that after the initial issue identified in the initial 
-2 review has been fixed, and the patch updated, it (sometimes) happens that we 
can not get the original reviewer to re-review that update for weeks - creating 
the type of issues identified in this thread.

I would agree that if this were a one-off scenario, we should handle this as 
a specific case as you suggest. Unfortunately, this is not a one-off instance, 
and hence my request for clearer guidelines from the PTL for such cases.

Regards,
Mandeep



On Thu, Jul 31, 2014 at 3:54 PM, Armando M. arma...@gmail.com wrote:
It is not my intention debating, pointing fingers and finding culprits, these 
issues can be addressed in some other context.

I am gonna say three things:

1) If a core-reviewer puts a -2, there must be a good reason for it. If other 
reviewers blindly move on as some people seem to imply here, then those 
reviewers should probably not review the code at all! My policy is to review 
all the code I am interested in/I can, regardless of the score. My -1 may be 
someone's +1 (or vice versa), so 'trusting' someone else's vote is the wrong 
way to go about this.

2) If we all feel that this feature is important (which I am not sure it was 
being marked as 'low' in oslo, not sure how it was tracked in Neutron), there 
is the weekly IRC Neutron meeting to raise awareness, since all cores 
participate; to the best of my knowledge we never spoke (or barely) of the 
rootwrap work.

3) If people do want this work in Juno (Carl being one of them), we can figure 
out how to make one final push, and assess potential regression. We 'rushed' 
other features late in cycle in the past (like nova/neutron event 
notifications) and if we keep this disabled by default in Juno, I don't think 
it's really that risky. I can work with Carl to give the patches some more love.

Armando



On 31 July 2014 15:40, Rudra Rugge ru...@contrailsystems.com wrote:
Hi Kyle,

I also agree with Mandeep's suggestion of putting a time frame on the lingering 
-2 if the addressed concerns have been taken care of. In my experience also a 
sticky -2 detracts other reviewers from reviewing an updated patch.

Either a time-frame or a possible override by PTL (move to -1) would help make 
progress on the review.

Regards,
Rudra


On Thu, Jul 31, 2014 at 2:29 PM, Mandeep Dhami dh...@noironetworks.com wrote:
Hi Kyle:

As -2 is sticky, and as there is a possibility that the original core might 
not get time to get back to re-reviewing it, do you think that there should be 
clearer guidelines on its usage (to avoid what you identified as dropping of 
the balls)?

Salvatore had a good guidance in a related thread [0], do you agree with 
something like that?

I try to avoid -2s as much as possible. I put a -2 only when I reckon your
patch should never be merged because it'll make the software unstable or
tries to solve a problem that does not exist. -2s stick across patches and
tend to put off other reviewers.

[0] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041339.html


Or do you think that 3-5 days after an update that addresses the issues 
identified in the original -2, we should automatically remove that -2? If this 
does not happen often, this process does not have to be automated, just an 
exception that the PTL can exercise to address issues where the original 
reason for -2 has been addressed and nothing new has been identified?



On Thu, Jul 31, 2014 at 11:25 AM, Kyle Mestery mest...@mestery.com wrote:
On Thu, Jul 31, 2014 at 7:11 AM, Yuriy Taraday 

[openstack-dev] Preparing for 2014.1.2 -- branches freeze Aug 7

2014-08-01 Thread Chuck Short
Hi All-

We have frozen the stable/icehouse branches of the integrated projects for
release on Thursday, August 7th, in preparation for the 2014.1.2 stable
release. You can view the current queue of proposed patches on gerrit [1].
I'd like to request all interested parties review current bugs affecting
Icehouse and help ensure any relevant fixes are proposed soon and merged by
Thursday, or notify the stable-maint team of anything critical that may land
late.

Thanks
chuck

[1] https://review.openstack.org/#/q/status:open+branch:stable/havana,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Status of A/A HA for neutron-metadata-agent?

2014-08-01 Thread Assaf Muller
Hey Marios, comments inline.

- Original Message -
 Hi all,
 
 I have been asked by a colleague about the status of A/A HA for
 neutron-* processes. From the 'HA guide' [1], l3-agent and
 metadata-agent are the only neutron components that can't be deployed in
 A/A HA (corosync/pacemaker for a/p is documented as available 'out of
 the box' for both).
 
 The l3-agent work is approved for J3 [4] but I am unaware of any work on
 the metadata-agent and can't see any mention in [2][3]. Is this something
 someone has looked at, or is planning to (though ultimately K would be the
 earliest, right)?
 

With L3 HA turned on you can run the metadata agent on all network nodes.
The active instance of each router will have the metadata proxy up in its
namespace, and the proxy will forward requests to the agent as expected.

 thanks! marios
 
 [1] http://docs.openstack.org/high-availability-guide/content/index.html
 [2] https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
 [3] https://launchpad.net/neutron/+milestone/juno-3
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/l3-high-availability.rst
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][rally][qa] Application for a new OpenStack Program: Performance and Scalability

2014-08-01 Thread Alex Freedland
Angus,

Rally is designed as an operations tool. Its purpose is to give an operator
running a production cloud the tools and data to profile that cloud. It is
intended as the first of many such tools.

There is a strong support in the community that operations tools should be
developed as part of OpenStack and Rally is the first such successful
community effort.

I can envision other tools building a community around them and they too
should become part of OpenStack operations tooling.  Maybe Operator Tools
program would be a better name?



Alex Freedland
Co-Founder
Mirantis, Inc.




On Thu, Jul 31, 2014 at 3:55 AM, Angus Salkeld angus.salk...@rackspace.com
wrote:

 On Sun, 2014-07-27 at 07:57 -0700, Sean Dague wrote:
  On 07/26/2014 05:51 PM, Hayes, Graham wrote:
   On Tue, 2014-07-22 at 12:18 -0400, Sean Dague wrote:
   On 07/22/2014 11:58 AM, David Kranz wrote:
   On 07/22/2014 10:44 AM, Sean Dague wrote:
   Honestly, I'm really not sure I see this as a different program,
 but is
   really something that should be folded into the QA program. I feel
 like
   a top level effort like this is going to lead to a lot of
 duplication in
   the data analysis that's currently going on, as well as
 functionality
   for better load driver UX.
  
-Sean
   +1
   It will also lead to pointless discussions/arguments about which
   activities are part of QA and which are part of
   Performance and Scalability Testing.
  
   I think that those discussions will still take place, it will just be
 on
   a per repository basis, instead of a per program one.
  
   [snip]
  
  
   Right, 100% agreed. Rally would remain with its own repo + review
 team,
   just like grenade.
  
  -Sean
  
  
   Is the concept of a separate review team not the point of a program?
  
   In the the thread from Designate's Incubation request Thierry said [1]:
  
   Programs just let us bless goals and teams and let them organize
   code however they want, with contribution to any code repo under that
   umbrella being considered official and ATC-status-granting.
  
   I do think that this is something that needs to be clarified by the TC
 -
   Rally could not get a PTL if they were part of the QA project, but
 every
   time we get a program request, the same discussion happens.
  
   I think that mission statements can be edited to fit new programs as
   they occur, and that it is more important to let teams that have been
   working closely together stay as a distinct group.
 
  My big concern here is that many of the things that these efforts have
  been doing are things we actually want much closer to the base. For
  instance, metrics on Tempest runs.
 
  When Rally was first created it had its own load generator. It took a
  ton of effort to keep the team from duplicating that and instead just
  use some subset of Tempest. Then when measuring showed up, we actually
  said that is something that would be great in Tempest, so whoever ran
  it, be it for Testing, Monitoring, or Performance gathering, would have
  access to that data. But the Rally team went off in a corner and did it
  otherwise. That's caused the QA team to have to go and redo this work
  from scratch with subunit2sql, in a way that can be consumed by multiple
  efforts.
 
  So I'm generally -1 to this being a separate effort on the basis that so
  far the team has decided to stay in their own sandbox instead of
  participating actively where many of us think the functions should be
  added. I also think this isn't like Designate, because this isn't
  intended to be part of the integrated release.

 From reading Boris's email it seems like rally will provide a horizon
 panel and API to back it (for the operator to kick off performance runs
 and view stats). So this does seem like something that would be a
 part of the integrated release (if I am reading things correctly).

 Is the QA program happy to extend their scope to include that?
 QA could become Quality Assurance of upstream code and running
 OpenStack installations. If not we need to find some other program
 for rally.

 -Angus

 
  Of course you could decide to slice up the universe in a completely
  different way, but we have toolchains today, which I think the focus
  should be on participating there.
 
-Sean
 






Re: [openstack-dev] [Trove] Backup/restore namespace config move has leftovers

2014-08-01 Thread Mark Kirkwood

On 01/08/14 21:35, Denis Makogon wrote:


I'd suggest filing a bug
report and fixing the given issue.




Done.

https://bugs.launchpad.net/trove/+bug/1351545


I also took the opportunity to check whether all the currently defined 
datastores had backup/restore_namespace set - they didn't, so I noted 
that too (I'm guessing they now actually *need* to have something set to 
avoid breakage)...
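For context, the options in question are set per datastore in the guest agent configuration, roughly like this (a sketch; the mysql group and class paths below are illustrative, check the actual tree):

```ini
# trove guestagent config (illustrative values)
[mysql]
backup_namespace = trove.guestagent.strategies.backup.mysql_impl
restore_namespace = trove.guestagent.strategies.restore.mysql_impl
```

Datastores missing these entries would presumably fail on backup/restore, which matches the breakage noted above.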


regards

Mark



Re: [openstack-dev] [neutron] API confusion

2014-08-01 Thread Brandon Logan
Hi Mike,

So that looks like those docs are for the v2 LBaaS API.  The CLI changes
for v2 are not in yet, and the v2 API implementation code is in review
right now.

I am a bit worried that I no longer see the v1 docs, because v1 will
still remain until it is deprecated. In fact, I'm pretty sure v2 will not
be used very much in Juno because it will not have much driver
support.  So this is something we will have to fix.
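To make the mismatch concrete, the two API generations expose member listing at different paths. A minimal sketch (the v1 `lb` prefix is what the CLI hits today; the v2 path is taken from the draft docs and neither URL here is authoritative):

```python
# Sketch of the two endpoint shapes for listing LB pool members.

def v1_member_list_path():
    # LBaaS v1 extension path -- what `neutron lb-member-list` requests today
    return "/v2.0/lb/members.json"

def v2_member_list_path(pool_id):
    # Path from the v2 draft docs; the implementation is still in review
    return "/v2.0/pools/%s/members" % pool_id

print(v1_member_list_path())            # /v2.0/lb/members.json
print(v2_member_list_path("abc123"))    # /v2.0/pools/abc123/members
```

So the CLI and the published docs are simply describing two different APIs, not a bug in either.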

Thanks for bringing this up!

-Brandon 

On Fri, 2014-08-01 at 13:46 -0400, Mike Spreitzer wrote:
 http://developer.openstack.org/api-ref-networking-v2.html and
 http://docs.openstack.org/api/openstack-network/2.0/content/GET_listMembers__v2.0_pools__pool_id__members_lbaas_ext_ops_member.html
 say that to list LB pool members, the URL to GET is 
 /v2.0/pools/{pool_id}/members 
 
 When I use the CLI (`neutron -v lb-member-list`) I see a GET
 on /v2.0/lb/members.json 
 
 What's going on here? 
 
 Thanks,
 Mike
