Re: [openstack-dev] [Horizon] Problem with compressing scss files

2014-09-29 Thread Radomir Dopieralski
On 09/28/2014 09:41 AM, Thomas Goirand wrote:
 On 09/28/2014 03:35 PM, Thomas Goirand wrote:
 After a long investigation, I have found out that, in python-pyscss,
 there's the following code in scss/expression.py:

 return String(
     six.u("%s(%s)" % (func_name, six.u(", ".join(rendered_args,
     quotes=None)

 If I remove the first six.u(), and the code becomes like this:

 return String(
     "%s(%s)" % (func_name, six.u(", ".join(rendered_args))),
     quotes=None)

 then everything works. Though this comes from a Debian specific patch
 for which I added the six.u() calls, to make it work in Python 3.2 in
 Wheezy. The original code is in fact:

 return String(
     u"%s(%s)" % (func_name, u", ".join(rendered_args)),
     quotes=None)

 So, could anyone help me fix this? What's the way to make it always
 work? I wouldn't like to just drop Python 3.x support because of this... :(
 
 Oops, silly me. The parentheses aren't correct. Fixing them made it all
 work. Sorry for the noise, issue closed!
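
For reference, a corrected, parenthesis-balanced form that should work on both
Python 2 and 3 presumably looks something like this (a sketch, not the exact
Debian patch):

    # six.u() only wraps the literal format strings; the joined result is
    # already unicode, so it must not be wrapped a second time.
    return String(
        six.u("%s(%s)") % (func_name, six.u(", ").join(rendered_args)),
        quotes=None)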

By the way, did you consider sending that python 3 patch upstream to the
python-pyscss guys, so that you don't have to apply it manually every
time? They are quite responsive.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Trove Blueprint Meeting on 29 September canceled

2014-09-29 Thread Nikhil Manchanda
Hey folks:

There's nothing to discuss on the BP Agenda for this week and most folks
are busy with fixing and reviewing RC1 blocking bugs, so I'd like to
cancel the Trove blueprint meeting for this week.

See you guys at the regular Trove meeting on Wednesday.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setuptools 6.0 ruins the world

2014-09-29 Thread Thierry Carrez
Sean Dague wrote:
 Setuptools 6.0 was released Friday night. (Side note: as a service to
 others, releasing major version bumps of critical Python software on a
 Friday night should be avoided.)

Since it's hard to prevent upstream from releasing over the weekend,
could we somehow freeze our PyPI mirrors from Friday noon to Monday noon,
infra-team time?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] - Customize image list in project dashboard

2014-09-29 Thread Marcos Fermin Lobo
Hi all,

I would like to know if it is possible to customize the image list in the project 
dashboard in Horizon Icehouse. I want to make some modifications to the HTML 
table, but ONLY for the image list in the project dashboard.


-  Is it possible?

-  How can I customize this view?

-  Is there a procedure to customize specific HTML's in Horizon?

The modification that I want to do is this: 
https://ask.openstack.org/en/question/48479/split-image-list-in-n-tables/

I've dug into the code and realized that I need to re-define these HTML templates:

-  /horizon/templates/horizon/common/_data_table.html

-  /horizon/templates/horizon/common/_data_table_table_actions.html

-  /horizon/templates/horizon/common/_data_table_cell.html

I've managed to re-define the _data_table.html file for the image list on the 
project dashboard: I just copied that file to the 
/openstack_dashboard/templates/images/ directory and changed the value of the 
template parameter to images/_data_table.html in the Meta class [1]. I tried to 
follow the same procedure to re-define the other files, but the Django and Horizon 
template loaders don't look in the /openstack_dashboard/templates/ directory for the 
table_actions and table_cell re-definitions (the Meta class doesn't have parameters 
for those templates).
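
For reference, the Meta override I used looks roughly like this (a sketch based
on the Icehouse DataTable Meta options; the existing column definitions are
omitted):

    from django.utils.translation import ugettext_lazy as _
    from horizon import tables

    class ImagesTable(tables.DataTable):
        # ... existing column definitions stay unchanged ...

        class Meta:
            name = "images"
            verbose_name = _("Images")
            # Points at the copied/modified template instead of the default
            # horizon/common/_data_table.html
            template = "images/_data_table.html"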

Thank you.

Best regards,
Marcos.

[1] - 
https://github.com/openstack/horizon/blob/stable/icehouse/openstack_dashboard/dashboards/project/images/images/tables.py#L229
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Stack Abandon/Adopt in Juno

2014-09-29 Thread Steven Hardy
On Fri, Sep 26, 2014 at 03:44:18PM -0400, Zane Bitter wrote:
 At the time of the Icehouse release, we realised that the just-merged stack
 abandon/adopt features were still in a very flaky state. A bunch of bugs
 were opened and in the release notes we called this out as a 'preview'
 feature, not fully supported.
 
 Fast-forward 6 months and we still have a bunch of open bugs (although
 others have been fixed, including at least one in Juno):
 
 https://bugs.launchpad.net/heat/+bug/1300336
 https://bugs.launchpad.net/heat/+bug/1301314
 https://bugs.launchpad.net/heat/+bug/1301323
 https://bugs.launchpad.net/heat/+bug/1350908
 https://bugs.launchpad.net/heat/+bug/1353670
 
 The first bug has patches with unacknowledged -1 comments. The others don't
 appear to have been started. Two of these are in the -rc1 bug list, and
 given that there appears to be a negligible chance of them actually being
 fixed we need to decide what to do about them.
 
 Particular areas of concern:
 
 * bug 1353670
 
 Basically we all agree that we need to change the semantics of the abandon
 call to prevent it deleting the critical data _before_ returning it to the
 user.
 
 * bug 1301323
 
 The writeup on this suggests that it will present a potential security hole
 once bug 1300734 is fixed - which it has been, but this one is still not.
 
 
 I don't know that there are any good solutions here. Abandon is a really
 handy thing to have around to get you out of trouble (although we hope that
 with Juno people won't get into trouble quite as often). But I'm not sure
 that we can go another release claiming this as a 'preview', especially with
 potential security issues involved. Given that approximately nobody reads
 the release notes it's a bit of a cop-out and in retrospect was probably a
 mistake last time.

I agree, I think it's really unfortunate the way this has played out.

If it weren't a feature which I think some folks are actually using, I'd be
tempted to say let's revert the whole thing, given the nature of the
long-standing issues, and that it's, umm, not exactly been actively
maintained by the original author... :(

 What do folks think about adding a config option for this feature (or even
 separate ones for abandon & adopt?) and disabling it by default?

Obviously the risk here is that the disabled bugginess then persists
forever, so if we go down this route, I'd like to get some assurances from
those involved with or interested in this feature that the problems will
get worked out during Kilo, with a view to re-enabling in future.

Also, this is a terrible experience for users from an API versioning
perspective - we probably need to move towards a discoverable optional
extension model, or micro-versioning, so folks have some chance of figuring
out what functionality exists in a given heat endpoint.

Until recently I was of the opinion that incrementally adding things to the
API was OK, but this is a prime example of why it's actually not, IMO.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cross-project-work] What about adding cross-project-spec repo?

2014-09-29 Thread Thierry Carrez
Boris Pavlovic wrote:
 it goes without saying that working on cross-project stuff in OpenStack
 is quite a hard task. 
 
 It's always hard to align something between a lot of people from
 different projects. And when a topic starts getting too hot, the discussion
 goes in the wrong direction, the attempt at a cross-project change fails, and as
 a result a maybe not *ideal* but *good enough* change in OpenStack will be
 abandoned. 
 
 Another issue that we have is specs. Projects ask you to write a
 spec for a change in their project, and in the case of cross-project stuff you
 need to make N similar specs (one for every project). That is really hard to
 manage, and as a result you have N different specs that are describing
 the same stuff. 
 
 To make this process more formal, clear and simple, let's reuse the spec
 process but do it in one repo: openstack/cross-project-specs.
 
 It means that every cross-project topic (unification of python clients,
 unification of logging, profiling, debugging API, and a bunch of others) will
 be discussed in one single place.

I think it's a good idea, as long as we truly limit it to cross-project
specs, that is, to concepts that may apply to every project. The
examples you mention are good ones. As a counterexample, if we have to
sketch a plan to solve communication between Nova and Neutron, I don't
think it would belong to that repository (it should live in whatever
project would have the most work to do).

 Process description of cross-project-specs:
 
   * PTL - the person that manages the core team members list and puts workflow +1
 on accepted specs
   * Every project has 1 core position (stackforge projects are included)
   * Cores are chosen by the project team; their task is to advocate the project
 team's opinion 
   * No more vetoes and -2 votes
   * If > 75% of cores +1 a spec, it's accepted. It means that all projects have
 to accept this change.
   * Accepted specs get high priority blueprints in all projects

So I'm not sure you can force all projects to accept the change.
Ideally, projects should see the benefits of alignment and adopt the
common spec. In our recent discussions we are also going towards more
freedom for projects, rather than less: imposing common specs on
stackforge projects sounds like a step backwards there.

Finally, I see some overlap with Oslo, which generally ends up
implementing most of the common policy into libraries it encourages
usage of. Therefore I'm not sure having a cross-project PTL makes
sense, as he would be stuck between the Oslo PTL and the Technical
Committee.

 With such simple rules we will simplify cross project work: 
 
 1) Fair rules for all projects, as every project has 1 core that has 1
 vote. 

A project is hardly a metric for fairness. Some projects are 50 times
bigger than others. What is a project in your mind? A code repository?
Or more like a program (a collection of code repositories being worked
on by the same team)?

So in summary, yes we need a place to discuss truly cross-project specs,
but I think it can't force decisions to all projects (especially
stackforge ones), and it can live within a larger-scope Oslo effort
and/or the Technical Committee.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread John Garbutt
On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com wrote:
 On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt j...@johngarbutt.com wrote:
 On 25 September 2014 14:10, Daniel P. Berrange berra...@redhat.com
 wrote:
  The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
  we work harder on getting people to buy into the priorities that are
  set, and actively provoke more debate on their correctness, and we
  reduce the bar for what needs a blueprint.
 
  We can't have 50 high priority blueprints, it doesn't mean anything,
  right? We need to trim the list down to a manageable number, based on
  the agreed project priorities. Thats all I mean by slots / runway at
  this point.
 
  I would suggest we don't try to rank high/medium/low as that is
  too coarse, but rather just an ordered priority list. Then you
  would not be in the situation of having 50 high blueprints. We
  would instead naturally just start at the highest priority and
  work downwards.

 OK. I guess I was fixating about fitting things into launchpad.

 I guess having both might be what happens.

   The runways
   idea is just going to make me less efficient at reviewing. So I'm
   very much against it as an idea.
 
  This proposal is different to the runways idea, although it certainly
  borrows aspects of it. I just don't understand how this proposal has
  all the same issues?
 
 
  The key to the kilo-3 proposal is about getting better at saying "no,
  this blueprint isn't very likely to make kilo".
 
  If we focus on a smaller number of blueprints to review, we should be
  able to get a greater percentage of those fully completed.
 
  I am just using slots/runway-like ideas to help pick the high priority
  blueprints we should concentrate on, during that final milestone.
  Rather than keeping the distraction of 15 or so low priority
  blueprints, with those poor submitters jamming up the check queue, and
  constantly rebasing, and having to deal with the odd stray review
  comment they might get lucky enough to get.
 
  Maybe you think this bit is overkill, and that's fine. But I still
  think we need a way to stop wasting so much of people's time on things
  that will not make it.
 
  The high priority blueprints are going to end up being mostly the big
  scope changes which take a lot of time to review & probably go through
  many iterations. The low priority blueprints are going to end up being
  the small things that don't consume significant resource to review and
  are easy to deal with in the time we're waiting for the big items to
  go through rebases or whatever. So what I don't like about the runways
  slots idea is that it removes the ability to be agile and take the
  initiative
  to review & approve the low priority stuff that would otherwise never
  make it through.

 The idea is more around concentrating on the *same* list of things.

 Certainly we need to avoid the priority inversion of concentrating
 only on the big things.

 It's also why I suggested that for kilo-1 and kilo-2, we allow any
 blueprint to merge, and only restrict it to a specific list in kilo-3,
 the idea being to maximise the number of things that get completed,
 rather than merging some half blueprints, but not getting to the good
 bits.


 Do we have to decide this now, or can we see how project priorities go and
 reevaluate half way through Kilo-2?

What we need to decide is not to use the runway idea for kilo-1 and
kilo-2. At this point, I guess we have (passively) decided that now.

I like the idea of waiting till mid kilo-2. That's around spec freeze,
which is handy.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] PKI tokens size performance impact

2014-09-29 Thread Aleksandr Chudnovets
Hello team,

As discussed at the IRC meeting yesterday, I’m glad to share the results
of my testing of the performance impact of PKI token validation.

My research is connected with bp [1] about adding a lightweight session to
mdb to improve overall performance.

For my tests I used PKI tokens with and without the service catalog.
(Actually, PKI tokens are the only option for MagnetoDB, because MagnetoDB
was designed to handle a huge number of requests per second.)

Here are my results:

- for lightweight requests, like list_tables, we can get a 5% - 8%
performance boost using tokens without the service catalog;

- for big and slow requests, like batch_write, the performance boost is smaller;

- disabling keystone support in api-paste gives us a 20% performance boost
for PKI tokens :)

So it can be good practice for MagnetoDB clients to use PKI tokens without
the service catalog.
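
A minimal sketch of how a client might request such a token (assuming the
Keystone deployment supports the Identity v3 "nocatalog" query parameter on
POST /v3/auth/tokens; the endpoint, user and project names below are
placeholders, not MagnetoDB specifics):

    import json
    import requests

    auth_url = "http://keystone.example.com:5000/v3/auth/tokens?nocatalog"
    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "demo",
                        "domain": {"id": "default"},
                        "password": "secret",
                    }
                }
            },
            "scope": {"project": {"name": "demo", "domain": {"id": "default"}}},
        }
    }
    resp = requests.post(auth_url, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"})
    # The token id comes back in a response header, not in the body.
    token = resp.headers["X-Subject-Token"]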

Another conclusion I can make: PKI token validation works fast enough,
and adding an additional session mechanism won't give a big performance boost.

Please share your views.

[1] https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session

Thanks,

Aleksandr Chudnovets
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] No one replying on tempest issue?Please share your experience

2014-09-29 Thread Nikesh Kumar Mahalka
How do I get nova-compute logs in a Juno devstack?
Below are the nova services:
vedams@vedams-compute-fc:/opt/stack/tempest$ ps -aef | grep nova
vedams   15065 14812  0 10:56 pts/10   00:00:52 /usr/bin/python
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
vedams   15077 14811  0 10:56 pts/9    00:02:06 /usr/bin/python
/usr/local/bin/nova-api
vedams   15086 14818  0 10:56 pts/12   00:00:09 /usr/bin/python
/usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
vedams   15095 14836  0 10:56 pts/17   00:00:09 /usr/bin/python
/usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
vedams   15096 14821  0 10:56 pts/13   00:00:09 /usr/bin/python
/usr/local/bin/nova-network --config-file /etc/nova/nova.conf
vedams   15100 14844  0 10:56 pts/18   00:00:00 /usr/bin/python
/usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
vedams   15101 14826  0 10:56 pts/15   00:00:05 /usr/bin/python
/usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
/opt/stack/noVNC
vedams   15103 14814  0 10:56 pts/11   00:02:02 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15104 14823  0 10:56 pts/14   00:00:11 /usr/bin/python
/usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
vedams   15117 14831  0 10:56 pts/16   00:00:00 /usr/bin/python
/usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
vedams   15195 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15196 15103  0 10:56 pts/11   00:00:25 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15197 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15198 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15208 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15209 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15238 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15239 15077  0 10:56 pts/9    00:00:01 /usr/bin/python
/usr/local/bin/nova-api
vedams   15240 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15241 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15248 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15249 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   21850 14712  0 16:16 pts/000:00:00 grep --color=auto nova


Below are the nova log files:
vedams@vedams-compute-fc:/opt/stack/tempest$ ls /opt/stack/logs/screen/screen-n-*
screen-n-api.2014-09-28-101810.log    screen-n-cond.log
screen-n-net.2014-09-28-101810.log    screen-n-obj.log
screen-n-api.log                      screen-n-cpu.2014-09-28-101810.log
screen-n-net.log                      screen-n-sch.2014-09-28-101810.log
screen-n-cauth.2014-09-28-101810.log  screen-n-cpu.log
screen-n-novnc.2014-09-28-101810.log  screen-n-sch.log
screen-n-cauth.log                    screen-n-crt.2014-09-28-101810.log
screen-n-novnc.log                    screen-n-xvnc.2014-09-28-101810.log
screen-n-cond.2014-09-28-101810.log   screen-n-crt.log
screen-n-obj.2014-09-28-101810.log    screen-n-xvnc.log


Below are the nova screen sessions:
6-$(L) n-api  7$(L) n-cpu  8$(L) n-cond  9$(L) n-crt  10$(L) n-net  11$(L)
n-sch  12$(L) n-novnc  13$(L) n-xvnc  14$(L) n-cauth  15$(L) n-obj




Regards
Nikesh


On Tue, Sep 23, 2014 at 3:10 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 Hi,
 I am able to do all volume operations through the dashboard and CLI commands.
 But when I run tempest tests, some tests fail.
 For contributing a cinder volume driver for my client in cinder, do all
 tempest tests have to pass?

 Ex:
 1)
 ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
 fail.

 But when I run individual tests in test_volumes_snapshots, all
 tests pass.

 2)
 ./run_tempest.sh
 tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
 This also fails.



 Regards
 Nikesh

 On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  Sent: Saturday, September 20, 2014 9:49 PM
  To: openst...@lists.openstack.org; OpenStack Development Mailing List
 (not for usage questions)
  Subject: Re: [Openstack] No one replying on tempest issue?Please share
 your experience
 
  Still I did not get any reply.

 Jay has already replied to this mail, please check the nova-compute
 and cinder-volume log as he said[1].

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

  Now i ran below command:
  

Re: [openstack-dev] VPNaaS site to site connection down.

2014-09-29 Thread Paul Michali (pcm)
masoom alam,

It’s been a little while since I’ve used the reference VPN implementation, but 
here are some suggestions/questions…

Can you show the ipsec-site-connection-create command used on each end?
Can you show the topology with IP addresses used (and indicate how the two 
clouds are connected)?
Are you using devstack? Two physical nodes? How are they interconnected?

First thing would be to ensure that you can ping from one host to another over 
the public IPs involved. You can then go to the namespace of the router and see 
if you can ping the public I/F of the other end’s router.

You can look at the screen-q-vpn.log (assuming devstack used) to see if any 
errors during setup.

Note: When I stack, I turn off neutron security groups and then set nova 
security groups to allow SSH and ICMP. I imagine the alternative would be to 
setup neutron security groups to allow these two protocols.

I didn’t quite follow what you meant by “Please note that my two devstack nodes 
are on different public addresses, so scenario is a little different than the 
one described here: 
https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall”. Can you elaborate 
(showing the commands and topology will help)?

Germy,

I have created this BP during Juno (unfortunately no progress on it however), 
regarding being able to see more status information for troubleshooting: 
https://blueprints.launchpad.net/neutron/+spec/l3-svcs-vendor-status-report

It was targeted for vendor implementations, but would include reference 
implementation status too. Right now, if a VPN connection negotiation fails, 
there’s no indication of what went wrong.

Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Sep 29, 2014, at 1:38 AM, masoom alam masoom.a...@gmail.com wrote:

 Hi Germy
 
 We cannot ping the public interface of the 2nd devstack setup (devstack 
 West). From our Cirros instance (first devstack -- devstack east), we can 
 ping our own public IP, but cannot ping the other public IP. I think the problem 
 lies here: if we aren't reaching devstack west, how can we make a VPN 
 connection?
 
 Our topology looks like:
 
 CirrOS ---Qrouter---Public IP -------- Public IP---Qrouter---CirrOS
 \______________________________/       \____________________________/
            devstack EAST                        devstack WEST
 
 
 Also it is important to note that we are not able to ssh to the instance's private 
 IP without 'sudo ip netns exec qrouter-<id> ...', so this means we cannot even ssh 
 with the floating IP.
 
 
 It seems there is a problem in the firewall or iptables. 
 
 Please guide
 
 
 
 On Sunday, September 28, 2014, Germy Lure germy.l...@gmail.com wrote:
 Hi,
 
 masoom:
 I think firstly you can just check whether you can ping from left to right 
 without setting up the VPN connection.
 If that works, then you should check the system logs to confirm the configuration 
 is OK.
 You can use ping and tcpdump to figure out where packets are blocked.
 
 stackers:
 I think we should provide a mechanism to show the cause when a VPN connection is 
 down. At least, we could add an attribute to explain this. Maybe the 
 VPN-incubator project is a chance to do that?
 
 BR,
 Germy
 
 
 On Sat, Sep 27, 2014 at 7:04 PM, masoom alam masoom.a...@gmail.com wrote:
 Hi Every one, 
 
 I am trying to establish the VPN connection by giving the neutron 
 ipsec-site-connection-create.
 
 neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id 
 myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 
 172.24.4.233 --peer-id 172.24.4.233 --peer-cidr 10.2.0.0/24 --psk secret
 
 For the --peer-address I am giving the public interface of the other devstack 
 node. Please note that my two devstack nodes are on different public 
 addresses, so scenario is a little different than the one described here: 
 https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
 
 The --peer-id is the IP address of the Qrouter connected to the public 
 interface. With this configuration, I am not able to bring up the VPN site-to-site 
 connection. Do you think it's a firewall issue? I have disabled both firewalls 
 with 'sudo ufw disable'. Any help in this regard would be appreciated. Am I giving 
 the correct parameters?
 
 Thanks
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setuptools 6.0 ruins the world

2014-09-29 Thread Sean Dague
On 09/29/2014 05:06 AM, Thierry Carrez wrote:
 Sean Dague wrote:
 Setuptools 6.0 was released Friday night. (Side note: as a service to
 others, releasing major version bumps of critical Python software on a
 Friday night should be avoided.)
 
 Since it's hard to prevent upstream from releasing over the weekend,
 could we somehow freeze our PyPI mirrors from Friday noon to Monday noon,
 infra-team time?

Honestly, I'm not sure that would be very helpful. There tend to be
people with one eye open on things over the weekend (like I was this
weekend), and the fact that it was fixed then meant most people never
saw the break. If we did a giant requirements release every Monday
morning, it would also be *far* more difficult to figure out just based
on release dates which upstream dependency probably just killed us.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Limitation of permissions on modification some resources

2014-09-29 Thread Andrey Epifanov

Hi All,

I started working on https://bugs.launchpad.net/neutron/+bug/1339028
and realized that we have the same issue with other connected resources 
in Neutron.


The problem is that we have APIs for the modification of any resource without
limitations. For example, we can modify a router IP, and the VMs connected to
that subnet will never know about it and will lose their default router. The
same situation applies to routes and the IPs of DHCP/DNS ports.

https://bugs.launchpad.net/neutron/+bug/1374398
https://bugs.launchpad.net/neutron/+bug/1267310

So, we need a common approach for resolving these issues.

A solution might be the following:
- To deny any modification of resources that were created and
   configured automatically during usual operations.
- To provide modification permissions only to admin (see the policy
   sketch below).
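
A minimal sketch of what the admin-only variant could look like as a fragment of
Neutron's policy.json (the exact operations to restrict are illustrative and would
need to be agreed on):

    {
        "update_router": "rule:admin_only",
        "update_subnet:host_routes": "rule:admin_only",
        "update_port:fixed_ips": "rule:admin_only"
    }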

What is your opinion?

/Thanks and Best Regards,
Andrey./
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] PKI tokens size performance impact

2014-09-29 Thread Ilya Sviridov
Hello Aleksandr,

Thank you for your efforts and sharing this.

Looking closer at the figures, I can assume that a lightweight session won't
help us a lot, but will introduce additional complexity.

So, I'm marking the BP as Obsolete.

Ilya Sviridov
isviridov @ FreeNode



On Mon, Sep 29, 2014 at 1:21 PM, Aleksandr Chudnovets 
achudnov...@mirantis.com wrote:

 Hello team,

 As it was discussed on IRC meeting yesterday I’m glad to share the results
 of my testing of performance impact of PKI token validation.

 My research is connected with bp [1] about adding lightweight session to
 mdb for improvement overall performance.

 For my tests I used PKI tokens with and without service catalog.

 (Actually, PKI tokens is only option for MagnetoDB, because MagnetoDB

 was designed to handle huge amount of requests per second.)

 Here is my results:

 - for lightweight requests, like list_tables, we can get 5% - 8%
 performance boost using tokens without service catalog;

 - for big and slow requests, like batch_write, performance boost is
 smaller.

 - disabling keystone support in api-paste gives us 20% performance boost
 for PKI tokens :)

 So it can be good practice for MagnetoDB clients to use PKI tokens without

 service catalog.

 Another conclusion I can made, the PKI token validation works fast enough
 and adding additional session mechanism won’t give big performance boost.

 Please share your views.

 [1] https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session

 Thanks,

 Aleksandr Chudnovets

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceph] Why performance of benchmarks with small blocks is extremely small?

2014-09-29 Thread Pasquale Porreca

Hello

I have no experience with Ceph and this specific benchmark tool; anyway 
I have experience with several other performance benchmark tools and 
file systems, and I can say you always get very, very low 
performance results when the file size is too small (i.e. < 1 MB).

My suspicion is that benchmark tools are not reliable for file sizes so 
small, since the time to write is so small that the overhead introduced 
by the test itself is not at all negligible.


I saw that the default object size for rados is 4 MB; did you try your 
test without the option -b 512? I think the results would differ 
by several orders of magnitude.


BR

On 09/27/14 17:14, Timur Nurlygayanov wrote:

Hello all,

I installed OpenStack with Glance + Ceph OSD with replication factor 2, 
and now I can see the write operations are extremely slow.
For example, I can see only 0.04 MB/s write speed when I run rados 
bench with 512b blocks:


rados bench -p test 60 write --no-cleanup -t 1 -b 512

 Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 
0 objects

 Object prefix: benchmark_data_node-17.domain.tld_15862
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1       1        83        82 0.0400341 0.0400391  0.008465 0.0120985
     2       1       169       168 0.0410111 0.0419922  0.080433 0.0118995
     3       1       240       239 0.0388959  0.034668  0.008052 0.0125385
     4       1       356       355 0.0433309 0.0566406   0.00837 0.0112662
     5       1       472       471 0.0459919 0.0566406  0.008343 0.0106034
     6       1       550       549 0.0446735 0.0380859  0.036639 0.0108791
     7       1       581       580 0.0404538 0.0151367  0.008614 0.0120654



My test environment configuration:
Hardware servers with 1Gb network interfaces, 64Gb RAM and 16 CPU 
cores per node, HDDs WDC WD5003ABYX-01WERA0.
OpenStack with 1 controller, 1 compute and 2 ceph nodes (ceph on 
separate nodes).

CentOS 6.5, kernel 2.6.32-431.el6.x86_64.

I tested several config options for optimizations, like in 
/etc/ceph/ceph.conf:


[default]
...
osd_pool_default_pg_num = 1024
osd_pool_default_pgp_num = 1024
osd_pool_default_flag_hashpspool = true
...
[osd]
osd recovery max active = 1
osd max backfills = 1
filestore max sync interval = 30
filestore min sync interval = 29
filestore flusher = false
filestore queue max ops = 1
filestore op threads = 16
osd op threads = 16
...
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true

and in /etc/cinder/cinder.conf:

[DEFAULT]
volume_tmp_dir=/tmp

but as a result performance increased only by ~30%, and that does not 
look like a huge success.


Non-default mount options and TCP optimization increase the speed by 
about 1%:


[root@node-17 ~]# mount | grep ceph
/dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs 
(rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)


[root@node-17 ~]# cat /etc/sysctl.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1


Do we have other ways to significantly improve CEPH storage performance?
Any feedback and comments are welcome!

Thank you!


--

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread Gary Kotton
Hi,
Is the process documented anywhere? That is, if, for example, I had a
spec approved in J and its code did not land, how do we go about kicking
the tires for K on that spec?
Thanks
Gary

On 9/29/14, 1:07 PM, John Garbutt j...@johngarbutt.com wrote:

On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com wrote:
 On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt j...@johngarbutt.com
wrote:
 On 25 September 2014 14:10, Daniel P. Berrange berra...@redhat.com
 wrote:
  The proposal is to keep kilo-1, kilo-2 much the same as juno.
Except,
  we work harder on getting people to buy into the priorities that are
  set, and actively provoke more debate on their correctness, and we
  reduce the bar for what needs a blueprint.
 
  We can't have 50 high priority blueprints, it doesn't mean anything,
  right? We need to trim the list down to a manageable number, based
on
  the agreed project priorities. Thats all I mean by slots / runway at
  this point.
 
  I would suggest we don't try to rank high/medium/low as that is
  too coarse, but rather just an ordered priority list. Then you
  would not be in the situation of having 50 high blueprints. We
  would instead naturally just start at the highest priority and
  work downwards.

 OK. I guess I was fixating about fitting things into launchpad.

 I guess having both might be what happens.

   The runways
   idea is just going to make me less efficient at reviewing. So I'm
   very much against it as an idea.
 
  This proposal is different to the runways idea, although it
certainly
  borrows aspects of it. I just don't understand how this proposal has
  all the same issues?
 
 
  The key to the kilo-3 proposal, is about getting better at saying
no,
  this blueprint isn't very likely to make kilo.
 
  If we focus on a smaller number of blueprints to review, we should
be
  able to get a greater percentage of those fully completed.
 
  I am just using slots/runway-like ideas to help pick the high
priority
  blueprints we should concentrate on, during that final milestone.
  Rather than keeping the distraction of 15 or so low priority
  blueprints, with those poor submitters jamming up the check queue,
and
  constantly rebasing, and having to deal with the odd stray review
  comment they might get lucky enough to get.
 
  Maybe you think this bit is overkill, and thats fine. But I still
  think we need a way to stop wasting so much of peoples time on
things
  that will not make it.
 
  The high priority blueprints are going to end up being mostly the big
  scope changes which take a lot of time to review & probably go through
  many iterations. The low priority blueprints are going to end up
being
  the small things that don't consume significant resource to review
and
  are easy to deal with in the time we're waiting for the big items to
  go through rebases or whatever. So what I don't like about the
runways
  slots idea is that removes the ability to be agile and take the
  initiative
  to review & approve the low priority stuff that would otherwise never
  make it through.

 The idea is more around concentrating on the *same* list of things.

 Certainly we need to avoid the priority inversion of concentrating
 only on the big things.

 Its also why I suggested that for kilo-1 and kilo-2, we allow any
 blueprint to merge, and only restrict it to a specific list in kilo-3,
 the idea being to maximise the number of things that get completed,
 rather than merging some half blueprints, but not getting to the good
 bits.


 Do we have to decide this now, or can we see how project priorities go
and
 reevaluate half way through Kilo-2?

What we need to decide is not to use the runway idea for kilo-1 and
kilo-2. At this point, I guess we have (passively) decided that now.

I like the idea of waiting till mid kilo-2. Thats around Spec freeze,
which is handy.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-29 Thread Robert Li (baoli)
Hi Xu Han,

My question is how the CLI user interface would look in order to distinguish 
between v4 and v6 DHCP options.

Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng 
pengxu...@gmail.commailto:pengxu...@gmail.com wrote:

Mark's suggestion works for me as well. If no one objects, I am going to start 
the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:

On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:

Currently the extra_dhcp_opts has the following API interface on a port:

{
    "port": {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of the DHCPv6 function for IPv6 subnets, we found this 
format doesn't work anymore because a port can have both an IPv4 and an IPv6 
address. So we need to find a new way to specify extra_dhcp_opts for DHCPv4 and 
DHCPv6, respectively. (https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
compatibility, no prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
    {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
    {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
    {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
compatibility, both old format and new format are acceptable, but old format 
means IPv4 dhcp opts.

"extra_dhcp_opts": {
    "ipv4": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
    ],
    "ipv6": [
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
    ]
}

The pro of Option 1 is that there is no need to change the API structure; we only 
need to add validation and parsing of opt_name. The con of Option 1 is that the user 
needs to input a prefix for every opt_name, which can be error prone. The pro of 
Option 2 is that it's clearer than Option 1. The con is that we need to check two 
formats for backward compatibility.

We discussed this in the IPv6 sub-team meeting and we think Option 2 is preferred. 
Can I also get the community's feedback on which one is preferred, or any other 
comments?


I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.
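
For illustration, a payload under that suggestion might look something like this 
(just a sketch of the idea, not an implemented or agreed-on format):

    "extra_dhcp_opts": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server", "version": 4},
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server", "version": 6}
    ]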

mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] PKI tokens size performance impact

2014-09-29 Thread Illia Khudoshyn
Nice to hear that we won't implement our own 'enlightening' layer.
My main concern would be much more code to look through for security
issues.

Thanks,
Illia.

On Mon, Sep 29, 2014 at 2:36 PM, Ilya Sviridov isviri...@mirantis.com
wrote:

 Hello Aleksandr,

 Thank you for your efforts and sharing this.

 Looking closer to figures, I can assume that lightweight session won't
 help us a lot, but will introduce additional complexity.

 So, I'm marking the BP as Obsolete.

 Ilya Sviridov
 isviridov @ FreeNode



 On Mon, Sep 29, 2014 at 1:21 PM, Aleksandr Chudnovets 
 achudnov...@mirantis.com wrote:

 Hello team,

 As it was discussed on IRC meeting yesterday I’m glad to share the
 results of my testing of performance impact of PKI token validation.

 My research is connected with bp [1] about adding lightweight session to
 mdb for improvement overall performance.

 For my tests I used PKI tokens with and without service catalog.

 (Actually, PKI tokens is only option for MagnetoDB, because MagnetoDB

 was designed to handle huge amount of requests per second.)

 Here is my results:

 - for lightweight requests, like list_tables, we can get 5% - 8%
 performance boost using tokens without service catalog;

 - for big and slow requests, like batch_write, performance boost is
 smaller.

 - disabling keystone support in api-paste gives us 20% performance boost
 for PKI tokens :)

 So it can be good practice for MagnetoDB clients to use PKI tokens without

 service catalog.

 Another conclusion I can made, the PKI token validation works fast enough
 and adding additional session mechanism won’t give big performance boost.

 Please share your views.

 [1] https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session

 Thanks,

 Aleksandr Chudnovets





-- 

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com http://www.mirantis.ru/

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

2014-09-29 Thread Doug Hellmann

On Sep 28, 2014, at 5:00 PM, Robert Collins robe...@robertcollins.net wrote:

 On 27 September 2014 10:07, Robert Collins robe...@robertcollins.net wrote:
 
 TripleO has been running pip releases of clients in servers from the
 get go, and I've lost track of the number of bad dependency bugs we've
 encounted. We've hit many more of those than bad releases that crater
 the world [though those have happened].
 
 And yes, alpha dependencies are a mistake - if we depend on it, it's a
 release. Quod erat demonstrandum.
 
 Doug has pointed out to me that this is perhaps a little shallow :).
 
 So let me try and sketch a little deeper.
 
 TripleO's upstream CI looks similar (if more aggressive) to the CD
 deploy process some of our major contributors are using: we take
 trunk, wrap it up into a production config and deploy it. There are
 two major differences vis-a-vis what HP Cloud or Rackspace are doing.
 Firstly, they're running off a fork which they sync at high frequency
 - which exists to deal with the second thing, which is that they run
 deployment specific tests against their tree before deployment, and
 when that fails, they fix it simultaneously upstream and in the fork.
 
 So, more or less *every commit* in Nova, Cinder, etc is going into a
 production release in a matter of days. From our hands to our users.
 
 What TripleO does - and I don't know the exact detail for Rackspace or
 HP Cloud to say if this is the same) - is that we're driven by
 requirements.txt files + what happens when things break.
 
 So if requirements.txt specifies a thing thats not on PyPI, that
 causes things to break : we -really- don't like absolute URLs in
 requirements.txt, and we -really- don't like having stale requirements
 in requirements.txt.
 
 The current situation where (most) everything we consume is on PyPI is
 a massive step forwards. Yay.
 
 But if requirements.txt specifies a library thats not released, thats
 a little odd. It's odd because we're saying that each commit of the
 API servers is at release quality (but we don't release because for
 these big projects a release is much more than just quality - its
 ecosystem, its documentation, its deployment support)...
 
 The other way things can be odd is if requirements.txt is stale: e.g.
 say cinderclient adds an API to make life easier in Nova. If Nova
 wants to use that, they could just use it - it will pass the
 integrated gate which tests tip vs tip. But it will fail if any user
 just 'pip install' installs Nova. It will fail for developers too. So
 I think its far better to publish that cinderclient on PyPI so that
 things do work.
 
 And here is where the discussion about alphas comes in.
 
 Should we publish that cinderclient as a release, or as a pre-release preview?
 
 If we publish it as a pre-release preview, we're saying that we
 reserve the right to change the API anyway we want. We're not saying
 that its not release quality: because the API servers can't be at
 release quality if they depend on non-release quality components.
 
 And yet, we /cannot/ change that new cinderclient API anyway we want.
 Deployers have already deployed the API server that depends on it;
 they are pulling from pypi: if we push up a breaking cinderclient
 alpha-2 or whatever, it will break our deployers.
 
 That's why I say that if we depend on it, it's released: because in all
 ways that matter, the constraints that one expects to see on a full
 release, apply to *every commit* we do within the transitive
 dependency tree that is the integrated gate.
 
 And this isn't because we test together - its because the top level
 drivers for that gate are the quality of the API server trees, which
 are CD deployed. The testing strategy doesn't matter so much compared
 to that basic starting point.
 
 To summarise the requirements I believe we have are:
 - every commit of API servers is production quality and release-ready
 [for existing features]
 - we don't break public APIs in projects at the granularity of consumption
    - That's per commit for API servers, and
 per-whatever-pip-grabs-when-installing-api-servers for library
 projects
  (e.g. per-release)
 - requirements.txt should be pip installable at all times
 - and faithfully represent actual dependencies: nothing should break
 if one pip installs e.g. nova from git
 
 Requirements we *do not have*:
 - new features within a project have to be production quality on day one
   That is, projects are allowed to have code that isn't yet
 supported, though for instance we don't have a good way to annotate
 that X 'will be a public API but is not yet'.
 
 
 So the alpha thing is a mistake IMO not because we're pushing to PyPI,
 but because we're calling them alphas, which I don't believe
 represents the actual state of the code nor our ability to alter
 things.
 
 Right now we test tip vs tip, so some of this is hidden until it
 breaks TripleO [which is more often than we'd like!] but the
 incremental, don't break things 

Re: [openstack-dev] [neutron] Limitation of permissions on modification some resources

2014-09-29 Thread Mark McClain

On Sep 29, 2014, at 7:09 AM, Andrey Epifanov aepifa...@mirantis.com wrote:

 Hi All,
 
 I started working on the the https://bugs.launchpad.net/neutron/+bug/1339028
 and realized that we have the same issue with other connected resources in 
 Neutron.

This is a bug in how we’re implementing the logic to manage routes on the router 
instance in the l3-agent implementation.  There are other implementations of 
the logical router that do not need this restriction. 

 
 The problem is that we have API for the modification of any resources without
 limitations, for example, we can modify Router IP and connected to this subnet
 VMs never will know about it and lose the default router. The same situation
 with routes and IP for DHCP/DNS ports.
  
 https://bugs.launchpad.net/neutron/+bug/1374398
 https://bugs.launchpad.net/neutron/+bug/1267310

I don’t see any of these as a bug.  If a tenant wants to make changes to their 
network (even ill-advised ones), we should allow it.  Restricting these API 
operations to admins means we’re inhibiting users from making changes that 
could be regular maintenance operations of a tenant.

mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Jay Pipes
On Sun, Sep 28, 2014 at 5:56 PM, Nader Lahouti nader.laho...@gmail.com
wrote:

 Hi All,

  I am seeing a 'Too many connections' error in nova api and cinder when
  installing openstack using the latest...
  The error happens when launching a couple of VMs (in this test around 5 VMs).

 Here are the logs when error happens:

 (1) nova-api logs/traceback:

 http://paste.openstack.org/show/116414/

 (2) cinder api logs/traceback:

 http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/

 (3) Stats of some connections:
 http://paste.openstack.org/show/116425/

 As it shown in (3) the issue is not seen with icehouse release.

 Can somebody please let me know if it is a known issue?


Hi Nader,

Would you mind pastebin'ing the contents of:

 SHOW FULL PROCESSLIST\G

when executed from the mysql command line client?

That will help to show us which threads are stuck executing what in MySQL.

Best,
-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Mike Bayer

On Sep 28, 2014, at 5:56 PM, Nader Lahouti nader.laho...@gmail.com wrote:

 Hi All,
 
  I am seeing a 'Too many connections' error in nova api and cinder when 
  installing openstack using the latest...
  The error happens when launching a couple of VMs (in this test around 5 VMs).
 
 Here are the logs when error happens:
 
 (1) nova-api logs/traceback:
 http://paste.openstack.org/show/116414/
 
 (2) cinder api logs/traceback:
 http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/
 
 (3) Stats of some connections:
 http://paste.openstack.org/show/116425/
 
 As it shown in (3) the issue is not seen with icehouse release.
 
 Can somebody please let me know if it is a known issue?

I’ve not been alerted to this before; the stats in (3) look pretty alarming.

Anyone else seeing things like this?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Fate of xmlutils

2014-09-29 Thread Julien Danjou
Hi,

I was looking at xmlutils today, and I took a look at the history of
this file that seems to come from a CVE almost 2 years ago.

What is surprising is that, unless I missed something, the only user of
that lib is Nova. Other projects such as Keystone or Neutron implemented
things in a different way.

It seems that Python fixed that issue with 2 modules released on PyPI:

  https://pypi.python.org/pypi/defusedxml
  https://pypi.python.org/pypi/defusedexpat

I'm no XML expert, and I've only a shallow understanding of the issue,
but I wonder if we should put some effort into dropping xmlutils and our
custom XML fixes and using these 2 modules instead.
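
For example, swapping the parser over would presumably look something like this
(a sketch, assuming the defusedxml package is installed):

    # Same API as xml.etree.ElementTree, but maliciously nested entity
    # payloads raise an exception instead of exhausting memory.
    from defusedxml import ElementTree as ET

    doc = ET.fromstring("<root><child>data</child></root>")
    print(doc.find("child").text)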

Hint appreciated.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-29 Thread Doug Hellmann

On Sep 29, 2014, at 12:03 PM, Julien Danjou jul...@danjou.info wrote:

 Hi,
 
 I was looking at xmlutils today, and I took a look at the history of
 this file that seems to come from a CVE almost 2 years ago.
 
 What is surprising is that, unless I missed something, the only user of
 that lib is Nova. Other projects such as Keystone or Neutron implemented
 things in a different way.
 
 It seems that Python fixed that issue with 2 modules released on PyPI:
 
  https://pypi.python.org/pypi/defusedxml
  https://pypi.python.org/pypi/defusedexpat
 
 I'm no XML expert, and I've only a shallow understanding of the issue,
 but I wonder if we should put some efforts to drop xmlutils and our
 custom XML fixes to used instead these 2 modules.
 
 Hint appreciated.

I thought those fixes were also eventually rolled into language releases, and 
we had planned to stop worrying about using xmlutils after we drop python 2.6 
support for master. Am I mistaken about those being rolled into the release?

The defused* packages may have been created/released at the same time as, or 
after, the module in the incubator. If we do need to continue carrying support 
for the fix I agree that moving to the 3rd party libraries would make sense.

Doug


 
 -- 
 Julien Danjou
 /* Free Software hacker
   http://julien.danjou.info */
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Julien Danjou
On Mon, Sep 29 2014, Jay Pipes wrote:

 What if we wrote a token driver in Keystone that uses Swift for backend 
 storage?

Yay! I already wrote a PoC to that:

  https://review.openstack.org/#/c/86016/

It has been rejected because this patch didn't use the generic approach
that Keystone tries to use to all the storage backends. Otherwise, I've
tested it a bit with devstack and it worked fine.

I didn't continue to work on it by lack of time, and because the effort
to use Swift as a generic cache mechanism seemed a bit tricky to me.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Jay Pipes

On 09/29/2014 12:15 PM, Julien Danjou wrote:

On Mon, Sep 29 2014, Jay Pipes wrote:


What if we wrote a token driver in Keystone that uses Swift for backend storage?


Yay! I already wrote a PoC to that:

   https://review.openstack.org/#/c/86016/


Sweet! :)


It has been rejected because this patch didn't use the generic approach
that Keystone tries to use to all the storage backends. Otherwise, I've
tested it a bit with devstack and it worked fine.

I didn't continue to work on it by lack of time, and because the effort
to use Swift as a generic cache mechanism seemed a bit tricky to me.


Any objection to me taking up the work? Was there any associated 
blueprint for it?


Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Jay Pipes

Hey Stackers,

So, I had a thought this morning (uh-oh, I know...).

What if we wrote a token driver in Keystone that uses Swift for backend 
storage?


I have long been an advocate of the memcache token driver versus the SQL 
driver for performance reasons. However, the problem with the memcache 
token driver is that if you want to run multiple OpenStack regions, you 
could share the identity data in Keystone using replicated database 
technology (mysql galera/PXC, pgpool II, or even standard mysql 
master/slave), but each region needs to have its own memcache service 
for tokens. This means that tokens are not shared across regions, which 
means that users have to log in separately to each region's dashboard.


I personally considered this a tradeoff worth accepting. But then, 
today, I thought... what about storing tokens in a globally-distributed 
Swift cluster? That would take care of the replication needs 
automatically, since Swift would do the needful. And, add to that, Swift 
was designed for storing lots of small objects, which tokens are...


Thoughts? I think it would be a cool dogfooding effort if nothing else, 
and give users yet another choice in how they handle multi-region tokens.
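
For illustration only, the storage side could be little more than a handful
of python-swiftclient calls. This is a rough sketch: the container name, key
layout and function names are assumptions, not the actual Keystone token
driver interface, and expiry/revocation handling is deliberately left out.

    # Sketch only: the Swift calls such a token backend might make.
    # Container name and token layout are illustrative assumptions.
    import json

    from swiftclient import client as swift_client

    conn = swift_client.Connection(authurl='http://keystone:5000/v2.0',
                                   user='service:swift',
                                   key='secret',
                                   auth_version='2.0')

    def store_token(token_id, token_data):
        # One small object per token; Swift takes care of replication.
        conn.put_object('tokens', token_id, json.dumps(token_data))

    def get_token(token_id):
        headers, body = conn.get_object('tokens', token_id)
        return json.loads(body)

    def delete_token(token_id):
        conn.delete_object('tokens', token_id)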


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Nader Lahouti
Hi Jay,

Thanks for your reply.

I'm not able to use mysql command line.
$ mysql
ERROR 1040 (HY000): Too many connections
$
Is there any other way to collect the information?


Thanks,
Nader.


On Mon, Sep 29, 2014 at 8:42 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, Sep 28, 2014 at 5:56 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:

 Hi All,

 I am seeing a 'Too many connections' error in nova api and cinder when
 installing openstack using the latest...
 The error happens when launching a couple of VMs (in this test around 5
 VMs).

 Here are the logs when error happens:

 (1) nova-api logs/traceback:

 http://paste.openstack.org/show/116414/

 (2) cinder api logs/traceback:

 http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/

 (3) Stats of some connections:
 http://paste.openstack.org/show/116425/

 As it shown in (3) the issue is not seen with icehouse release.

 Can somebody please let me know if it is a known issue?


 Hi Nader,

 Would you mind pastebin'ing the contents of:

  SHOW FULL PROCESSLIST\G

 when executed from the mysql command line client?

 That will help to show us which threads are stuck executing what in MySQL.

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Anita Kuno for project-config-core

2014-09-29 Thread Zaro
+1

On Sun, Sep 28, 2014 at 4:25 PM, Joshua Hesketh 
joshua.hesk...@rackspace.com wrote:

 Absolutely. +1.

 Rackspace Australia


 On 9/27/14 1:34 AM, James E. Blair wrote:

 I'm pleased to nominate Anita Kuno to the project-config core team.

 The project-config repo is a constituent of the Infrastructure Program
 and has a core team structured to be a superset of infra-core with
 additional reviewers who specialize in the area.

 Anita has been reviewing new projects in the config repo for some time
 and I have been treating her approval as required for a while.  She has
 an excellent grasp of the requirements and process for creating new
 projects and is very helpful to the people proposing them (who are often
 creating their first commit to any OpenStack repository).

 She also did most of the work in actually creating the project-config
 repo from the config repo.

 Please respond with support or concerns and if the consensus is in
 favor, we will add her next week.

 Thanks,

 Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Amrith Kumar
What do you see if you run

mysqladmin processlist

I hope it doesn’t also give the same error (but that may be what you see).

-amrith

From: Nader Lahouti [mailto:nader.laho...@gmail.com]
Sent: Monday, September 29, 2014 12:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many 
connections') None None

Hi Jay,

Thanks for your reply.

I'm not able to use mysql command line.
$ mysql
ERROR 1040 (HY000): Too many connections
$
Is there any other way to collect the information?


Thanks,
Nader.


On Mon, Sep 29, 2014 at 8:42 AM, Jay Pipes 
jaypi...@gmail.commailto:jaypi...@gmail.com wrote:
On Sun, Sep 28, 2014 at 5:56 PM, Nader Lahouti 
nader.laho...@gmail.commailto:nader.laho...@gmail.com wrote:
Hi All,

I am seeing a 'Too many connections' error in nova api and cinder when
installing openstack using the latest...
The error happens when launching a couple of VMs (in this test around 5 VMs).

Here are the logs when error happens:

(1) nova-api logs/traceback:

http://paste.openstack.org/show/116414/

(2) cinder api logs/traceback:

http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/

(3) Stats of some connections:
http://paste.openstack.org/show/116425/

As it shown in (3) the issue is not seen with icehouse release.

Can somebody please let me know if it is a known issue?

Hi Nader,

Would you mind pastebin'ing the contents of:

 SHOW FULL PROCESSLIST\G

when executed from the mysql command line client?

That will help to show us which threads are stuck executing what in MySQL.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Mike Bayer




On Sep 29, 2014, at 12:31 PM, Nader Lahouti nader.laho...@gmail.com wrote:

 Hi Jay,
 
 Thanks for your reply. 
 
 I'm not able to use mysql command line.
 $ mysql
 ERROR 1040 (HY000): Too many connections
 $
 Is there any other way to collect the information?


you can try stopping everything, getting on the command line *first* and 
leaving it open, then rerunning your whole environment, so that you’ve reserved 
that spot at least to do testing queries.




 
 
 Thanks,
 Nader.
 
 
 On Mon, Sep 29, 2014 at 8:42 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Sun, Sep 28, 2014 at 5:56 PM, Nader Lahouti nader.laho...@gmail.com 
 wrote:
 Hi All,
 
 I am seeing a 'Too many connections' error in nova api and cinder when
 installing openstack using the latest...
 The error happens when launching a couple of VMs (in this test around 5 VMs).
 
 Here are the logs when error happens:
 
 (1) nova-api logs/traceback:
 http://paste.openstack.org/show/116414/
 
 (2) cinder api logs/traceback:
 http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/
 
 (3) Stats of some connections:
 http://paste.openstack.org/show/116425/
 
 As it shown in (3) the issue is not seen with icehouse release.
 
 Can somebody please let me know if it is a known issue?
 
 Hi Nader,
 
 Would you mind pastebin'ing the contents of:
 
  SHOW FULL PROCESSLIST\G
 
 when executed from the mysql command line client? 
 
 That will help to show us which threads are stuck executing what in MySQL.
 
 Best,
 -jay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Lance Bragstad
On Mon, Sep 29, 2014 at 11:25 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/29/2014 12:15 PM, Julien Danjou wrote:

 On Mon, Sep 29 2014, Jay Pipes wrote:

  What if we wrote a token driver in Keystone that uses Swift for backend
 storage?


 Yay! I already wrote a PoC to that:

https://review.openstack.org/#/c/86016/


 Sweet! :)

  It has been rejected because this patch didn't use the generic approach
 that Keystone tries to use to all the storage backends. Otherwise, I've
 tested it a bit with devstack and it worked fine.

 I didn't continue to work on it by lack of time, and because the effort
 to use Swift as a generic cache mechanism seemed a bit tricky to me.


 Any objection to me taking up the work? Was there any associated blueprint
 for it?


To the best of my knowledge, I haven't seen a spec or blueprint proposed
for a swift token backend.



 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Jay Pipes

On 09/29/2014 12:31 PM, Nader Lahouti wrote:

Hi Jay,

Thanks for your reply.

I'm not able to use mysql command line.
$ mysql
ERROR 1040 (HY000): Too many connections
$
Is there any other way to collect the information?


This should allow you to change the max connections property without 
restarting the server, and then you should be able to connect to MySQL 
and run SHOW FULL PROCESSLIST for us:


http://www.percona.com/blog/2010/03/23/too-many-connections-no-problem/
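
As a side note (not from that post): MySQL reserves one extra connection
slot for a SUPER-privileged user, so if you can still get a root session
open, the limit can also be raised at runtime from SQL, e.g.:

    -- raise the limit on the running server (lost on restart unless it
    -- is also set in my.cnf):
    SET GLOBAL max_connections = 2000;

    -- then dump the open threads:
    SHOW FULL PROCESSLIST;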

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Julien Danjou
On Mon, Sep 29 2014, Jay Pipes wrote:

 Any objection to me taking up the work? Was there any associated blueprint for
 it?

As said on IRC, go ahead. There's no blueprint associated AFAIK. :)

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Nader Lahouti
Hi Jay,

I logged in first and then recreated the problem; here is the log:
http://paste.openstack.org/show/116776/

Thanks,
Nader.


On Mon, Sep 29, 2014 at 8:42 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Sun, Sep 28, 2014 at 5:56 PM, Nader Lahouti nader.laho...@gmail.com
 wrote:

 Hi All,

 I am seeing a 'Too many connections' error in nova api and cinder when
 installing openstack using the latest...
 The error happens when launching a couple of VMs (in this test around 5
 VMs).

 Here are the logs when error happens:

 (1) nova-api logs/traceback:

 http://paste.openstack.org/show/116414/

 (2) cinder api logs/traceback:

 http://paste.openstack.org/show/hbaomc5IVS3mig8z2BWq/

 (3) Stats of some connections:
 http://paste.openstack.org/show/116425/

 As it shown in (3) the issue is not seen with icehouse release.

 Can somebody please let me know if it is a known issue?


 Hi Nader,

 Would you mind pastebin'ing the contents of:

  SHOW FULL PROCESSLIST\G

 when executed from the mysql command line client?

 That will help to show us which threads are stuck executing what in MySQL.

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday September 30th at 19:00 UTC

2014-09-29 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday September 30th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes 09/29/2014

2014-09-29 Thread Nikolay Makhotkin
Thanks everyone for participating the meeting today!

In case you’d like to see what we discussed, here’s

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-29-16.01.html
Meeting full log:
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-29-16.01.log.html

The next meeting is scheduled for Oct 6.

-- 
Best Regards,
Nikolay
@ Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Jay Pipes

On 09/29/2014 12:48 PM, Nader Lahouti wrote:

Hi Jay,

I login first and the recreated the problem and here is the log:
http://paste.openstack.org/show/116776/


OK. Looks like there isn't anything wrong with your setup. I'm guessing 
you have set up Keystone to run in Apache with 10 worker processes, and 
you have the workers config option setting in nova.conf, neutron.conf 
and all the other project configuration files set to 0, which will 
trigger a number of worker processes equal to the number of CPU cores on 
your box, which I'm guessing from looking at your SHOW FULL PROCESSLIST 
is around 24-32 cores.


Solution: either lower the workers configuration option from 0 to 
something like 12, or increase the max_connections setting in your 
my.cnf to something that can handle the worker processes from all the 
OpenStack services (I'd recommend something like 2000).
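
Something like the following in my.cnf would do it (exact file location
varies by distro; the value is just the suggestion above):

    [mysqld]
    # enough headroom for all of the OpenStack API/worker processes on a
    # 32-core all-in-one node
    max_connections = 2000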


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-29 Thread Joshua Harlow
Do we know that the users (keystone, neutron...) aren't vulnerable?

From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
seems like we would likely still have issues if custom implementations are 
being used/created. Perhaps we should just use the defusedxml libraries until 
proven otherwise (better to be safe than sorry).

On Sep 29, 2014, at 9:03 AM, Julien Danjou jul...@danjou.info wrote:

 Hi,
 
 I was looking at xmlutils today, and I took a look at the history of
 this file that seems to come from a CVE almost 2 years ago.
 
 What is surprising is that, unless I missed something, the only user of
 that lib is Nova. Other projects such as Keystone or Neutron implemented
 things in a different way.
 
 It seems that Python fixed that issue with 2 modules released on PyPI:
 
  https://pypi.python.org/pypi/defusedxml
  https://pypi.python.org/pypi/defusedexpat
 
 I'm no XML expert, and I've only a shallow understanding of the issue,
 but I wonder if we should put some efforts to drop xmlutils and our
 custom XML fixes to used instead these 2 modules.
 
 Hint appreciated.
 
 -- 
 Julien Danjou
 /* Free Software hacker
   http://julien.danjou.info */
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Nader Lahouti
On Mon, Sep 29, 2014 at 9:58 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/29/2014 12:48 PM, Nader Lahouti wrote:

 Hi Jay,

 I login first and the recreated the problem and here is the log:
 http://paste.openstack.org/show/116776/


 OK. Looks like there isn't anything wrong with your setup. I'm guessing
 you have set up Keystone to run in Apache with 10 worker processes, and you
 have the workers config option setting in nova.conf, neutron.conf and all
 the other project configuration files set to 0, which will trigger a number
 of worker processes equal to the number of CPU cores on your box, which I'm
 guessing from looking at your SHOW FULL PROCESSLIST is around 24-32 cores.


I haven't modified the default values in *.conf files (I'm using devstack
for installation) for the workers setting.
How do I check that keystone is using apache with 10 worker processes?
And the number of CPU cores on my box is 32.



 Solution: either lower the workers configuration option from 0 to
 something like 12, or increase the max_connections setting in your my.cnf
 to something that can handle the worker processes from all the OpenStack
 services (I'd recommend something like 2000).


Just for clarification, regarding the setting of the workers option in the
*.conf file:
For neutron:
# api_workers = 0

For nova, what option should be set?
I see these options:
metadata_workers=None    (IntOpt) Number of workers for metadata service

I'll try your solution and let you know the result.



 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Amrith Kumar
Yes, looks like MySQL was just configured with too low a max-connections value.

-amrith

| -Original Message-
| From: Jay Pipes [mailto:jaypi...@gmail.com]
| Sent: Monday, September 29, 2014 12:58 PM
| To: openstack-dev@lists.openstack.org
| Subject: Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too
| many connections') None None
| 
| On 09/29/2014 12:48 PM, Nader Lahouti wrote:
|  Hi Jay,
| 
|  I login first and the recreated the problem and here is the log:
|  http://paste.openstack.org/show/116776/
| 
| OK. Looks like there isn't anything wrong with your setup. I'm guessing
| you have set up Keystone to run in Apache with 10 worker processes, and
| you have the workers config option setting in nova.conf, neutron.conf and
| all the other project configuration files set to 0, which will trigger a
| number of worker processes equal to the number of CPU cores on your box,
| which I'm guessing from looking at your SHOW FULL PROCESSLIST is around
| 24-32 cores.
| 
| Solution: either lower the workers configuration option from 0 to
| something like 12, or increase the max_connections setting in your my.cnf
| to something that can handle the worker processes from all the OpenStack
| services (I'd recommend something like 2000).
| 
| Best,
| -jay
| 
| ___
| OpenStack-dev mailing list
| OpenStack-dev@lists.openstack.org
| http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Mike Bayer
what’s spooking me is the original paste at 
http://paste.openstack.org/show/116425/ which showed:

icehouse:

Fri Sep 26 17:00:50 PDT 2014
Number of open TCP:3306 - 58
Number of open TCP:3306 nova-api - 5
Number of open TCP:3306 mysqld - 29
Number of open TCP:8774 - 10
Number of nova-api - 14


fresh startup with juno:

Fri Sep 26 09:42:58 PDT 2014
Number of open TCP:3306 - 152
Number of open TCP:3306 nova-api - 7
Number of open TCP:3306 mysqld - 76
Number of open TCP:8774 - 66
Number of nova-api - 99


does that seem right that an upgrade from icehouse would cause there to be 99 
nova-api procs at startup where there were only 14 before?




On Sep 29, 2014, at 1:34 PM, Amrith Kumar amr...@tesora.com wrote:

 Yes, looks like MySQL was just configured with too low a max-connections 
 value.
 
 -amrith
 
 | -Original Message-
 | From: Jay Pipes [mailto:jaypi...@gmail.com]
 | Sent: Monday, September 29, 2014 12:58 PM
 | To: openstack-dev@lists.openstack.org
 | Subject: Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too
 | many connections') None None
 | 
 | On 09/29/2014 12:48 PM, Nader Lahouti wrote:
 |  Hi Jay,
 | 
 |  I login first and the recreated the problem and here is the log:
 |  http://paste.openstack.org/show/116776/
 | 
 | OK. Looks like there isn't anything wrong with your setup. I'm guessing
 | you have set up Keystone to run in Apache with 10 worker processes, and
 | you have the workers config option setting in nova.conf, neutron.conf and
 | all the other project configuration files set to 0, which will trigger a
 | number of worker processes equal to the number of CPU cores on your box,
 | which I'm guessing from looking at your SHOW FULL PROCESSLIST is around
 | 24-32 cores.
 | 
 | Solution: either lower the workers configuration option from 0 to
 | something like 12, or increase the max_connections setting in your my.cnf
 | to something that can handle the worker processes from all the OpenStack
 | services (I'd recommend something like 2000).
 | 
 | Best,
 | -jay
 | 
 | ___
 | OpenStack-dev mailing list
 | OpenStack-dev@lists.openstack.org
 | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Nader Lahouti
more on the previous reply (sorry, I hit the send button by accident)

For nova workers option:
osapi_compute_workers=None

And for keystone, what option needs to be set?
# The number of worker processes to serve the public WSGI
# application. Defaults to number of CPUs (minimum of 2).
# (integer value)
#public_workers=None

# The number of worker processes to serve the admin WSGI
# application. Defaults to number of CPUs (minimum of 2).
# (integer value)
#admin_workers=None


I'll try your solution and let you know the result.

Thanks,
Nader.

On Mon, Sep 29, 2014 at 10:34 AM, Nader Lahouti nader.laho...@gmail.com
wrote:



 On Mon, Sep 29, 2014 at 9:58 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/29/2014 12:48 PM, Nader Lahouti wrote:

 Hi Jay,

 I login first and the recreated the problem and here is the log:
 http://paste.openstack.org/show/116776/


 OK. Looks like there isn't anything wrong with your setup. I'm guessing
 you have set up Keystone to run in Apache with 10 worker processes, and you
 have the workers config option setting in nova.conf, neutron.conf and all
 the other project configuration files set to 0, which will trigger a number
 of worker processes equal to the number of CPU cores on your box, which I'm
 guessing from looking at your SHOW FULL PROCESSLIST is around 24-32 cores.


 I haven't modified the default values in *.conf files (I'm using devstack
 for installation) for workers setting.
 How  to check that keystone is using apache with 10 worker process?
 And the number of CPU cores on my box is 32.



 Solution: either lower the workers configuration option from 0 to
 something like 12, or increase the max_connections setting in your my.cnf
 to something that can handle the worker processes from all the OpenStack
 services (I'd recommend something like 2000).


 Just for clarification, regarding the setting of workers option in the
 *.conf file:
 Fore neutron:
 # api_workers = 0

 For nova, what option should be set?
 I see  these options:
 metadata_workers=None(IntOpt)Number of workers for metadata service

 I'll try your solution and let you know the result.



 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Jay Pipes

On 09/29/2014 01:34 PM, Nader Lahouti wrote:

On Mon, Sep 29, 2014 at 9:58 AM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 09/29/2014 12:48 PM, Nader Lahouti wrote:

Hi Jay,

I login first and the recreated the problem and here is the log:
http://paste.openstack.org/__show/116776/
http://paste.openstack.org/show/116776/


OK. Looks like there isn't anything wrong with your setup. I'm
guessing you have set up Keystone to run in Apache with 10 worker
processes, and you have the workers config option setting in
nova.conf, neutron.conf and all the other project configuration
files set to 0, which will trigger a number of worker processes
equal to the number of CPU cores on your box, which I'm guessing
from looking at your SHOW FULL PROCESSLIST is around 24-32 cores.


I haven't modified the default values in *.conf files (I'm using
devstack for installation) for workers setting.
How  to check that keystone is using apache with 10 worker process?
And the number of CPU cores on my box is 32.


OK, as I suspected.


Solution: either lower the workers configuration option from 0 to
something like 12, or increase the max_connections setting in your
my.cnf to something that can handle the worker processes from all
the OpenStack services (I'd recommend something like 2000).


Just for clarification, regarding the setting of workers option in the
*.conf file:
Fore neutron:
# api_workers = 0

For nova, what option should be set?
I see  these options:
metadata_workers=None   (IntOpt)Number of workers for metadata service


You can keep all those options as-is (the default of 0) and just set 
your my.cnf max_connections variable to 2000.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Dean Troyer
On Mon, Sep 29, 2014 at 12:34 PM, Nader Lahouti nader.laho...@gmail.com
wrote:

 I haven't modified the default values in *.conf files (I'm using devstack
 for installation) for workers setting.
 How  to check that keystone is using apache with 10 worker process?
 And the number of CPU cores on my box is 32.


About two weeks ago we changed DevStack to default to setting the worker
count to NPROC / 2, with a minimum of 2.  As of today, this will set
workers for ceilometer (non-mod_wsgi), cinder, glance, keystone
(non-mod_wsgi), nova, swift and trove.

This is controlled by API_WORKERS in local.conf.
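
For example (the value here is purely illustrative):

    [[local|localrc]]
    # cap the per-service worker count instead of the NPROC/2 default
    API_WORKERS=4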

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][Cinder] Request for voting permission with Pure Storage CI account

2014-09-29 Thread Patrick East
Hi All,

I am writing to request voting permissions as per the instructions for
third party CI systems[1]. The account email is cinder...@purestorage.com.


The system has been operational and stable for a little while now
building/commenting on openstack/cinder gerrit. You can view its comment
history on reviews here:
https://review.openstack.org/#/q/cinder.ci%2540purestorage.com,n,z

Please take a look and let me know if there are any issues. I will be the
primary point of contact, but the alias openstack-...@purestorage.com is
the best way for a quick response from our team. For immediate issues I can
be reached in IRC as patrickeast

I look forward to your feedback!

[1]
http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system


-Patrick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cross-project-work] What about adding cross-project-spec repo?

2014-09-29 Thread Doug Hellmann

On Sep 29, 2014, at 5:51 AM, Thierry Carrez thie...@openstack.org wrote:

 Boris Pavlovic wrote:
 it goes without saying that working on cross-project stuff in OpenStack
 is quite a hard task.
 
 Because it's always hard to align something between a lot of people from
 different projects. And when a topic starts getting too HOT, the discussion
 goes in the wrong direction and the attempt to make a cross-project change
 fails; as a result a maybe not *ideal* but *good enough* change in OpenStack
 will be abandoned.
 
 Another issue that we have is specs. Projects ask for a spec for a change
 in their project, and in the case of cross-project stuff you need to write
 N similar specs (one for every project). That is really hard to manage, and
 as a result you have N different specs describing similar stuff.
 
 To make this process more formal, clear and simple, let's reuse the specs
 process but do it in one repo: /openstack/cross-project-specs.
 
 It means that every cross-project topic (unification of python clients,
 unification of logging, profiling, debugging api, and a bunch of others)
 will be discussed in one single place.
 
 I think it's a good idea, as long as we truly limit it to cross-project
 specs, that is, to concepts that may apply to every project. The
 examples you mention are good ones. As a counterexample, if we have to
 sketch a plan to solve communication between Nova and Neutron, I don't
 think it would belong to that repository (it should live in whatever
 project would have the most work to do).
 
 Process description of cross-project-specs:
 
  * PTL - person that manages the core team members list and puts workflow +1
on accepted specs
  * Every project has 1 core position (stackforge projects are included)
  * Cores are chosen by the project team; their task is to advocate the
project team's opinion
  * No more vetoes and -2 votes
  * If 75% of cores +1 a spec it's accepted. It means that all projects have
to accept this change.
  * Accepted specs get high priority blueprints in all projects
 
 So I'm not sure you can force all projects to accept the change.
 Ideally, projects should see the benefits of alignment and adopt the
 common spec. In our recent discussions we are also going towards more
 freedom to projects, rather than less : imposing common specs to
 stackforge projects sounds like a step backwards there.
 
 Finally, I see some overlap with Oslo, which generally ends up
 implementing most of the common policy into libraries it encourages
 usage of. Therefore I'm not sure having a cross-project PTL makes
 sense, as he would be stuck between the Oslo PTL and the Technical
 Committee.

There is some overlap with Oslo, and we would want to be involved in the 
discussions — especially if the plan includes any code to land in an Oslo 
library. I have so far been resisting the idea that oslo-specs is the best home 
for this, mostly because I didn’t want us to assume everything related to 
cross-project work is also related to Oslo work.

That said, our approval process looks for consensus among all of the 
participants on the review, in addition to Oslo cores, so we can use oslo-specs 
and continue incorporating the +1/-1 votes from everyone. One of the key 
challenges we’ve had is signaling buy-in for cross-project work so having some 
sort of broader review process would be good, especially to help ensure that 
all interested parties have a chance to participate in the review.

OTOH, a special repo with different voting permission settings also makes 
sense. I don’t have any good suggestions for who would decide when the voting 
on a proposal had reached consensus, or what to do if no consensus emerges. 
Having the TC manage that seems logical, but impractical. Maybe a person 
designated by the TC would oversee it?

 
 With such simple rules we will simplify cross project work: 
 
 1) Fair rules for all projects, as every project has 1 core that has 1
 vote. 
 
 A project is hardly a metric for fairness. Some projects are 50 times
 bigger than others. What is a project in your mind ? A code repository
 ? Or more like a program (a collection of code repositories being worked
 on by the same team ?)
 
 So in summary, yes we need a place to discuss truly cross-project specs,
 but I think it can't force decisions to all projects (especially
 stackforge ones), and it can live within a larger-scope Oslo effort
 and/or the Technical Committee.
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceph] Why performance of benchmarks with small blocks is extremely small?

2014-09-29 Thread Clay Gerrard
I also have limited experience with Ceph and rados bench - but it looks
like you're setting the number of worker threads to only 1?  (-t 1)

I think the default is 16, and most storage distributed storage systems
designed for concurrency are going to do a bit better if you exercise more
concurrent workers... so you might try turning that up until you see some
diminishing returns.  Be sure to watch for resource contention on the load
generating server.

-Clay

On Mon, Sep 29, 2014 at 4:49 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

  Hello

 I have no experience with Ceph and this specific benchmark tool, but I
 have experience with several other performance benchmark tools and file
 systems, and I can say it always happens that performance results are very,
 very low when the file size is too small (i.e. less than 1 MB).

 My suspicion is that benchmark tools are not reliable for file sizes this
 small, since the time to write is so small that the overhead introduced by
 the test itself is not at all negligible.

 I saw that the default object size for rados is 4 MB; did you try your
 test without the option -b 512? I think the results should differ
 by several orders of magnitude.

 BR


 On 09/27/14 17:14, Timur Nurlygayanov wrote:

   Hello all,

  I installed OpenStack with Glance + Ceph OSD with replication factor 2
 and now I can see the write operations are extremely slow.
 For example, I can see only 0.04 MB/s write speed when I run rados bench
 with 512b blocks:

  rados bench -p test 60 write --no-cleanup -t 1 -b 512

  Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 0
 objects
  Object prefix: benchmark_data_node-17.domain.tld_15862
   sec Cur ops   started  finished  avg MB/s  cur MB/s   last lat    avg lat
     0       0         0         0         0         0          -          0
     1       1        83        82 0.0400341 0.0400391   0.008465  0.0120985
     2       1       169       168 0.0410111 0.0419922   0.080433  0.0118995
     3       1       240       239 0.0388959  0.034668   0.008052  0.0125385
     4       1       356       355 0.0433309 0.0566406    0.00837  0.0112662
     5       1       472       471 0.0459919 0.0566406   0.008343  0.0106034
     6       1       550       549 0.0446735 0.0380859   0.036639  0.0108791
     7       1       581       580 0.0404538 0.0151367   0.008614  0.0120654


 My test environment configuration:
  Hardware servers with 1Gb network interfaces, 64Gb RAM and 16 CPU cores
 per node, HDDs WDC WD5003ABYX-01WERA0.
  OpenStack with 1 controller, 1 compute and 2 ceph nodes (ceph on separate
 nodes).
 CentOS 6.5, kernel 2.6.32-431.el6.x86_64.

  I tested several config options for optimizations, like in
 /etc/ceph/ceph.conf:

  [default]
 ...
 osd_pool_default_pg_num = 1024
 osd_pool_default_pgp_num = 1024
 osd_pool_default_flag_hashpspool = true
 ...
 [osd]
 osd recovery max active = 1
 osd max backfills = 1
 filestore max sync interval = 30
 filestore min sync interval = 29
 filestore flusher = false
 filestore queue max ops = 1
 filestore op threads = 16
 osd op threads = 16
 ...
 [client]
 rbd_cache = true
 rbd_cache_writethrough_until_flush = true

  and in /etc/cinder/cinder.conf:

  [DEFAULT]
  volume_tmp_dir=/tmp

 but as a result performance increased only by ~30%, which does not look
 like a huge success.

  Non-default mount options and TCP optimization increase the speed by
 about 1%:

 [root@node-17 ~]# mount | grep ceph
 /dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs
 (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)

 [root@node-17 ~]# cat /etc/sysctl.conf
 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216
 net.ipv4.tcp_window_scaling = 1
 net.ipv4.tcp_timestamps = 1
 net.ipv4.tcp_sack = 1


 Do we have other ways to significantly improve CEPH storage performance?
  Any feedback and comments are welcome!

  Thank you!


  --

  Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc


 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Nader Lahouti
Jay,

I increased max_connections to 2000 and so far don't see any issue.
After launching 20 VMs the number of connections as follows:

Mon Sep 29 05:05:29 PDT 2014
Number of open TCP:3306 - 326
Number of open TCP:3306 nova-api - 34
Number of open TCP:3306 mysqld - 163
Number of open TCP:8774 - 66
Number of nova-api - 98

It seems the default max_connections for Juno is not enough and should be
changed as you suggested. But the question is: what is a correct/reliable
value?


Thanks for the help.
Nader.



On Mon, Sep 29, 2014 at 10:41 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/29/2014 01:34 PM, Nader Lahouti wrote:

 On Mon, Sep 29, 2014 at 9:58 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:

 On 09/29/2014 12:48 PM, Nader Lahouti wrote:

 Hi Jay,

 I login first and the recreated the problem and here is the log:
 http://paste.openstack.org/__show/116776/
 http://paste.openstack.org/show/116776/


 OK. Looks like there isn't anything wrong with your setup. I'm
 guessing you have set up Keystone to run in Apache with 10 worker
 processes, and you have the workers config option setting in
 nova.conf, neutron.conf and all the other project configuration
 files set to 0, which will trigger a number of worker processes
 equal to the number of CPU cores on your box, which I'm guessing
 from looking at your SHOW FULL PROCESSLIST is around 24-32 cores.


 I haven't modified the default values in *.conf files (I'm using
 devstack for installation) for workers setting.
 How  to check that keystone is using apache with 10 worker process?
 And the number of CPU cores on my box is 32.


 OK, as I suspected.

  Solution: either lower the workers configuration option from 0 to
 something like 12, or increase the max_connections setting in your
 my.cnf to something that can handle the worker processes from all
 the OpenStack services (I'd recommend something like 2000).


 Just for clarification, regarding the setting of workers option in the
 *.conf file:
 Fore neutron:
 # api_workers = 0

 For nova, what option should be set?
 I see  these options:
 metadata_workers=None   (IntOpt)Number of workers for metadata service


 You can keep all those options as-is (the default of 0) and just set your
 my.cnf max_connections variable to 2000.


 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Morgan Fainberg
On Monday, September 29, 2014, Julien Danjou jul...@danjou.info wrote:

 On Mon, Sep 29 2014, Jay Pipes wrote:

  Any objection to me taking up the work? Was there any associated
 blueprint for
  it?

 As said on IRC, go ahead. There's no blueprint associated AFAIK. :)

 --
 Julien Danjou
 /* Free Software hacker
http://julien.danjou.info */


There is of course a benefit to having more options for deployers to
utilize for the token persistence, especially if we solve complaints about
the current token systems in the process.

A couple thoughts:

I highly recommend making this a dogpile backend (KVS) in keystone. It
should help eliminate a bunch of the harder bits of code since it will use
the KVS housekeeping code for the indexes (same way it works in memcache
or the in-memory dict backend).

The big issue you're going to run into is locking. The indexes need to have
a distributed lock that guarantees that each index is read/updated/released
atomically (similar to the SQL transaction). The way memcache and redis
handle this is by trying to add a record that is based on the index record
name and if that add fails (already exists) we assume the referenced
record is locked. We automatically timeout the lock after a period of time.
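
As a rough sketch of that add-as-lock pattern with python-memcached (key
names and timeouts are made up for illustration; this is not Keystone's
actual KVS code):

    # Illustrative only: lock an index record by add()'ing a marker key.
    import time

    import memcache

    client = memcache.Client(['127.0.0.1:11211'])
    LOCK_TIMEOUT = 10  # seconds before a stale lock expires on its own

    def lock_index(index_key):
        lock_key = 'lock-%s' % index_key
        deadline = time.time() + LOCK_TIMEOUT
        while time.time() < deadline:
            # add() only succeeds if the key does not already exist, so a
            # successful add means we hold the lock; the TTL ensures a
            # crashed holder cannot wedge the index forever.
            if client.add(lock_key, '1', time=LOCK_TIMEOUT):
                return lock_key
            time.sleep(0.05)
        raise RuntimeError('could not lock %s' % index_key)

    def unlock_index(lock_key):
        client.delete(lock_key)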

I am also curious what the performance profile of swift-as-a-token-backend
will look like. You are adding a large amount of overhead to the request
(in theory) as the swift call is another http request. And depending on how
swift is running, it may require more token request work (tokens, especially
v2.0 since the token id is in the body, must be considered secure data, and
therefore should not be directly accessible to the world at large), keeping
security concerns in mind.

There are some other cool ideas this can lead to, but let's talk about the
reality and limitations (and how we solve those issues) for our current use
cases before we dive into the next steps.

The first thing we will need is (of course) a spec for keystone. Let's try
and get the spec proposed soon so we can ensure that we get anything that
needs to land in kilo through the door earlier in the cycle.

Cheers,
Morgan

Sent via mobile
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unusual behavior on committing a change

2014-09-29 Thread Sharan Kumar M
Hi all,

I am getting some weird things when I try to submit a patch for review. Maybe
I am doing something wrong.

I cloned the repo, setup git review and setup my gerrit username in git
config. I updated an image in the docs/source/images directory and
committed. After I commit, I get the git status report as

Your branch and 'origin/master' have diverged,
and have 1 and 1 different commit each, respectively.

When I see the log, the last commit is authored by
jenk...@review.openstack.org with the same commit message as mine. The
author name is supposed to be mine right? The last time I submitted for
review, the commit message had my name as author.

Could someone help me out with this issue?
Thanks,
Sharan Kumar M
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSSN 0029] Neutron FWaaS rules lack port restrictions when using protocol 'any'

2014-09-29 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

An update for this Security Note has been published to clarify that
Neutron's FWaaS extension is still experimental.  The updated version
of OSSN-0029 is available here:

  https://wiki.openstack.org/wiki/OSSN/OSSN-0029

Thanks,
- -NGK


On 09/24/2014 09:58 AM, Nathan Kinder wrote:
 Neutron FWaaS rules lack port restrictions when using protocol
 'any' ---
 
 ### Summary ### A bug in the Neutron FWaaS (Firewall as a Service)
 code results in iptables rules being generated that do not reflect
 desired port restrictions. This behaviour is triggered when a
 protocol other than 'udp' or 'tcp' is specified, e.g. 'any'.
 
 The scope of this bug is limited to Neutron FWaaS and systems built
 upon it. Security Groups are not affected.
 
 ### Affected Services / Software ### Neutron FWaaS, Grizzly,
 Havana, Icehouse
 
 ### Discussion ### When specifying firewall rules using Neutron
 that should match multiple protocols, it is convenient to specify a
 protocol of 'any' in place of defining multiple specific rules.
 
 For example, in order to allow DNS (TCP and UDP) requests, the
 following rule might be defined:
 
 neutron firewall-rule-create --protocol any --destination-port 53
 \ --action allow
 
 However, this rule results in the generation of iptables firewall
 rules that do not reflect the desired port restriction. An example
 generated iptables rule might look like the following:
 
 -A neutron-l3-agent-iv441c58eb2 -j ACCEPT
 
 Note that the restriction on port 53 is missing. As a result, the 
 generated rule will match and accept any traffic being processed by
 the rule chain to which it belongs.
 
 Additionally, iptables arranges sets of rules into chains and
 processes packets entering a chain one rule at a time. Rule
 matching stops at the first matched exit condition (e.g. accept or
 drop). Since, the generated rule above will match and accept all
 packets, it will effectively short circuit any filtering rules
 lower down in the chain. Consequently, this can break other
 firewall rules regardless of the protocol specified when defining
 those rules with Neutron. They simply need to appear later in the
 generated iptables rule chain.
 
 This bug is triggered when any protocol other than 'tcp' or 'udp'
 is specified in conjunction with a source or destination port
 number.
 
 ### Recommended Actions ### Avoid the use of 'any' when specifying
 the protocol for Neutron FWaaS rules. Instead, create multiple
 rules for both 'tcp' and 'udp' protocols independently.
 
 A fix has been submitted to Juno.
 
 ### Contacts / References ### This OSSN :
 https://wiki.openstack.org/wiki/OSSN/OSSN-0029 Original LaunchPad
 Bug : https://bugs.launchpad.net/neutron/+bug/1365961 OpenStack
 Security ML : openstack-secur...@lists.openstack.org OpenStack
 Security Group : https://launchpad.net/~openstack-ossg
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUKbWMAAoJEJa+6E7Ri+EVhn4H/3i6o52SNZQsE6eofCWJag1h
GK4rECMuCw1TTe1a8mT0zrA9vigxZFlnpjfb/mXfFprQG4365VuqxxOFN1gimN+Q
xG8oFrm32RhEGi45FAbJr5g00LbemfNCVrJO+GJMSRjv3WykClwXz2HN13OAvejO
KkTaTeLKJQoPjLP1qeAb0Ihce8wR+pgAE+2g0MuaWBbZUIVZXK5CVvfU6NpBzat7
QoFLI9G+mUAHN8faGm15NvslkH+s5wzm9PDhE0ASOUzjKcaSFgLBcNiMTG2dCvjX
7SLrsYQErQop3LwQOpVlfnmhgTA96m5QsN3DaF1RAeAKSf7WU2yeYSujGtBdpes=
=tPf/
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unusual behavior on committing a change

2014-09-29 Thread Clark Boylan
On Mon, Sep 29, 2014, at 12:34 PM, Sharan Kumar M wrote:
 Hi all,
 
 I am getting some weird things when I try to submit a patch for review.
 May
 be I am doing something wrong.
 
 I cloned the repo, setup git review and setup my gerrit username in git
 config. I updated an image in the docs/source/images directory and
 committed. After I commit, I get the git status report as
 
 Your branch and 'origin/master' have diverged,
 and have 1 and 1 different commit each, respectively.
 
 When I see the log, the last commit is authored by
 jenk...@review.openstack.org with the same commit message as mine. The
 author name is supposed to be mine right? The last time I submitted for
 review, the commit message had my name as author.
 
 Could someone help me out with this issue?
 Thanks,
 Sharan Kumar M
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think you may have amended the last commit that Gerrit merged rather
than creating a new commit. Did you run `git commit --amend` on top of
the last upstream commit?

If so, the easiest way to fix this is probably to `git checkout master &&
git reset --hard origin/master`, then make your changes and finally `git
commit` without the --amend.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.3 release preparation

2014-09-29 Thread Sergey Lukjanov
Reminder - we'll have the 2014.1.3 this week.

On Tue, Sep 23, 2014 at 1:59 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi sahara folks,

 if you'd like to propose something to the stable/icehouse branch to be
 included into the upcoming 2014.1.3 release, please, do it asap. If
 you think that something should be back ported, please, contact me or
 other core team members.

 The 2014.1.3 release is planned for October 2.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] (OperationalError) (1040, 'Too many connections') None None

2014-09-29 Thread Jay Pipes

On 09/29/2014 03:12 PM, Nader Lahouti wrote:

Jay,

I increased the max_connection to 2000 and so far don't see any issue.
After launching 20 VMs the number of connections as follows:

Mon Sep 29 05:05:29 PDT 2014
Number of open TCP:3306 - 326
Number of open TCP:3306 nova-api - 34
Number of open TCP:3306 mysqld - 163
Number of open TCP:8774 - 66
Number of nova-api - 98

It seems the default max_connection for Juno in not enough and should be
changed as you suggested. But the question is what is correct/reliable
value?


max_connections is not an OpenStack configuration option (therefore
there isn't a default max_connections for Juno); it is a MySQL-specific
option.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

2014-09-29 Thread Robert Collins
On 30 September 2014 03:10, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 28, 2014, at 5:00 PM, Robert Collins robe...@robertcollins.net wrote:

 As far as I know, the client libraries aren’t being released as alphas. The 
 Oslo libraries are, but they aren’t “public” in the same sense — they’re an 
 internal part of OpenStack, not something a user of a cloud is going to be 
 interacting with. The parts that affect the deployers directly are usually 
 limited to configuration options, and we have a strict deprecation policy for 
 dealing with those, even within the development cycle.

I'm now really confused. oslo.config for instance - it's depended on by
the client libraries. Our strict backwards compatibility guarantees
for the clients are going to be transitive across our dependencies,
no?

 We do reserve the right to change APIs for new features being added to the 
 Oslo libraries during a development cycle. Because we want to support CD, and 
 because of the way we gate test, those changes have to be able to roll out in 
 a backwards-compatible way. (THIS is why the incubator is important for API 
 evolution, by the way, because it mean the API of a module can change and not 
 break any updates in any consuming project or CD environment.) Even if we 
 change the way we gate test the libraries, we would still want to allow for 
 backwards-compatibility, but still only within a development cycle. We do not 
 want to support every API variation of every library for all time. If we have 
 to tweak something, we try to get the consuming projects updated within a 
 cycle so the old variation of the new feature can be removed.

If we are only backwards compatible within a development cycle, we
can't use the new version of e.g. oslo.db with the stable icehouse API
servers. That means that distributors can't just distribute oslo.db...
they have to have very fixed sets of packages lockstepped together,
which seems unpleasant and fragile. Isn't that what the incubator was
for: that we release things from it once we're *ready* to do backwards
compatibility?

 Now, we’re not perfect and sometimes this doesn’t happen exactly as planned, 
 but the intent is there.

 So I think we are actually doing all of the things you are asking us to do, 
 with the exception of using the word “alpha” in the release version, and I’ve 
 already given the technical reasons for doing that.

As far as pip goes, you may not know, but tox defaults to pip --pre,
which means anyone using tox, like us all here, will be pulling the
alphas down by default: so impacting folk doing e.g. devtest on
icehouse. So I don't think the hack is working as intended.
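
For anyone hitting this today, the default can be overridden per repo in
tox.ini, something along these lines:

    [testenv]
    # drop tox's default --pre so pre-release (alpha) dependencies are
    # not pulled in
    install_command = pip install -U {opts} {packages}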

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for testing: 2014.1.3 candidate tarballs

2014-09-29 Thread Adam Gandelman
Hi all,

We are scheduled to publish 2014.1.3 on Thurs Oct. 2nd for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron, Nova
and Trove.

The list of issues fixed so far can be seen here:

  https://launchpad.net/ceilometer/+milestone/2014.1.3
  https://launchpad.net/cinder/+milestone/2014.1.3
  https://launchpad.net/heat/+milestone/2014.1.3
  https://launchpad.net/horizon/+milestone/2014.1.3
  https://launchpad.net/keystone/+milestone/2014.1.3
  https://launchpad.net/neutron/+milestone/2014.1.3
  https://launchpad.net/nova/+milestone/2014.1.3
  https://launchpad.net/trove/+milestone/2014.1.3

We'd appreciate anyone who could test the candidate 2014.1.3 tarballs, which
include all changes aside from a few trickling through the gate and any
pending
freeze exceptions:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-icehouse.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-icehouse.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-icehouse.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-icehouse.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-icehouse.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-icehouse.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-icehouse.tar.gz
  http://tarballs.openstack.org/trove/trove-stable-icehouse.tar.gz

Thanks,
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread Joe Gordon
On Mon, Sep 29, 2014 at 5:23 AM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 Is the process documented anywhere? That is, if say for example I had a
 spec approved in J and its code did not land, how do we go about kicking
 the tires for K on that spec.


Specs will need to be re-submitted once we open up the specs repo for Kilo.
The Kilo template will be changing a little bit, so specs will need a
little bit of reworking. But I expect the process to approve previously
approved specs to be quicker.


 Thanks
 Gary

 On 9/29/14, 1:07 PM, John Garbutt j...@johngarbutt.com wrote:

 On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com wrote:
  On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt j...@johngarbutt.com
 wrote:
  On 25 September 2014 14:10, Daniel P. Berrange berra...@redhat.com
  wrote:
   The proposal is to keep kilo-1, kilo-2 much the same as juno.
 Except,
   we work harder on getting people to buy into the priorities that are
   set, and actively provoke more debate on their correctness, and we
   reduce the bar for what needs a blueprint.
  
   We can't have 50 high priority blueprints, it doesn't mean anything,
   right? We need to trim the list down to a manageable number, based
 on
   the agreed project priorities. Thats all I mean by slots / runway at
   this point.
  
   I would suggest we don't try to rank high/medium/low as that is
   too coarse, but rather just an ordered priority list. Then you
   would not be in the situation of having 50 high blueprints. We
   would instead naturally just start at the highest priority and
   work downwards.
 
  OK. I guess I was fixating about fitting things into launchpad.
 
  I guess having both might be what happens.
 
The runways
idea is just going to make me less efficient at reviewing. So I'm
very much against it as an idea.
  
   This proposal is different to the runways idea, although it
 certainly
   borrows aspects of it. I just don't understand how this proposal has
   all the same issues?
  
  
   The key to the kilo-3 proposal, is about getting better at saying
 no,
   this blueprint isn't very likely to make kilo.
  
   If we focus on a smaller number of blueprints to review, we should
 be
   able to get a greater percentage of those fully completed.
  
   I am just using slots/runway-like ideas to help pick the high
 priority
   blueprints we should concentrate on, during that final milestone.
   Rather than keeping the distraction of 15 or so low priority
   blueprints, with those poor submitters jamming up the check queue,
 and
   constantly rebasing, and having to deal with the odd stray review
   comment they might get lucky enough to get.
  
   Maybe you think this bit is overkill, and thats fine. But I still
   think we need a way to stop wasting so much of peoples time on
 things
   that will not make it.
  
   The high priority blueprints are going to end up being mostly the big
   scope changes which take alot of time to review  probably go through
   many iterations. The low priority blueprints are going to end up
 being
   the small things that don't consume significant resource to review
 and
   are easy to deal with in the time we're waiting for the big items to
   go through rebases or whatever. So what I don't like about the
 runways
   slots idea is that removes the ability to be agile and take the
   initiative
   to review  approve the low priority stuff that would otherwise never
   make it through.
 
  The idea is more around concentrating on the *same* list of things.
 
  Certainly we need to avoid the priority inversion of concentrating
  only on the big things.
 
  Its also why I suggested that for kilo-1 and kilo-2, we allow any
  blueprint to merge, and only restrict it to a specific list in kilo-3,
  the idea being to maximise the number of things that get completed,
  rather than merging some half blueprints, but not getting to the good
  bits.
 
 
  Do we have to decide this now, or can we see how project priorities go
 and
  reevaluate half way through Kilo-2?
 
 What we need to decide is not to use the runway idea for kilo-1 and
 kilo-2. At this point, I guess we have (passively) decided that now.
 
 I like the idea of waiting till mid kilo-2. Thats around Spec freeze,
 which is handy.
 
 Thanks,
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What's holding nova development back?

2014-09-29 Thread Joe Gordon
On Wed, Sep 17, 2014 at 8:03 AM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 9/16/2014 1:01 PM, Joe Gordon wrote:


 On Sep 15, 2014 8:31 PM, Jay Pipes jaypi...@gmail.com wrote:
  
   On 09/15/2014 08:07 PM, Jeremy Stanley wrote:
  
   On 2014-09-15 17:59:10 -0400 (-0400), Jay Pipes wrote:
   [...]
  
   Sometimes it's pretty hard to determine whether something in the
   E-R check page is due to something in the infra scripts, some
   transient issue in the upstream CI platform (or part of it), or
   actually a bug in one or more of the OpenStack projects.
  
   [...]
  
   Sounds like an NP-complete problem, but if you manage to solve it
   let me know and I'll turn it into the first line of triage for Infra
   bugs. ;)
  
  
   LOL, thanks for making me take the last hour reading Wikipedia pages
 about computational complexity theory! :P
  
   No, in all seriousness, I wasn't actually asking anyone to boil the
 ocean, mathematically. I think doing a couple of things: just making the
 categorization more obvious (a UI thing, really) and doing some
 (hopefully simple?) inspection of some control group of patches that we
 know do not introduce any code changes themselves and comparing to
 another group of patches that we know *do* introduce code changes to
 Nova, and then seeing if there is a set of E-R issues that consistently
 appears in *both* groups. That set of E-R issues has a higher likelihood
 of not being due to Nova, right?

 We use launchpad's affected projects listings on the elastic recheck
 page to say what may be causing the bug.  Tagging projects to bugs is a
 manual process, but one that works pretty well.

 UI: The elastic recheck UI definitely could use some improvements. I am
 very poor at writing UIs, so patches welcome!

  
   OK, so perhaps it's not the most scientific or well-thought out plan,
 but hey, it's a spark for thought... ;)
  
   Best,
   -jay
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I'm not great with UIs either but would a dropdown of the affected
 projects be helpful and then people can filter on their favorite project
 and then the page is sorted by top offenders as we have today?

 There are times when the top bugs are infra issues (pip timeouts, for
 example) so you have to scroll a ways before finding something for your
 project (nova isn't the only one).



I think that would be helpful.




 --

 Thanks,

 Matt Riedemann



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setuptools 6.0 ruins the world

2014-09-29 Thread Dolph Mathews
On Mon, Sep 29, 2014 at 6:01 AM, Sean Dague s...@dague.net wrote:

 On 09/29/2014 05:06 AM, Thierry Carrez wrote:
  Sean Dague wrote:
  Setuptools 6.0 was released Friday night. (Side note: as a service to
  others releasing major software bumps on critical python software on a
  Friday night should be avoided.)
 
  Since it's hard to prevent upstream from releasing over the weekends,
  could we somehow freeze our PyPI mirrors from Friday noon to Monday noon
  infra-team time ?

 Honestly, I'm not sure that would be very helpful. There tend to be
 people with one eye open on things over the weekend (like I was this
 weekend), and the fact that it was fixed then meant most people never
 saw the break. If we did a giant requirements release every monday
 morning it would also be *far* more difficult to figure out just based
 on release dates which upstream dependency probably just killed us.


What about a continuous delay? Like:

- never mirror a package until it's at least 72 hours old
- never mirror a package if it's not the latest release

If a broken release is produced upstream, developers might be able to
detect it before the gate consumes it. Further, if upstream produces a
fixed release within 72 hours, the second rule would mean that infra
doesn't consume the broken one, and will wait until the new one is 72 hours
old (and we don't have to blacklist it in requirements.txt just for our own
sake).
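
To make that concrete, here is a rough sketch of the age check (purely
illustrative; it ignores how our actual mirroring software selects packages,
and the endpoint and field names are just assumptions about the PyPI JSON
API):

from datetime import datetime, timedelta

import requests

MIN_AGE = timedelta(hours=72)

def old_enough(package, version):
    # True only if every file of this release was uploaded at least 72h ago.
    resp = requests.get('https://pypi.python.org/pypi/%s/json' % package)
    resp.raise_for_status()
    files = resp.json()['releases'].get(version, [])
    if not files:
        return False  # unknown version: don't mirror it yet
    now = datetime.utcnow()
    return all(
        now - datetime.strptime(f['upload_time'], '%Y-%m-%dT%H:%M:%S')
        >= MIN_AGE
        for f in files)

The "only mirror the latest release" rule would be a similar check against
the version ordering in the same response.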


 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-09-29 Thread Nicholas Chase
As you know, we're always looking for ways for people to be able to 
contribute to Docs, but we do understand that there's a certain amount 
of pain involved in dealing with Docbook.  So to try and make this 
process easier, we're going to try an experiment.


What we've put together is a system where you can update a wiki with 
links to content in whatever form you've got it -- gist on github, wiki 
page, blog post, whatever -- and we have a dedicated resource that will 
turn it into actual documentation, in Docbook. If you want to be added 
as a co-author on the patch, make sure to provide us the email address 
you used to become a Foundation member.


Because we know that the networking documentation needs particular 
attention, we're starting there.  We have a Networking Guide, from which 
we will ultimately pull information to improve the networking section of 
the admin guide.  The preliminary Table of Contents is here: 
https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the 
instructions for contributing are as follows:


1. Pick an existing topic or create a new topic. For new topics, we're
   primarily interested in deployment scenarios.
2. Develop content (text and/or diagrams) in a format that supports at
   least basic markup (e.g., titles, paragraphs, lists, etc.).
3. Provide a link to the content (e.g., gist on github.com, wiki page,
   blog post, etc.) under the associated topic.
4. Send e-mail to reviewers network...@openstacknow.com.
5. A writer turns the content into an actual patch, with tracking bug,
   and docs reviewers (and the original author, we would hope) make
   sure it gets reviewed and merged.


Please let us know if you have any questions/comments.  Thanks!

  Nick
--
Nick Chase
1-650-567-5640
Technical Marketing Manager, Mirantis
Editor, OpenStack:Now
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Rally]Rally exception

2014-09-29 Thread Harshil Shah (harsshah)
I ran into the below Rally exception while trying to run a Rally scenario.

 u'Traceback (most recent call last):\n  File 
/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/runners/base.py, 
line 73, in _run_scenario_once\nmethod_name)(**kwargs) or scenario_output\n 
 File 
/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/vm_perf.py,
 line 139, in boot_runperf_delete\nself.server.dispose()\n  File 
/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/IperfInstance.py,
 line 68, in dispose\nInstance.dispose(self)\n  File 
/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/instance.py,
 line 96, in dispose\nself.ssh_instance.close()\n  File 
/home/localadmin/openstack/cvg_rally/rally/rally/sshutils.py, line 136, in 
close\nself._client.close()\nAttributeError: \'bool\' object has no 
attribute \'close\'\n'],

Looking at the code, this is how close() is written.
= rally/sshutils.py
135 def close(self):
136 self._client.close()
137 self._client = False

 85  def __init__(self, user, host, port=22, pkey=None,
 86  key_filename=None, password=None):
 87 Initialize SSH client.
 88
 89 :param user: ssh username
 90 :param host: hostname or ip address of remote ssh server
 91 :param port: remote ssh port
 92 :param pkey: RSA or DSS private key string or file object
 93 :param key_filename: private key filename
 94 :param password: password
 95
 96 
 97
 98 self.user = user
 99 self.host = host
100 self.port = port
101 self.pkey = self._get_pkey(pkey) if pkey else None
102 self.password = password
103 self.key_filename = key_filename
104 self._client = False

==

This object _client is initialized as a boolean, and in the above code a
method is invoked on it. Is this intentional? If not, is this a known issue?
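
For what it's worth, a guard along these lines would avoid the
AttributeError when close() is called before a client was ever created
(illustrative only, not necessarily the right fix for Rally):

    def close(self):
        # Only close if a client was actually created; _client starts as False.
        if self._client:
            self._client.close()
        self._client = False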

Thanks,
Harshil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] PTL Non-Candidacy + Nomination

2014-09-29 Thread Peter Balland
It has been an honor to serve as the interim PTL for the Congress project 
during Juno.  Due to other commitments I have during the Kilo timeframe, I feel 
I would not be able to commit the time needed by the growing project.  In my 
place, I would like to nominate Tim Hinrichs for PTL of Congress.

Tim has been involved in the Congress project since its inception, and has been 
serving as its chief architect.  He is very active in policy research, code, 
and community, and has a great strategic vision for the project.

- Peter

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Verbosity of Sahara overview image

2014-09-29 Thread Sergey Lukjanov
I agree with Matt -
http://docs.openstack.org/developer/sahara/architecture.html is a
better diagram, but it should be updated too

On Sat, Sep 27, 2014 at 5:51 AM, Matthew Farrellee m...@redhat.com wrote:
 On 09/26/2014 02:27 PM, Sharan Kumar M wrote:

 Hi all,

 I am trying to modify the diagram in
 http://docs.openstack.org/developer/sahara/overview.html so that it
 syncs with the contents. In the diagram, is it nice to mark the
 connections between the openstack components like, Nova with Cinder,
 Nova with Swift, components with Keystone, Nova with Neutron, etc? Or
 would it be too verbose for this diagram and should I be focusing on
 links between Sahara and other components?

 Thanks,
 Sharan Kumar M


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 http://docs.openstack.org/developer/sahara/architecture.html has a better
 diagram imho

 i think the diagram should focus on links between sahara and other
 components only.

 best,


 matt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] PTL Non-Candidacy + Nomination

2014-09-29 Thread Anita Kuno
On 09/29/2014 05:20 PM, Peter Balland wrote:
 It has been an honor to serve as the interim PTL for the Congress project 
 during Juno.  Due to other commitments I have during the Kilo timeframe, I 
 feel I would not be able to commit the time needed by the growing project.  
 In my place, I would like to nominate Tim Hinrichs for PTL of Congress.
 
 Tim has been involved in the Congress project since its inception, and has 
 been serving as its chief architect.  He is very active in policy research, 
 code, and community, and has a great strategic vision for the project.
 
 - Peter
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Note:

Congress is not one of the programs or projects that is currently having
elections administered by the election process governed by the tc
charter, just to ensure readers are not confused. They can choose a
leader as suits the members involved in Congress.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-29 Thread Sergey Lukjanov
+1

On Thu, Sep 25, 2014 at 7:02 AM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:


 On Thursday, September 25, 2014, Thierry Carrez thie...@openstack.org
 wrote:

 Thierry Carrez wrote:
  Kilo Design Summit: Nov 4-7
  Kilo-1 milestone: Dec 11
  Kilo-2 milestone: Jan 29
  Kilo-3 milestone, feature freeze: March 12
  2015.1 (Kilo) release: Apr 23
  L Design Summit: May 18-22

 Following feedback on the mailing-list and at the cross-project meeting,
 there is growing consensus that shifting one week to the right would be
 better. It makes for a short L cycle, but avoids losing 3 weeks between
 Kilo release and L design summit. That gives:

 Kilo Design Summit: Nov 4-7
 Kilo-1 milestone: Dec 18
 Kilo-2 milestone: Feb 5
 Kilo-3 milestone, feature freeze: Mar 19
 2015.1 (Kilo) release: Apr 30
 L Design Summit: May 18-22

 If you prefer a picture, see attached PDF.

 --
 Thierry Carrez (ttx)


 +1

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] setuptools 6.0 ruins the world

2014-09-29 Thread Sean Dague
On 09/29/2014 04:48 PM, Dolph Mathews wrote:
 
 
 On Mon, Sep 29, 2014 at 6:01 AM, Sean Dague s...@dague.net wrote:
 
 On 09/29/2014 05:06 AM, Thierry Carrez wrote:
  Sean Dague wrote:
  Setuptools 6.0 was released Friday night. (Side note: as a service to
  others releasing major software bumps on critical python software on a
  Friday night should be avoided.)
 
  Since it's hard to prevent upstream from releasing over the weekends,
  could we somehow freeze our PyPI mirrors from Friday noon to Monday noon
  infra-team time ?
 
 Honestly, I'm not sure that would be very helpful. There tend to be
 people with one eye open on things over the weekend (like I was this
 weekend), and the fact that it was fixed then meant most people never
 saw the break. If we did a giant requirements release every monday
 morning it would also be *far* more difficult to figure out just based
 on release dates which upstream dependency probably just killed us.
 
 
 What about a continuous delay? Like:
 
 - never mirror a package until it's at least 72 hours old
 - never mirror a package if it's not the latest release
 
 If a broken release is produced upstream, developers might be able to
 detect it before the gate consumes it. Further, if upstream produces a
 fixed release within 72 hours, the second rule would mean that infra
 doesn't consume the broken one, and will wait until the new one is 72
 hours old (and we don't have to blacklist it in requirements.txt just
 for our own sake).

Does our mirroring software support this (we are using upstream
mirroring software)? Would this hold true for stackforge libraries? (so
people would have to wait 3 days after their release to test with us).
How about when fixes land?

Honestly, I know it's painful, but the realities are that most people
don't find the bugs we find. They just don't test enough.

There are ideas about getting out ahead of these kinds of things by
sniff testing upstream source trees before their release; however, that's
a bunch of work that no one has signed up for. If someone wants to sign
up to do that work, I'm happy to point you in the right directions.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] PTL Non-Candidacy + Nomination

2014-09-29 Thread Sean Roberts
+1

~sean

 On Sep 29, 2014, at 2:20 PM, Peter Balland pball...@vmware.com wrote:
 
 It has been an honor to serve as the interim PTL for the Congress project 
 during Juno.  Due to other commitments I have during the Kilo timeframe, I 
 feel I would not be able to commit the time needed by the growing project.  
 In my place, I would like to nominate Tim Hinrichs for PTL of Congress.
 
 Tim has been involved in the Congress project since its inception, and has 
 been serving as its chief architect.  He is very active in policy research, 
 code, and community, and has a great strategic vision for the project.
 
 - Peter
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] PTL Non-Candidacy + Nomination

2014-09-29 Thread Peter Balland
Yes, I should have mentioned, Congress is currently a StackForge project,
but we follow many of the standard timelines and processes.  Sorry if I
caused any confusion.

- Peter

On 9/29/14, 2:28 PM, Anita Kuno ante...@anteaya.info wrote:

On 09/29/2014 05:20 PM, Peter Balland wrote:
 It has been an honor to serve as the interim PTL for the Congress
project during Juno.  Due to other commitments I have during the Kilo
timeframe, I feel I would not be able to commit the time needed by the
growing project.  In my place, I would like to nominate Tim Hinrichs for
PTL of Congress.
 
 Tim has been involved in the Congress project since its inception, and
has been serving as its chief architect.  He is very active in policy
research, code, and community, and has a great strategic vision for the
project.
 
 - Peter
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Note:

Congress is not one of the programs or projects that is currently having
elections administered by the election process governed by the tc
charter, just to ensure readers are not confused. They can choose a
leader as suits the members involved in Congress.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Dmitry Mescheryakov
Hey Jay,

Did you consider Swift's eventual consistency? The general use case for
many OpenStack applications is:
 1. obtain the token from Keystone
 2. perform some operation in OpenStack, providing the token as credentials.

As a result of operation #1 the token will be saved into Swift by
Keystone. But due to eventual consistency it could happen that the
validation of the token in operation #2 will not see the saved token. The
probability depends on the time gap between ops #1 and #2: the smaller the
gap, the higher the probability (less time to sync). It also depends on the
Swift installation size: the bigger the installation, the higher the
probability (a bigger 'space' for inconsistency).

I believe I have seen such an inconsistency in Rackspace Cloud Files a
couple of years ago. We uploaded a file into Cloud Files using an
application, but only saw it in the browser a couple of minutes later.

It is my understanding that Ceph exposing the Swift API is not affected,
though, as it is strongly consistent.
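
If the gap between #1 and #2 does turn out to matter in practice, the
validating side could paper over a short replication lag with a bounded
retry. A minimal sketch (the names here are made up):

import time

def validate_with_retry(lookup, token_id, attempts=3, delay=0.5):
    # lookup(token_id) returns the stored token record, or None if not found.
    for _ in range(attempts):
        record = lookup(token_id)
        if record is not None:
            return record
        time.sleep(delay)  # give replication a chance to catch up
    raise KeyError('token %s not found' % token_id)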

Thanks,

Dmitry


2014-09-29 20:12 GMT+04:00 Jay Pipes jaypi...@gmail.com:

 Hey Stackers,

 So, I had a thought this morning (uh-oh, I know...).

 What if we wrote a token driver in Keystone that uses Swift for backend
 storage?

 I have long been an advocate of the memcache token driver versus the SQL
 driver for performance reasons. However, the problem with the memcache
 token driver is that if you want to run multiple OpenStack regions, you
 could share the identity data in Keystone using replicated database
 technology (mysql galera/PXC, pgpool II, or even standard mysql
 master/slave), but each region needs to have its own memcache service for
 tokens. This means that tokens are not shared across regions, which means
 that users have to log in separately to each region's dashboard.

 I personally considered this a tradeoff worth accepting. But then, today,
 I thought... what about storing tokens in a globally-distributed Swift
 cluster? That would take care of the replication needs automatically, since
 Swift would do the needful. And, add to that, Swift was designed for
 storing lots of small objects, which tokens are...

 Thoughts? I think it would be a cool dogfooding effort if nothing else,
 and give users yet another choice in how they handle multi-region tokens.

 Best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Chmouel Boudjnah
On Mon, Sep 29, 2014 at 11:47 PM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 As a result of operation #1 the token will be saved into Swift by the
 Keystone. But due to eventual consistency it could happen that validation
 of token in operation #2 will not see the saved token. Probability depends
 on time gap between ops #1 and #2: the smaller the gap, the higher is
 probability (less time to sync). Also it depends on Swift installation
 size: the bigger is installation, the higher is probability (bigger 'space'
 for inconsistency).



eventual consistency will only affect container listing  and I don't think
there is a need for container listing in that driver.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

2014-09-29 Thread Doug Hellmann

On Sep 29, 2014, at 4:22 PM, Robert Collins robe...@robertcollins.net wrote:

 On 30 September 2014 03:10, Doug Hellmann d...@doughellmann.com wrote:
 
 On Sep 28, 2014, at 5:00 PM, Robert Collins robe...@robertcollins.net 
 wrote:
 
 As far as I know, the client libraries aren’t being released as alphas. The 
 Oslo libraries are, but they aren’t “public” in the same sense — they’re an 
 internal part of OpenStack, not something a user of a cloud is going to be 
 interacting with. The parts that affect the deployers directly are usually 
 limited to configuration options, and we have a strict deprecation policy 
 for dealing with those, even within the development cycle.
 
 I'm now really confused. oslo.config for instance - its depended on by
 the client libraries. Our strict backwards compatibility guarantees
 for the clients are going to be transitive across our dependencies,
 no?

I would expect anyone using an experimental feature to be working with us on 
it, and to avoid releasing something “experimental” into a “stable” release 
stream. We do that automatically with the server projects because, irrespective 
of the fact that some people deploy from trunk, we create a formal release of 
the server and libraries at the end of a development cycle at which point the 
library’s API is declared stable.

 
 We do reserve the right to change APIs for new features being added to the 
 Oslo libraries during a development cycle. Because we want to support CD, 
 and because of the way we gate test, those changes have to be able to roll 
 out in a backwards-compatible way. (THIS is why the incubator is important 
 for API evolution, by the way, because it mean the API of a module can 
 change and not break any updates in any consuming project or CD 
 environment.) Even if we change the way we gate test the libraries, we would 
 still want to allow for backwards-compatibility, but still only within a 
 development cycle. We do not want to support every API variation of every 
 library for all time. If we have to tweak something, we try to get the 
 consuming projects updated within a cycle so the old variation of the new 
 feature can be removed.
 
 If we are only backwards compatible within a development cycle, we
 can't use the new version of e.g. oslo.db with the stable icehouse API
 servers. That means that distributors can't just distribute oslo.db...
 they have to have very fixed sets of packages lockstepped together,
 which seems unpleasant and fragile. Isn't that what the incubator was
 for: that we release things from it once we're *ready* to do backwards
 compatibility?

We do not want anyone to use development versions of libraries for stable 
branches, right? We have processes in place to allow us to backport changes to 
patch releases of libraries, so that bug and security fixes can be released 
without the features being developed within the cycle. But deployers using 
stable branches should avoid mixing in development versions of libraries.

As far as the incubator goes, we’ve been baking things there for some time. 
Much of the code is now ready to graduate, and some of it will require API 
changes as part of that process. Because we’re seeing more and more people just 
flatly refuse to work with incubated code at all, we are going to make some of 
those changes as we create the libraries. That means application changes during 
library adoption, but it also means fewer syncs, and those seem to be the rage 
trigger. We are trying to minimize API changes, but some are just unavoidable 
(circular dependencies, hidden globals, etc. are things we won’t support). 
Based on our experiences in Juno, I expect we’ll continue to get a few things 
wrong in the API changes, but that they will settle down after 1-2 cycles after 
a library graduates, depending on how well early adoption goes.

In the future, I hope we can continue to use the incubator to avoid a lot of 
breaking churn. It does not seem like a community preference, though, and so 
the trade-off is less stability within each library as we figure out what the 
new APIs should look like. Again, we’re trying to minimize that, but the level 
of stability you seem to be asking for within a development cycle is beyond 
what I think we’re practically capable of providing while still maintaining the 
ability to actually graduate and improve the libraries.

 
 Now, we’re not perfect and sometimes this doesn’t happen exactly as planned, 
 but the intent is there.
 
 So I think we are actually doing all of the things you are asking us to do, 
 with the exception of using the word “alpha” in the release version, and 
 I’ve already given the technical reasons for doing that.
 
 As far as pip goes, you may not know, but tox defaults to pip --pre,
 which means anyone using tox, like us all here, will be pulling the
 alphas down by default: so impacting folk doing e.g. devtest on
 icehouse. So I don't think the hack is working as intended.

I dislike tox more 

Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clay Gerrard
On Mon, Sep 29, 2014 at 2:53 PM, Chmouel Boudjnah chmo...@enovance.com
wrote:



 eventual consistency will only affect container listing  and I don't think
 there is a need for container listing in that driver.


well now hold on...

if you're doing an overwrite in the face of server failures you could still
get a stale read if a server with an old copy comes back into the fray and
you read before replication sorts it out, or read an old version of a key
you deleted.

-Clay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Chmouel Boudjnah


On 30/09/2014 01:05, Clay Gerrard wrote:

eventual consistency will only affect container listing  and I don't think
there is a need for container listing in that driver.


well now hold on...

if you're doing an overwrite in the face of server failures you could still
get a stale read if a server with an old copy comes back into the fray and
you read before replication sorts it out, or read a old version of a key
you deleted
Yeah, sure, thanks for clarifying, but from my understanding all tokens are
new keys/objects; there is no overwriting going on.


Chmouel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clint Byrum
Excerpts from Clay Gerrard's message of 2014-09-29 16:05:14 -0700:
 On Mon, Sep 29, 2014 at 2:53 PM, Chmouel Boudjnah chmo...@enovance.com
 wrote:
 
 
 
  eventual consistency will only affect container listing  and I don't think
  there is a need for container listing in that driver.
 
 
 well now hold on...
 
 if you're doing an overwrite in the face of server failures you could still
 get a stale read if a server with an old copy comes back into the fray and
 you read before replication sorts it out, or read a old version of a key
 you deleted

For tokens, there are really only two answers that matter:

* does ID==X exist?
* has ID==X been revoked?

I think as long as you have a separate container for revocations and
tokens, then resurrections would be fine. The records themselves would
be immutable so edits aren't a problem.

It would, however, be bad to get a 404 for something that is otherwise
present.. as that will result in an erroneous failure for the client.
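
A minimal sketch of what those two lookups could look like against Swift
(the container and object names here are invented, and this ignores expiry,
auth and the rest of the Keystone driver interface):

from swiftclient.exceptions import ClientException

TOKENS = 'keystone_tokens'
REVOCATIONS = 'keystone_revocations'

def _exists(conn, container, token_id):
    # conn is a swiftclient.client.Connection
    try:
        conn.head_object(container, token_id)
        return True
    except ClientException as exc:
        if exc.http_status == 404:
            return False
        raise

def is_valid(conn, token_id):
    # A token counts as valid if it was issued and has not been revoked.
    return (_exists(conn, TOKENS, token_id)
            and not _exists(conn, REVOCATIONS, token_id))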

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-29 Thread Devananda van der Veen
On Thu, Sep 25, 2014 at 7:13 PM, Tom Fifield t...@openstack.org wrote:
 On 26/09/14 03:35, Morgan Fainberg wrote:
 -Original Message-
 From: John Griffith john.griffi...@gmail.com
 Reply: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: September 25, 2014 at 12:27:52
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [Ironic] Get rid of the sample config file

 On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni 
 devdatta.kulka...@rackspace.com wrote:

 Hi,

 We have faced this situation in Solum several times. And in fact this was
 one of the topics
 that we discussed in our last irc meeting.

 We landed on separating the sample check from pep8 gate into a non-voting
 gate.
 One reason to keep the sample check is that when, say, a feature in your
 code fails due to some upstream change for which you don't have coverage
 in your functional tests, a non-voting but failing sample check gate can
 be used as a starting point for the failure investigation.

 More details about the discussion can be found here:

 http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt

 - Devdatta

 --
 *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
 *Sent:* Thursday, September 25, 2014 12:42 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file

 Hi!

 On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes 
 lucasago...@gmail.com wrote:

 Hi,

 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
 (keystoneclient in that case) and this breaks the Ironic gate without
 us doing anything.

 So, what do you guys think about removing the test that compares the
 configuration files and making it no longer gate [2]?

 We already have a tox command to generate the sample configuration
 file [3], so folks that need it can generate it locally.

 Does anyone disagree?


 +1 to this, but I think we should document how to generate the sample
 config
 in our documentation (install guide?).

 -Dave
 --
 David Shrewsbury (Shrews)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I tried this in Cinder a while back and was actually rather surprised by
 the overwhelming push-back I received from the Operator community, and
 whether I agreed with all of it or not, the last thing I want to do is
 ignore the Operators that are actually standing up and maintaining what
 we're building.

 Really, at the end of the day this isn't that big of a deal. It's
 relatively easy to update the config in most of the projects with
 tox -egenconfig; see my posting back in May [1]. Given how rarely this
 should need to happen, I'm not sure why we can't have enough contributors
 who are proactive enough to fix it up when they see it fall out of date.

 John

 [1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html

 +1 to what John just said.

 I know in Keystone we update the sample config (usually) whenever we notice
 it is out of date. Often we ask developers making config changes to run `tox
 -esample_config` and re-upload their patch. If someone misses it, we (the
 cores) will do a patch that just updates the sample config along the way.
 Ideally we should have a check job that just reports that the config is out
 of date (instead of blocking the review).

 The issue is the premise that there are 2 options:

 1) Gate on the sample config being current
 2) Have no sample config in the tree.

 The missing third option, the proactive approach (plus having something
 convenient like `tox -egenconfig` or `tox -eupdate_sample_config` to update
 the sample config), is the approach that covers both sides nicely. The
 operators/deployers have the sample config in tree, and the developers
 don't get patches rejected in the gate because the sample config doesn't
 match new options in an external library.

 I know a lot of operators and deployers appreciate the sample config being 
 in-tree.

 Just confirming this is definitely the case.


Thanks, all. I agree with the points made here and really appreciate
the feedback from other projects and operators. I've proposed a change
to Ironic to
- remove check_uptodate from our pep8 test env
- update the genconfig target to make it even easier to build the
sample config file

https://review.openstack.org/124919

Cheers,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread Christopher Yeoh
On Mon, 29 Sep 2014 13:32:57 -0700
Joe Gordon joe.gord...@gmail.com wrote:

 On Mon, Sep 29, 2014 at 5:23 AM, Gary Kotton gkot...@vmware.com
 wrote:
 
  Hi,
  Is the process documented anywhere? That is, if say for example I
  had a spec approved in J and its code did not land, how do we go
  about kicking the tires for K on that spec.
 
 
 Specs will need be re-submitted once we open up the specs repo for
 Kilo. The Kilo template will be changing a little bit, so specs will
 need a little bit of reworking. But I expect the process to approve
 previously approved specs to be quicker

I am biased, given I have a spec approved for Juno which we didn't quite
fully merge and which we want to finish off early in Kilo (most of the
patches are very close already to being ready to merge), but I think we
should give priority to reviewing specs already approved in Juno and
perhaps only require one +2 for re-approval. 

Otherwise we'll end up wasting weeks of development time just when
there is lots of review bandwidth available and the CI system is
lightly loaded. Honestly, ideally I'd like to just start merging as
soon as Kilo opens. Nothing has changed between Juno FF and Kilo opening
so there's really no reason that an approved Juno spec should not be
reapproved.

Chris

 
 
  Thanks
  Gary
 
  On 9/29/14, 1:07 PM, John Garbutt j...@johngarbutt.com wrote:
 
  On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com
  wrote:
   On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt
   j...@johngarbutt.com
  wrote:
   On 25 September 2014 14:10, Daniel P. Berrange
   berra...@redhat.com wrote:
The proposal is to keep kilo-1, kilo-2 much the same as juno.
  Except,
we work harder on getting people to buy into the priorities
that are set, and actively provoke more debate on their
correctness, and we reduce the bar for what needs a
blueprint.
   
We can't have 50 high priority blueprints, it doesn't mean
anything, right? We need to trim the list down to a
manageable number, based
  on
the agreed project priorities. Thats all I mean by slots /
runway at this point.
   
I would suggest we don't try to rank high/medium/low as that
is too coarse, but rather just an ordered priority list. Then
you would not be in the situation of having 50 high
blueprints. We would instead naturally just start at the
highest priority and work downwards.
  
   OK. I guess I was fixating about fitting things into launchpad.
  
   I guess having both might be what happens.
  
 The runways
 idea is just going to make me less efficient at reviewing.
 So I'm very much against it as an idea.
   
This proposal is different to the runways idea, although it
  certainly
borrows aspects of it. I just don't understand how this
proposal has all the same issues?
   
   
The key to the kilo-3 proposal, is about getting better at
saying
  no,
this blueprint isn't very likely to make kilo.
   
If we focus on a smaller number of blueprints to review, we
should
  be
able to get a greater percentage of those fully completed.
   
I am just using slots/runway-like ideas to help pick the high
  priority
blueprints we should concentrate on, during that final
milestone. Rather than keeping the distraction of 15 or so
low priority blueprints, with those poor submitters jamming
up the check queue,
  and
constantly rebasing, and having to deal with the odd stray
review comment they might get lucky enough to get.
   
Maybe you think this bit is overkill, and thats fine. But I
still think we need a way to stop wasting so much of peoples
time on
  things
that will not make it.
   
The high priority blueprints are going to end up being mostly
the big scope changes which take alot of time to review 
probably go through many iterations. The low priority
blueprints are going to end up
  being
the small things that don't consume significant resource to
review
  and
are easy to deal with in the time we're waiting for the big
items to go through rebases or whatever. So what I don't like
about the
  runways
slots idea is that removes the ability to be agile and take
the initiative
to review  approve the low priority stuff that would
otherwise never make it through.
  
   The idea is more around concentrating on the *same* list of
   things.
  
   Certainly we need to avoid the priority inversion of
   concentrating only on the big things.
  
   Its also why I suggested that for kilo-1 and kilo-2, we allow
   any blueprint to merge, and only restrict it to a specific list
   in kilo-3, the idea being to maximise the number of things that
   get completed, rather than merging some half blueprints, but
   not getting to the good bits.
  
  
   Do we have to decide this now, or can we see how project
   priorities go
  and
   reevaluate half way through Kilo-2?
  
  What we need to decide is 

Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clay Gerrard
On Mon, Sep 29, 2014 at 4:15 PM, Clint Byrum cl...@fewbar.com wrote:


 It would, however, be bad to get a 404 for something that is otherwise
 present.. as that will result in an erroneous failure for the client.


That almost never happens, but is possible if all the primaries are down*;
a system that leans harder on the C would, in a similar failure, be expected
to treat a similarly impossible question as a failure/error.

* It's actually if all the same nodes that answered the previous write are
down; there's some trixies with error-limiting and stable handoffs that
help with subsequent read-your-writes behavior that actually make it fairly
difficult to write data that you can't then read back out unless you
basically track where all of the writes go and then shut down *exactly*
those nodes and make a read before replication beats you to it.  Just
shutting down all three primary locations will just write and then read
from the same handoff locations, even if the primaries subsequently come
back online (unless the primaries have an old copy - but it sounds like
that's not going on in your application).

Again, all of this has to do with under failure edge cases.  A healthy
swift system; or even one that's only moderately degraded won't really see
much of this.

Depending on the deployment, latencies may be a concern if you're using this
as a cache - have you looked at Swauth [1] already?

-Clay

1. https://github.com/gholt/swauth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-29 Thread Christopher Yeoh
On Mon, 29 Sep 2014 18:03:20 +0200
Julien Danjou jul...@danjou.info wrote:
 
 It seems that Python fixed that issue with 2 modules released on PyPI:
 
   https://pypi.python.org/pypi/defusedxml
   https://pypi.python.org/pypi/defusedexpat
 
 I'm no XML expert, and I've only a shallow understanding of the issue,
 but I wonder if we should put some efforts to drop xmlutils and our
 custom XML fixes to used instead these 2 modules.

Nova XML API support is marked as deprecated in Juno. So hopefully
we'll be able to just drop XML and associated helper modules within a
couple of cycles.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] when will the spec repo for kilo be opened up for nova

2014-09-29 Thread John Zhang
Hi,

We wrote a spec (for fast booting a large number of VMs) but failed to
catch up with Juno, so I was wondering when the spec repo for kilo will be
opened up for nova?

Thanks!
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] when will the spec repo for kilo be opened up for nova

2014-09-29 Thread Michael Still
We agreed a few nova team meetings ago that this would be done in
early October, so I expect it will happen sometime this week.

Cheers,
Michael

On Tue, Sep 30, 2014 at 10:09 AM, John Zhang
zhang.john.vmthun...@gmail.com wrote:
 Hi,

 We wrote a spec (for fast booting a large number of VMs) but failed to catch
 up with Juno, so I was wondering when the spec repo for kilo will be opened
 up for nova?

 Thanks!
 John


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-29 Thread GHANSHYAM MANN
It's present as you mentioned. You can look at screen-n-cpu.*.log. All
running services' log files will be at /opt/stack/logs/screen/, which you
can analyze to find where the issue is.

Such a query can be asked on IRC (https://wiki.openstack.org/wiki/IRC) for a
quick reply instead of waiting on mail.

For further discussion on this mail, please change the subject now :).

On Mon, Sep 29, 2014 at 7:51 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 How to get nova-compute logs in juno devstack?
 Below are nova services:
 vedams@vedams-compute-fc:/opt/stack/tempest$ ps -aef | grep nova
 vedams   15065 14812  0 10:56 pts/10   00:00:52 /usr/bin/python
 /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
 vedams   15077 14811  0 10:56 pts/900:02:06 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15086 14818  0 10:56 pts/12   00:00:09 /usr/bin/python
 /usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
 vedams   15095 14836  0 10:56 pts/17   00:00:09 /usr/bin/python
 /usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
 vedams   15096 14821  0 10:56 pts/13   00:00:09 /usr/bin/python
 /usr/local/bin/nova-network --config-file /etc/nova/nova.conf
 vedams   15100 14844  0 10:56 pts/18   00:00:00 /usr/bin/python
 /usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
 vedams   15101 14826  0 10:56 pts/15   00:00:05 /usr/bin/python
 /usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
 /opt/stack/noVNC
 vedams   15103 14814  0 10:56 pts/11   00:02:02 /usr/bin/python
 /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
 vedams   15104 14823  0 10:56 pts/14   00:00:11 /usr/bin/python
 /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
 vedams   15117 14831  0 10:56 pts/16   00:00:00 /usr/bin/python
 /usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
 vedams   15195 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
 /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
 vedams   15196 15103  0 10:56 pts/11   00:00:25 /usr/bin/python
 /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
 vedams   15197 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
 /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
 vedams   15198 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
 /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
 vedams   15208 15077  0 10:56 pts/900:00:00 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15209 15077  0 10:56 pts/900:00:00 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15238 15077  0 10:56 pts/900:00:03 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15239 15077  0 10:56 pts/900:00:01 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15240 15077  0 10:56 pts/900:00:03 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15241 15077  0 10:56 pts/900:00:03 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15248 15077  0 10:56 pts/900:00:00 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   15249 15077  0 10:56 pts/900:00:00 /usr/bin/python
 /usr/local/bin/nova-api
 vedams   21850 14712  0 16:16 pts/000:00:00 grep --color=auto nova


 Below are nova logs files:
 vedams@vedams-compute-fc:/opt/stack/tempest$ ls
 /opt/stack/logs/screen/screen-n-
 screen-n-api.2014-09-28-101810.log
 screen-n-cond.log screen-n-net.2014-09-28-101810.log
 screen-n-obj.log
 screen-n-api.log
 screen-n-cpu.2014-09-28-101810.logscreen-n-net.log
 screen-n-sch.2014-09-28-101810.log
 screen-n-cauth.2014-09-28-101810.log
 screen-n-cpu.log  screen-n-novnc.2014-09-28-101810.log
 screen-n-sch.log
 screen-n-cauth.log
 screen-n-crt.2014-09-28-101810.logscreen-n-novnc.log
 screen-n-xvnc.2014-09-28-101810.log
 screen-n-cond.2014-09-28-101810.log
 screen-n-crt.log  screen-n-obj.2014-09-28-101810.log
 screen-n-xvnc.log


 Below are the nova screen sessions:
 6-$(L) n-api  7$(L) n-cpu  8$(L) n-cond  9$(L) n-crt  10$(L) n-net  11$(L)
 n-sch  12$(L) n-novnc  13$(L) n-xvnc  14$(L) n-cauth  15$(L) n-obj




 Regards
 Nikesh


 On Tue, Sep 23, 2014 at 3:10 PM, Nikesh Kumar Mahalka 
 nikeshmaha...@vedams.com wrote:

 Hi,
 I am able to do all volume operations through the dashboard and CLI commands.
 But when I am running tempest tests, some tests are failing.
 For contributing a Cinder volume driver for my client, do all
 tempest tests need to pass?

 Ex:
 1)
 ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1or 2 tests
 are getting failed

 But when i am running individual tests in test_volumes_snapshots,all
 tests are getting passed.

 2)
 ./run_tempest.sh
 tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
 This is also getting failed.



 Regards
 Nikesh

 On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  

Re: [openstack-dev] [infra] Nominating Sean Dague for project-config-core

2014-09-29 Thread Clark Boylan
On September 26, 2014 8:35:18 AM PDT, cor...@inaugust.com wrote:
I'm pleased to nominate Sean Dague to the project-config core team.

The project-config repo is a constituent of the Infrastructure Program
and has a core team structured to be a superset of infra-core with
additional reviewers who specialize in the area.

For some time, Sean has been the person we consult to make sure that
changes to the CI system are testing what we think we should be testing
(and just as importantly, not testing what we think we should not be
testing).  His knowledge of devstack, devstack-gate, tempest, and nova
is immensely helpful in making sense of what we're actually trying to
accomplish.

Please respond with support or concerns and if the consensus is in
favor, we will add him next week.

Thanks,

Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1

Clark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Andreas Jaeger for project-config-core

2014-09-29 Thread Clark Boylan
On September 26, 2014 8:35:02 AM PDT, cor...@inaugust.com wrote:
I'm pleased to nominate Andreas Jaeger to the project-config core team.

The project-config repo is a constituent of the Infrastructure Program
and has a core team structured to be a superset of infra-core with
additional reviewers who specialize in the area.

Andreas has been doing an incredible amount of work simplifying the
Jenkins and Zuul configuration for some time.  He's also been making it
more complicated where it needs to be -- making the documentation jobs
in particular a model of efficient re-use that is far easier to
understand than what he replaced.  In short, he's an expert in Jenkins
and Zuul configuration and both his patches and reviews are immensely
helpful.

Please respond with support or concerns and if the consensus is in
favor, we will add him next week.

Thanks,

Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1

Clark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Anita Kuno for project-config-core

2014-09-29 Thread Clark Boylan
On September 26, 2014 8:34:48 AM PDT, cor...@inaugust.com wrote:
I'm pleased to nominate Anita Kuno to the project-config core team.

The project-config repo is a constituent of the Infrastructure Program
and has a core team structured to be a superset of infra-core with
additional reviewers who specialize in the area.

Anita has been reviewing new projects in the config repo for some time
and I have been treating her approval as required for a while.  She has
an excellent grasp of the requirements and process for creating new
projects and is very helpful to the people proposing them (who are
often
creating their first commit to any OpenStack repository).

She also did most of the work in actually creating the project-config
repo from the config repo.

Please respond with support or concerns and if the consensus is in
favor, we will add her next week.

Thanks,

Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1

Clark


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread Joe Gordon
On Mon, Sep 29, 2014 at 4:46 PM, Christopher Yeoh cbky...@gmail.com wrote:

 On Mon, 29 Sep 2014 13:32:57 -0700
 Joe Gordon joe.gord...@gmail.com wrote:

  On Mon, Sep 29, 2014 at 5:23 AM, Gary Kotton gkot...@vmware.com
  wrote:
 
   Hi,
   Is the process documented anywhere? That is, if say for example I
   had a spec approved in J and its code did not land, how do we go
   about kicking the tires for K on that spec.
  
 
  Specs will need be re-submitted once we open up the specs repo for
  Kilo. The Kilo template will be changing a little bit, so specs will
  need a little bit of reworking. But I expect the process to approve
  previously approved specs to be quicker

 Am biased given I have a spec approved for Juno which we didn't quite
 fully merge which we want to finish off early in Kilo (most of the
 patches are very close already to being ready to merge), but I think we
 should give priority to reviewing specs already approved in Juno and
 perhaps only require one +2 for re-approval.


I like the idea of prioritizing specs that were previously approved and
only requiring a single +2 for re-approval if there are no major changes to
them.



 Otherwise we'll end up wasting weeks of development time just when
 there is lots of review bandwidth available and the CI system is
 lightly loaded. Honestly, ideally I'd like to just start merging as
 soon as Kilo opens. Nothing has changed between Juno FF and Kilo opening
 so there's really no reason that an approved Juno spec should not be
 reapproved.

 Chris

 
 
   Thanks
   Gary
  
   On 9/29/14, 1:07 PM, John Garbutt j...@johngarbutt.com wrote:
  
   On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com
   wrote:
On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt
j...@johngarbutt.com
   wrote:
On 25 September 2014 14:10, Daniel P. Berrange
berra...@redhat.com wrote:
 The proposal is to keep kilo-1, kilo-2 much the same as juno.
   Except,
 we work harder on getting people to buy into the priorities
 that are set, and actively provoke more debate on their
 correctness, and we reduce the bar for what needs a
 blueprint.

 We can't have 50 high priority blueprints, it doesn't mean
 anything, right? We need to trim the list down to a
 manageable number, based
   on
 the agreed project priorities. Thats all I mean by slots /
 runway at this point.

 I would suggest we don't try to rank high/medium/low as that
 is too coarse, but rather just an ordered priority list. Then
 you would not be in the situation of having 50 high
 blueprints. We would instead naturally just start at the
 highest priority and work downwards.
   
OK. I guess I was fixating about fitting things into launchpad.
   
I guess having both might be what happens.
   
  The runways
  idea is just going to make me less efficient at reviewing.
  So I'm very much against it as an idea.

 This proposal is different to the runways idea, although it
   certainly
 borrows aspects of it. I just don't understand how this
 proposal has all the same issues?


 The key to the kilo-3 proposal, is about getting better at
 saying
   no,
 this blueprint isn't very likely to make kilo.

 If we focus on a smaller number of blueprints to review, we
 should
   be
 able to get a greater percentage of those fully completed.

 I am just using slots/runway-like ideas to help pick the high
   priority
 blueprints we should concentrate on, during that final
 milestone. Rather than keeping the distraction of 15 or so
 low priority blueprints, with those poor submitters jamming
 up the check queue,
   and
 constantly rebasing, and having to deal with the odd stray
 review comment they might get lucky enough to get.

 Maybe you think this bit is overkill, and thats fine. But I
 still think we need a way to stop wasting so much of peoples
 time on
   things
 that will not make it.

 The high priority blueprints are going to end up being mostly
 the big scope changes which take a lot of time to review &
 probably go through many iterations. The low priority
 blueprints are going to end up
   being
 the small things that don't consume significant resource to
 review
   and
 are easy to deal with in the time we're waiting for the big
 items to go through rebases or whatever. So what I don't like
 about the
   runways
 slots idea is that removes the ability to be agile and take
 the initiative
 to review & approve the low priority stuff that would
 otherwise never make it through.
   
The idea is more around concentrating on the *same* list of
things.
   
Certainly we need to avoid the priority inversion of
concentrating only on the big things.
   
Its also why I suggested that for kilo-1 and kilo-2, we allow
any blueprint to merge, and only 

[openstack-dev] [TripleO] Summit topics

2014-09-29 Thread James Polley
The Summit Planning wiki page[1] links to an etherpad[2] for planning
topics for us to discuss at summit.

So far there's only one topic listed - which makes me happy, as it's a
topic I proposed (and will email about again next week after I'm back from
leave). Are there other things we want to talk about in Paris? Is there
somewhere else we're using for planning?

I'm hoping to see some other topics listed on the etherpad over the next
few days :)


[1] https://wiki.openstack.org/wiki/Summit/Planning
[2] https://etherpad.openstack.org/p/kilo-tripleo-summit-topics
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-29 Thread Michael Still
It seems like a no-brainer to me to prioritise people who have been patient
with us.

How about we tag these re-proposals with a commit message tag people can
search for when they review? Perhaps Previously-approved: Juno?
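
Something like this at the end of the commit message, purely as an example
(the blueprint name here is made up):

    Re-propose foo-bar spec for Kilo

    Previously-approved: Juno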

Michael

On Tue, Sep 30, 2014 at 11:06 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Sep 29, 2014 at 4:46 PM, Christopher Yeoh cbky...@gmail.com
 wrote:

 On Mon, 29 Sep 2014 13:32:57 -0700
 Joe Gordon joe.gord...@gmail.com wrote:

  On Mon, Sep 29, 2014 at 5:23 AM, Gary Kotton gkot...@vmware.com
  wrote:
 
   Hi,
   Is the process documented anywhere? That is, if say for example I
   had a spec approved in J and its code did not land, how do we go
   about kicking the tires for K on that spec.
  
 
  Specs will need be re-submitted once we open up the specs repo for
  Kilo. The Kilo template will be changing a little bit, so specs will
  need a little bit of reworking. But I expect the process to approve
  previously approved specs to be quicker

 Am biased given I have a spec approved for Juno which we didn't quite
 fully merge which we want to finish off early in Kilo (most of the
 patches are very close already to being ready to merge), but I think we
 should give priority to reviewing specs already approved in Juno and
 perhaps only require one +2 for re-approval.


 I like the idea of prioritizing specs that were previously approved and
 only requiring a single +2 for re-approval if there are no major changes to
 them.



 Otherwise we'll end up wasting weeks of development time just when
 there is lots of review bandwidth available and the CI system is
 lightly loaded. Honestly, ideally I'd like to just start merging as
 soon as Kilo opens. Nothing has changed between Juno FF and Kilo opening
 so there's really no reason that an approved Juno spec should not be
 reapproved.

 Chris

 
 
   Thanks
   Gary
  
   On 9/29/14, 1:07 PM, John Garbutt j...@johngarbutt.com wrote:
  
   On 27 September 2014 00:31, Joe Gordon joe.gord...@gmail.com
   wrote:
On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt
j...@johngarbutt.com
   wrote:
On 25 September 2014 14:10, Daniel P. Berrange
berra...@redhat.com wrote:
 The proposal is to keep kilo-1, kilo-2 much the same as juno.
   Except,
 we work harder on getting people to buy into the priorities
 that are set, and actively provoke more debate on their
 correctness, and we reduce the bar for what needs a
 blueprint.

 We can't have 50 high priority blueprints, it doesn't mean
 anything, right? We need to trim the list down to a
 manageable number, based
   on
 the agreed project priorities. Thats all I mean by slots /
 runway at this point.

 I would suggest we don't try to rank high/medium/low as that
 is too coarse, but rather just an ordered priority list. Then
 you would not be in the situation of having 50 high
 blueprints. We would instead naturally just start at the
 highest priority and work downwards.
   
OK. I guess I was fixating about fitting things into launchpad.
   
I guess having both might be what happens.
   
  The runways
  idea is just going to make me less efficient at reviewing.
  So I'm very much against it as an idea.

 This proposal is different to the runways idea, although it
   certainly
 borrows aspects of it. I just don't understand how this
 proposal has all the same issues?


 The key to the kilo-3 proposal, is about getting better at
 saying
   no,
 this blueprint isn't very likely to make kilo.

 If we focus on a smaller number of blueprints to review, we
 should
   be
 able to get a greater percentage of those fully completed.

 I am just using slots/runway-like ideas to help pick the high
   priority
 blueprints we should concentrate on, during that final
 milestone. Rather than keeping the distraction of 15 or so
 low priority blueprints, with those poor submitters jamming
 up the check queue,
   and
 constantly rebasing, and having to deal with the odd stray
 review comment they might get lucky enough to get.

 Maybe you think this bit is overkill, and thats fine. But I
 still think we need a way to stop wasting so much of peoples
 time on
   things
 that will not make it.

 The high priority blueprints are going to end up being mostly
 the big scope changes which take a lot of time to review &
 probably go through many iterations. The low priority
 blueprints are going to end up
   being
 the small things that don't consume significant resource to
 review
   and
 are easy to deal with in the time we're waiting for the big
 items to go through rebases or whatever. So what I don't like
 about the
   runways
 slots idea is that removes the ability to be agile and take
 the initiative
 to review & approve the low priority stuff that would
 

[openstack-dev] django-pyscss failing with Django 1.7

2014-09-29 Thread Thomas Goirand
Since the last commit before the release of version 1.0.3,
django-pyscss fails in Sid:

https://github.com/fusionbox/django-pyscss/commit/187a7a72bf72370c739f3675bef84532e524eaf1

The issue is that storage.prefix doesn't seem to exist anymore in Django
1.7.

Does anyone have an idea how to fix this? Would it be ok to just revert
that commit in my Debian package?
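
In case it helps frame the question, the kind of workaround I had in mind
is a small compatibility shim along these lines (completely untested, the
helper name is made up, and prefix is simply the attribute that
django-pyscss reads and that Django 1.7 no longer seems to provide):

    # Untested sketch of a possible shim; older staticfiles storages
    # exposed a ``prefix`` attribute directly, Django 1.7 apparently
    # no longer does.
    def get_storage_prefix(storage):
        # Fall back to an empty prefix when the attribute is gone.
        return getattr(storage, 'prefix', '') or ''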

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

2014-09-29 Thread Richard Jones
On 30 September 2014 08:14, Doug Hellmann d...@doughellmann.com wrote:


 On Sep 29, 2014, at 4:22 PM, Robert Collins robe...@robertcollins.net
 wrote:

  As far as pip goes, you may not know, but tox defaults to pip --pre,
  which means anyone using tox, like us all here, will be pulling the
  alphas down by default: so impacting folk doing e.g. devtest on
  icehouse. So I don't think the hack is working as intended.

 I dislike tox more every day. Is this why we have the installation command
 override set in tox.ini?


Yes. This issue in the tox tracker has some background - feel free to
upvote it!

https://bitbucket.org/hpk42/tox/issue/193/remove-the-pre-pip-option-by-default


Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Summit Topics

2014-09-29 Thread Morgan Fainberg
The summit planning etherpad[1] is up and available for discussing Keystone
summit sessions. Other etherpad links (such as cross-project topics) can be
found on the Summit Planning wiki page[2].

Please do not hesitate to jump in and start talking about the Identity
sessions.

For those who contributed to the other etherpad, I am in the process of
moving topics over (if it looks a little sparse comparatively), please
don't let that stop you from discussing or adding a topic that is not yet
on the new etherpad.

[1] https://etherpad.openstack.org/p/kilo-keystone-summit-topics
[2] https://wiki.openstack.org/wiki/Summit/Planning
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Morgan Fainberg
-Original Message-
From: Clint Byrum cl...@fewbar.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 29, 2014 at 16:17:39
To: openstack-dev openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [keystone][swift] Has anybody considered storing 
tokens in Swift?

 Excerpts from Clay Gerrard's message of 2014-09-29 16:05:14 -0700:
  On Mon, Sep 29, 2014 at 2:53 PM, Chmouel Boudjnah  
  wrote:
 
  
  
   eventual consistency will only affect container listing and I don't think
   there is a need for container listing in that driver.
  
  
  well now hold on...
 
  if you're doing an overwrite in the face of server failures you could still
  get a stale read if a server with an old copy comes back into the fray and
  you read before replication sorts it out, or read an old version of a key
  you deleted
  
 For tokens, there are really only two answers that matter:
  
 * does ID==X exist?
 * has ID==X been revoked?

 I think as long as you have a separate container for revocations and
 tokens, then resurrections would be fine. The records themselves would
 be immutable so edits aren't a problem.

This isn’t exactly true. In the case of certain actions
(user/project/domain/role delete, user password change, and a number of other
ones) you need to be able to find all tokens associated with that entity so they
can be revoked (if you’re not using revocation events). This likely means that,
for this type of backend, revocation events are a requirement (they eliminate the
need to list out each token id or to provide a way to look up a token based upon
the entity being acted upon), and the ‘enumerated’ token indexes should not be
supported.
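
To make the shape of that concrete, a driver along these lines is roughly
what I mean (container and method names are invented here, and this is not
the actual Keystone token-persistence interface):

    import json

    class SwiftTokenBackend(object):
        # Illustrative only -- not the real driver interface.
        TOKENS = 'tokens'            # immutable, write-once token records
        REVOCATIONS = 'revocations'  # revocation *events*, not per-token deletes

        def __init__(self, swift_conn):
            self.swift = swift_conn  # assumed python-swiftclient Connection

        def create_token(self, token_id, data):
            # Records are write-once, so eventual consistency on
            # overwrites is not a concern.
            self.swift.put_object(self.TOKENS, token_id, json.dumps(data))

        def token_exists(self, token_id):
            try:
                self.swift.head_object(self.TOKENS, token_id)
                return True
            except Exception:  # assumed not-found error from the client
                return False

        def revoke_by_event(self, event_id, event):
            # Record an event such as "all tokens for user X issued before
            # time T" instead of enumerating and deleting individual
            # tokens, which would need listings we cannot rely on.
            self.swift.put_object(self.REVOCATIONS, event_id,
                                  json.dumps(event))

Validation would then be an existence check against the tokens container
plus a scan of the (much smaller) set of revocation events.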

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Adam Young

On 09/29/2014 12:12 PM, Jay Pipes wrote:

Hey Stackers,

So, I had a thought this morning (uh-oh, I know...).

What if we wrote a token driver in Keystone that uses Swift for 
backend storage?


I have long been an advocate of the memcache token driver versus the 
SQL driver for performance reasons. However, the problem with the 
memcache token driver is that if you want to run multiple OpenStack 
regions, you could share the identity data in Keystone using 
replicated database technology (mysql galera/PXC, pgpool II, or even 
standard mysql master/slave), but each region needs to have its own 
memcache service for tokens. This means that tokens are not shared 
across regions, which means that users have to log in separately to 
each region's dashboard.


I personally considered this a tradeoff worth accepting. But then, 
today, I thought... what about storing tokens in a 
globally-distributed Swift cluster? That would take care of the 
replication needs automatically, since Swift would do the needful. 
And, add to that, Swift was designed for storing lots of small 
objects, which tokens are...


Thoughts? I think it would be a cool dogfooding effort if nothing 
else, and give users yet another choice in how they handle 
multi-region tokens.


Um...I hate all persisted tokens.  This takes them to a new level of 
badness.


Do we really need this?





Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clint Byrum
Excerpts from Adam Young's message of 2014-09-29 20:22:35 -0700:
 On 09/29/2014 12:12 PM, Jay Pipes wrote:
  Hey Stackers,
 
  So, I had a thought this morning (uh-oh, I know...).
 
  What if we wrote a token driver in Keystone that uses Swift for 
  backend storage?
 
  I have long been an advocate of the memcache token driver versus the 
  SQL driver for performance reasons. However, the problem with the 
  memcache token driver is that if you want to run multiple OpenStack 
  regions, you could share the identity data in Keystone using 
  replicated database technology (mysql galera/PXC, pgpool II, or even 
  standard mysql master/slave), but each region needs to have its own 
  memcache service for tokens. This means that tokens are not shared 
  across regions, which means that users have to log in separately to 
  each region's dashboard.
 
  I personally considered this a tradeoff worth accepting. But then, 
  today, I thought... what about storing tokens in a 
  globally-distributed Swift cluster? That would take care of the 
  replication needs automatically, since Swift would do the needful. 
  And, add to that, Swift was designed for storing lots of small 
  objects, which tokens are...
 
  Thoughts? I think it would be a cool dogfooding effort if nothing 
  else, and give users yet another choice in how they handle 
  multi-region tokens.
 
 Um...I hate all persisted tokens.  This takes them to a new level of 
 badness.
 
 Do we really need this?
 

FWIW I'm 100% with you Adam. I would like to see a world without a token
storage problem in Keystone.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-09-29 Thread Akilesh K
Hi,
I saw the table of contents. I have posted documents on configuring the
OpenStack neutron-openvswitch-plugin, a comparison between networking devices
and their Linux software components, and also the working principles
of the neutron-ovs-plugin at layer 2 and the neutron-l3-agent at layer 3. My
intention with the posts was to aid beginners in debugging neutron issues.

The problem is that I am not sure where exactly these posts fit in the
table of contents. If anyone has suggestions, please reply to me. Below are
the links to the blog posts:

1. Comparison between networking devices and linux software components
https://fosskb.wordpress.com/wp-admin/post.php?post=2781action=edit
2. Openstack ovs plugin configuration for single/multi machine setup
https://fosskb.wordpress.com/wp-admin/post.php?post=2605action=edit
3. Neutron ovs plugin layer 2 connectivity
https://fosskb.wordpress.com/wp-admin/post.php?post=2755action=edit
4. Layer 3 connectivity using neutron-l3-agent
https://fosskb.wordpress.com/wp-admin/post.php?post=2910action=edit

I would be glad to include sub sections in any of these posts if that helps.

Thank you,
Ageeleshwar K

On Tue, Sep 30, 2014 at 2:36 AM, Nicholas Chase nch...@mirantis.com wrote:

  As you know, we're always looking for ways for people to be able to
 contribute to Docs, but we do understand that there's a certain amount of
 pain involved in dealing with Docbook.  So to try and make this process
 easier, we're going to try an experiment.

 What we've put together is a system where you can update a wiki with links
 to content in whatever form you've got it -- gist on github, wiki page,
 blog post, whatever -- and we have a dedicated resource that will turn it
 into actual documentation, in Docbook. If you want to be added as a
 co-author on the patch, make sure to provide us the email address you used
 to become a Foundation member.

 Because we know that the networking documentation needs particular
 attention, we're starting there.  We have a Networking Guide, from which we
 will ultimately pull information to improve the networking section of the
 admin guide.  The preliminary Table of Contents is here:
 https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
 instructions for contributing are as follows:


1. Pick an existing topic or create a new topic. For new topics, we're
primarily interested in deployment scenarios.
2. Develop content (text and/or diagrams) in a format that supports at
least basic markup (e.g., titles, paragraphs, lists, etc.).
3. Provide a link to the content (e.g., gist on github.com, wiki page,
blog post, etc.) under the associated topic.
4. Send e-mail to reviewers network...@openstacknow.com.
5. A writer turns the content into an actual patch, with tracking bug,
and docs reviewers (and the original author, we would hope) make sure it
gets reviewed and merged.


 Please let us know if you have any questions/comments.  Thanks!

   Nick
 --
 Nick Chase
 1-650-567-5640
 Technical Marketing Manager, Mirantis
 Editor, OpenStack:Now

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Joshua Harlow
+1 Let’s not continue to expand the usage of persisted tokens :-/

We should be trying to move away from such types of persistence and their 
associated complexity IMHO. Most major websites don’t need tokens that 
are saved across regions in their internal backend databases just so people 
can use a REST webservice (or a website...), so I would hope that we don’t need 
to either (if it works for major websites, why doesn’t it work for us?).

My 2 cents is that we should really think about why this is needed and why we 
can’t operate using signed-cookie-like mechanisms (after all, it works for 
everyone else). If cross-region tokens are a problem, then maybe we should 
solve the root of the issue (having a token that works across regions) so that 
no replication is needed at all...
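
To be concrete about what I mean by signed-cookie-like, here is a toy
sketch (this is not what Keystone does today, and the key distribution is
completely hand-waved):

    import hashlib
    import hmac

    SIGNING_KEY = b'shared-secret'  # assumed shared across regions out of band

    def issue(payload):
        # Any region holding the key can validate locally -- no token store.
        mac = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return payload + b'.' + mac.encode('ascii')

    def validate(token):
        payload, _, mac = token.rpartition(b'.')
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected.encode('ascii'))

Revocation becomes the hard part of course, but a small revocation list is
a much easier thing to replicate than every token ever issued.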

On Sep 29, 2014, at 8:22 PM, Adam Young ayo...@redhat.com wrote:

 On 09/29/2014 12:12 PM, Jay Pipes wrote:
 Hey Stackers,
 
 So, I had a thought this morning (uh-oh, I know...).
 
 What if we wrote a token driver in Keystone that uses Swift for backend 
 storage?
 
 I have long been an advocate of the memcache token driver versus the SQL 
 driver for performance reasons. However, the problem with the memcache token 
 driver is that if you want to run multiple OpenStack regions, you could 
 share the identity data in Keystone using replicated database technology 
 (mysql galera/PXC, pgpool II, or even standard mysql master/slave), but each 
 region needs to have its own memcache service for tokens. This means that 
 tokens are not shared across regions, which means that users have to log in 
 separately to each region's dashboard.
 
 I personally considered this a tradeoff worth accepting. But then, today, I 
 thought... what about storing tokens in a globally-distributed Swift 
 cluster? That would take care of the replication needs automatically, since 
 Swift would do the needful. And, add to that, Swift was designed for storing 
 lots of small objects, which tokens are...
 
 Thoughts? I think it would be a cool dogfooding effort if nothing else, and 
 give users yet another choice in how they handle multi-region tokens.
 
 Um...I hate all persisted tokens.  This takes them to a new level of badness.
 
 Do we really need this?
 
 
 
 
 Best,
 -jay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

2014-09-29 Thread Dugger, Donald D
1) Forklift status
2) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-29 Thread Xu Han Peng

Robert,

I think the CLI will look something like this, based on Mark's suggestion:

neutron port-create extra_dhcp_opts 
opt_name=dhcp_option_name,opt_value=value,version=4(or 6) network


This extra_dhcp_opts argument can be repeated, and version is optional (no 
version means version=4).
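
So the port body itself would end up looking roughly like this (just an
illustration of the proposed format, written here as a Python dict):

    port_body = {
        'port': {
            'extra_dhcp_opts': [
                # no version key: treated as an IPv4 option, as today
                {'opt_name': 'bootfile-name', 'opt_value': 'testfile.1'},
                # explicit IPv4-only option
                {'opt_name': 'tftp-server', 'opt_value': '123.123.123.123',
                 'version': 4},
                # explicit IPv6-only option
                {'opt_name': 'dns-server',
                 'opt_value': '[2001:0200:feed:7ac0::1]', 'version': 6},
            ],
        },
    }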


Xu Han

On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:

Hi Xu Han,

My question is what the CLI user interface would look like to 
distinguish between v4 and v6 dhcp options?


Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng pengxu...@gmail.com wrote:


Mark's suggestion works for me as well. If no one objects, I am
going to start the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:


On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:


Currently the extra_dhcp_opts has the following API interface on
a port:

{
port:
{
extra_dhcp_opts: [
{opt_value: testfile.1,opt_name: bootfile-name},
{opt_value: 123.123.123.123, opt_name:
tftp-server},
{opt_value: 123.123.123.45, opt_name:
server-ip-address}
],

 }
}

During the development of DHCPv6 function for IPv6 subnets, we
found this format doesn't work anymore because an port can have
both IPv4 and IPv6 address. So we need to find a new way to
specify extra_dhcp_opts for DHCPv4 and DHCPv6, respectively.
(https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix
(v4 or v6) so we can distinguish opts for v4 or v6 by parsing
the opt_name. For backward compatibility, no prefix means IPv4
dhcp opt.

extra_dhcp_opts: [
{opt_value: testfile.1,opt_name: bootfile-name},
{opt_value: 123.123.123.123, opt_name:
*v4:*tftp-server},
{opt_value: [2001:0200:feed:7ac0::1],
opt_name: *v6:*dns-server}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For
backward compatibility, both old format and new format are
acceptable, but old format means IPv4 dhcp opts.

extra_dhcp_opts: {
 ipv4: [
{opt_value: testfile.1,opt_name:
bootfile-name},
{opt_value: 123.123.123.123, opt_name:
tftp-server},
 ],
 ipv6: [
{opt_value: [2001:0200:feed:7ac0::1],
opt_name: dns-server}
 ]
}

The pro of Option1 is there is no need to change API structure
but only need to add validation and parsing to opt_name. The con
of Option1 is that user need to input prefix for every opt_name
which can be error prone. The pro of Option2 is that it's
clearer than Option1. The con is that we need to check two
formats for backward compatibility.

We discussed this in IPv6 sub-team meeting and we think Option2
is preferred. Can I also get community's feedback on which one
is preferred or any other comments?



I'm -1 for both options because neither is properly backwards
compatible.  Instead we should add an optional 3rd value to the
dictionary: version.  The version key would be used to make the
option only apply to either version 4 or 6.  If the key is
missing or null, then the option would apply to both.

mark



___
OpenStack-dev mailing list

OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev