Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-30 Thread Brian Haley
Kevin,

The only thing this discussion has convinced me of is that allowing users to
change the fixed IP address on a neutron port leads to a bad user-experience.
Even with an 8-minute renew time you're talking up to a 7-minute blackout (87.5%
of lease time before using broadcast).  This is time that customers are paying
for.  Most would have rebooted long before then, true?  Cattle not pets, right?

Changing the lease time is just papering-over the real bug - neutron doesn't
support seamless changes in IP addresses on ports, since it totally relies on
the dhcp configuration settings a deployer has chosen.  Bickering over the lease
time doesn't fix that non-deterministic recovery for the VM.  Documenting that
a VM reboot is necessary, or even deprecating this (you won't like that), is
sounding better to me by the minute.
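
For reference, the knob in question is dhcp_lease_duration in neutron.conf.
A minimal sketch, assuming the Havana-era default:

  [DEFAULT]
  # DHCP lease length in seconds; 86400 (24 hours) has been the default
  # since Havana.  -1 means infinite leases.
  dhcp_lease_duration = 86400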

Is there anyone else that has used, or has customers using, this part of the
neutron API?  Can they share their experiences?

-Brian


On 01/30/2015 07:26 AM, Kevin Benton wrote:
But they will if we document it well, which is what Salvatore suggested.
 
 I don't think this is a good approach, and it's a big part of why I started
 this thread. Most of the deployers/operators I have worked with only read the
 bare minimum documentation to get a Neutron deployment working and they only
 adjust the settings necessary for basic functionality.
 
 We have an overwhelming amount of configuration options and adding a note
 specifying that a particular setting for DHCP leases has been optimized to
 reduce logging at the cost of long downtimes during port IP address updates
 is a waste of time and effort on our part.
 
I think the current default value is also more indicative of something
 you'd find in your house, or at work - i.e. stable networks.
 
 Tenants don't care what the DHCP lease time is or that it matches what they
 would see from a home router. They only care about connectivity. 
 
One solution is to disallow this operation.
 
 I want this feature to be useful in deployments by default, not strip it
 away. You can probably do this with /etc/neutron/policy.json without a code
 change if you wanted to block it in a deployment like yours where you have
 such a high lease time.
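 
 An untested sketch of that policy.json fragment, assuming the stock rule
 names:
 
   "update_port:fixed_ips": "rule:admin_only"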
 
Perhaps letting the user set it, but allow the admin to set the valid range
 for min/max?  And if they don't specify they get the default?
 
 Tenants wouldn't have any reason to adjust this default. They would be even
 less likely than the operator to know about this weird relationship between a
 DHCP setting and the amount of time they lose connectivity after updating
 their ports' IPs.
 
It impacts anyone that hasn't changed from the default since July 2013 and
 later (Havana), since if they don't notice, they might get bitten by it.
 
 Keep in mind that what I am suggesting with the lease-renewal-time would be
 separate from the lease expiration time. The only difference that an operator
 would see on upgrade (if using the defaults) is increased DHCP traffic and
 more logs to syslog from dnsmasq. The lease time would still be the same so
 the downtime windows for DHCP agents would be maintained. That is much less
 of an impact than many of the non-config changes we make between cycles.
 
 To clarify: even with the dhcp-renewal-time option I am proposing, you are
 still opposed to setting it to anything low because of the logging and the
 ~24 bps background DHCP traffic per VM?
 
 On Thu, Jan 29, 2015 at 7:11 PM, Brian Haley brian.ha...@hp.com
 mailto:brian.ha...@hp.com wrote:
 
 On 01/29/2015 05:28 PM, Kevin Benton wrote:
 How is Neutron breaking this?  If I move a port on my physical switch to a
  different subnet, can you still communicate with the host sitting on it?
  Probably not since it has a view of the world (next-hop router) that no
  longer exists, and the network won't route packets for its old IP address
  to the new location.  It has to wait for its current DHCP lease to tick
  down to the point where it will use broadcast to get a new one, after
  which point it will work.
 
  That's not just moving to a different subnet. That's moving to a different
  broadcast domain. Neutron supports multiple subnets per network (broadcast
  domain). An address on either subnet will work. The router has two
  interfaces into the network, one on each subnet.[2]
 
 
 Does it work on Windows VMs too?  People run those in clouds too.  The
  point is that if we don't know if all the DHCP clients will support it
  then it's a non-starter since there's no way to tell from the server side.
 
  It appears they do.[1] Even for clients that don't, the worst case
  scenario is just that they are stuck where we are now.
 
 ... then the deployer can adjust the value upwards..., hmm, can they
  adjust it downwards as well?  :)
 
  Yes, but most people doing initial openstack deployments don't and 

Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-30 Thread Doug Hellmann


On Thu, Jan 29, 2015, at 07:03 PM, Angus Lees wrote:
 The other factor I haven't seen mentioned here is that our usual
 eventlet+mysqldb setup can deadlock rather easily(*), resulting in the
 entire python process sitting idle for 30s until the mysqldb deadlock
 timer goes off and raises an exception.  At this point (in nova at least),
 the request is usually retried and almost certainly succeeds.  In neutron
 (as another example), the requests weren't so easy to retry and code
 examples where this was noticed had to be explicitly restructured to defer
 work that might yield the greenlet.
 ie: it's a big cost on coding and deployments, as well as just being plain
 incorrect code.
 
 (*) eg: send simultaneous requests that modify the same resource.
 Eventually you'll have two greenlets within the same api server executing
 conflicting database operations.  Entire python process freezes.  (Note
 this is not a regular db transaction block; neither greenlet will ever
 exit their transactions without the deadlock timer.)
 
 Bug here: https://bugs.launchpad.net/oslo.db/+bug/1350149
 Test that reproduces the issue here:
 https://review.openstack.org/#/c/104436/
 
 I'm dismayed that the conversation always skews towards discussing
 performance when the current choice isn't even correct yet, and that as a
 community we seem to be unwilling/unable to get behind what should be
 quite an obvious technical issue with straightforward solutions.

My impression of this thread is that we're reaching consensus that we
should move to PyMySQL, but that we want to make that move carefully
because we expect issues due to the changes in performance and context
switching behaviors. We've seen in the past that even slight changes in
some timings expose race conditions, so while raw performance isn't
critical the timing can be as tests run in the gate. Spending some time
on a migration & test plan, and making sure that people are ready to
help debug those sorts of issues, are good next steps.

The issue of moving off of eventlet can be addressed separately, after
the driver change.

So, do we need a cross-project spec to write down those details?

Doug

 
 If we absolutely can't switch to another mysql driver, another option
 that was suggested recently (and passes the above test) is using
 eventlet.monkey_patch(MySQLdb=True).  I haven't done the investigation to
 find out why that isn't the default, or what the downsides are.  This
 obviously doesn't help us with other factors, like python3-ness, either.
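 
 A minimal sketch of that workaround, assuming it runs before anything
 imports MySQLdb:
 
   import eventlet
   # Patch the MySQLdb driver so its blocking calls cooperate with
   # greenthreads instead of stalling the whole process.
   eventlet.monkey_patch(MySQLdb=True)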
 
  - Gus
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db][nova] Deprecating use_slave in Nova

2015-01-30 Thread Matthew Booth
At some point in the near future, hopefully early in L, we're intending
to update Nova to use the new database transaction management in
oslo.db's enginefacade.

Spec:
http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst

Implementation:
https://review.openstack.org/#/c/138215/

One of the effects of this is that we will always know when we are in a
read-only transaction, or a transaction which includes writes. We intend
to use this new contextual information to make greater use of read-only
slave databases. We are currently proposing that if an admin has
configured a slave database, we will use the slave for *all* read-only
transactions. This would make the use_slave parameter passed to some
Nova apis redundant, as we would always use the slave where the context
allows.
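
To illustrate, a minimal sketch of the intended usage, based on the spec
above (the final decorator API may differ, and the Instance model here is
illustrative):

  from oslo_db.sqlalchemy import enginefacade

  @enginefacade.reader
  def instance_get(context, instance_uuid):
      # Read-only transaction: with a slave configured this could be
      # routed to the slave automatically - no use_slave flag needed.
      return context.session.query(Instance).filter_by(
          uuid=instance_uuid).first()

  @enginefacade.writer
  def instance_update(context, instance_uuid, values):
      # Write transaction: always runs against the master.
      context.session.query(Instance).filter_by(
          uuid=instance_uuid).update(values)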

However, using a slave database has a potential pitfall when mixed with
separate write transactions. A caller might currently:

1. start a write transaction
2. update the database
3. commit the transaction
4. start a read transaction
5. read from the database

The client might expect data written in step 2 to be reflected in data
read in step 5. I can think of 3 cases here:

1. A short-lived RPC call is using multiple transactions

This is a bug which the new enginefacade will help us eliminate. We
should not be using multiple transactions in this case. If the reads are
in the same transaction as the write: they will be on the master, they
will be consistent, and there is no problem. As a bonus, lots of these
will be race conditions, and we'll fix at least some.

2. A long-lived task is using multiple transactions between long-running
sub-tasks

In this case, for example creating a new instance, we genuinely want
multiple transactions: we don't want to hold a database transaction open
while we copy images around. However, I can't immediately think of a
situation where we'd write data, then subsequently want to read it back
from the db in a read-only transaction. I think we will typically be
updating state, meaning it's going to be a succession of write transactions.

3. Separate RPC calls from a remote client

This seems potentially problematic to me. A client makes an RPC call to
create a new object. The client subsequently tries to retrieve the
created object, and gets a 404.

Summary: 1 is a class of bugs which we should be able to find fairly
mechanically through unit testing. 2 probably isn't a problem in
practice? 3 seems like a problem, unless consumers of cloud services are
supposed to expect that sort of thing.

I understand that slave databases can occasionally get very behind. How far
behind is this in practice?

How do we use use_slave currently? Why do we need a use_slave parameter
passed in via rpc, when it should be apparent to the developer whether a
particular task is safe for out-of-date data?

Any chance they have some kind of barrier mechanism? e.g. block until
the current state contains transaction X.

General comments on the usefulness of slave databases, and the
desirability of making maximum use of them?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-30 Thread Fox, Kevin M
I was asking earlier this week about keystone resources on the irc channel...

We're thinking about having a tenant per user on one of our clouds. We're using 
neutron. So setting this up involves:

 * Creating a User
 * Creating a Tenant
 * Assigning Roles
 * Creating the tenant's default private network. (owned by the tenant)
 * Creating a Neutron Router. (owned by the tenant)
 * Setting the Router gateway.
 * Plugging in the Router to the Private network.
 * Setting some additional security group rules on the user's default group. 
(Out of the box we want ICMP and port 22 open.)

We'd like to have the heat stack maintained by the admin's tenant so they are 
protected.

I tried but some of this stuff can't be done in heat today. I ended up having 
to write a shell script.

I'd love to be able to use heat for this.
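
For the sake of argument, the template I'd like to write would look roughly
like this (the OS::Keystone::* resource types are hypothetical - they don't
exist today, which is exactly the gap):

  heat_template_version: 2013-05-23
  resources:
    user:
      type: OS::Keystone::User             # hypothetical
      properties: {name: new_user}
    tenant:
      type: OS::Keystone::Tenant           # hypothetical
      properties: {name: new_tenant}
    assignment:
      type: OS::Keystone::RoleAssignment   # hypothetical
      properties:
        user: {get_resource: user}
        tenant: {get_resource: tenant}
        role: Member
    private_net:
      type: OS::Neutron::Net               # exists today
    router:
      type: OS::Neutron::Router            # exists today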

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, January 29, 2015 8:41 AM
To: openstack Development Mailing List
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][gate][stable] How eventlet 0.16.1 broke the gate

2015-01-30 Thread Joshua Harlow

Cool,

I've got to try that out today to see what it's doing.

I've also shoved my little program up @ 
https://github.com/harlowja/pippin (the pip-tools one is definitely more 
elegantly coded than mine, haha).


Feel free to fork it (modify, run, or ...)

Basic instructions to use it:

https://github.com/harlowja/pippin#pippin

-Josh

Bailey, Darragh wrote:

You may find the code for pip-compile
https://github.com/nvie/pip-tools/tree/future of interest for this, as I
think they may already have a solution for the deep dependency analysis.


I've started experimenting with it for git-upstream because GitPython has a
habit of breaking stuff through a couple of releases now :-(


What I like is:
* Doesn't require an extra tool before using 'pip install'
** Some may want to regen the dependencies, but it's optional and the
common python dev approach is retained
* Stable releases are guaranteed to use the versions of dependencies
they were released and verified against
* Improves on the guarantee of gated branch CI
** The idea that if you sync with upstream any test failures are due to
your local changes
** Which is not always true if updated deps can break stuff


On the flip side:
* You remain exposed to security issues in python code until you
manually update
* Development cycle doesn't move forward automatically, may not see
compatibility issues until late when forced to move forward one of the deps


Think the cons can be handled by some additional CI jobs to update the
pins on a regular basis and pass it through the standard gates and
potentially to auto approve during development cycles if they pass
(already getting the latest matching ones so no big diff here). Some
decisions on trade off around whether this should be done for stable
releases automatically or periodically requiring manual approval would
have to be made.


Did I say how much I like the fact that it doesn't require another tool
before just being able to use 'pip install'?


To experiment with it:
virtualenv .venv/pip-tools
source .venv/pip-tools/bin/activate
pip install git+https://github.com/nvie/pip-tools.git@future

Regards,
Darragh Bailey

Nothing is foolproof to a sufficiently talented fool - Unknown

On 22/01/15 03:45, Joshua Harlow wrote:

A slightly better version that starts to go deeper (and downloads
dependencies of dependencies and extracts there egg_info to get at
these dependencies...)

https://gist.github.com/harlowja/555ea019aef4e901897b

Output @ http://paste.ubuntu.com/9813919/

When ran on the same 'test.txt' mentioned below...

Happy hacking!

-Josh

Joshua Harlow wrote:

A run that shows more of the happy/desired path:

$ cat test.txt
six>1
taskflow<0.5
$ python pippin.py -r test.txt
Initial package set:
- six ['>1']
- taskflow ['<0.5']
Deep package set:
- six ['==1.9.0']
- taskflow ['==0.4.0']

-Josh

Joshua Harlow wrote:

Another thing that I just started whipping together:

https://gist.github.com/harlowja/5e39ec5ca9e3f0d9a21f

The idea for the above is to use pip to download dependencies, but
figure out what versions will work using our own resolver (and our own
querying of 'http://pypi.python.org/pypi/%s/json') that just does a very
deep search of all requirements (and requirements of requirements...).

The idea for that is that the probe() function in that gist will
'freeze' a single requirement then dive down into further requirements
and ensure compatibility while that 'diving' (aka, recursion into
further requirements) is underway. If an incompatibility is found then
the recursion will back-track and try to freeze a different version of
a desired package (and repeat...).

To me this kind of deep finding would be a potential way of making this
work in a way that basically only uses pip for downloading, while doing
the deep matching/probing on our own. Once the algorithm above stops
backtracking and finds a matching set of requirements that will all work
together, the program can exit (and this set can then be used as the
master set for openstack; at that point we might have to tell people to
not use pip, or to only use pip --download to fetch the compatible
versions).
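
A stripped-down sketch of that probe/backtrack idea (illustrative, not the
gist's actual code; candidate_versions(), deps_of() and spec.matches() are
assumed helpers):

  def probe(requirements, frozen):
      # requirements: list of (pkg_name, spec); frozen: pkg -> version
      if not requirements:
          return frozen  # everything satisfied
      pkg, spec = requirements[0]
      for version in candidate_versions(pkg):  # e.g. from the pypi json api
          if not spec.matches(version):
              continue
          if pkg in frozen and frozen[pkg] != version:
              continue  # conflicts with an earlier choice
          attempt = dict(frozen)
          attempt[pkg] = version
          # Recurse over the remaining requirements plus this pick's own.
          result = probe(requirements[1:] + deps_of(pkg, version), attempt)
          if result is not None:
              return result  # this choice worked all the way down
      return None  # nothing fits; the caller back-tracks and retries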

It's not completed but it could be complementary to what others are
working on; feel free to hack away :)

So far the following works:

$ cat test.txt
six>1
taskflow>1

$ python pippin.py -r test.txt
Initial package set:
- six ['>1']
- taskflow ['>1']
Traceback (most recent call last):
  File "pippin.py", line 168, in <module>
    main()
  File "pippin.py", line 162, in main
    matches = probe(initial, {})
  File "pippin.py", line 139, in probe
    result = probe(requirements, gathered)
  File "pippin.py", line 129, in probe
    m = find_match(pkg_name, req)
  File "pippin.py", line 112, in find_match
    return match_available(req.req, find_versions(pkg_name))
  File "pippin.py", line 108, in match_available
    " matches '%s' (tried %s)" % (req, looked_in))
__main__.NotFoundException: No requirement found that matches
'taskflow>1' (tried ['0.6.1', '0.6.0', '0.5.0', '0.4.0', '0.3.21',
'0.2', 

[openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-01-30 Thread Daniel P. Berrange
In working on a recent Nova migration bug

  https://bugs.launchpad.net/nova/+bug/1414065

I had cause to refactor the way the nova libvirt driver monitors live
migration completion/failure/progress. This refactor has opened the
door for doing more intelligent active management of the live migration
process.

As it stands today, we launch live migration, with a possible bandwidth
limit applied and just pray that it succeeds eventually. It might take
until the end of the universe and we'll happily wait that long. This is
pretty dumb really and I think we really ought to do better. The problem
is that I'm not really sure what better should mean, except for ensuring
it doesn't run forever.

As a demo, I pushed a quick proof of concept showing how we could easily
just abort live migration after say 10 minutes

  https://review.openstack.org/#/c/151665/

There are a number of possible things to consider though...

First, how to detect when live migration isn't going to succeed.

 - Could do a crude timeout, eg allow 10 minutes to succeed or else.

 - Look at data transfer stats (memory transferred, memory remaining to
   transfer, disk transferred, disk remaining to transfer) to determine
   if it is making forward progress.

 - Leave it up to the admin / user to decide if it has gone on long enough

The first is easy, while the second is harder but probably more reliable
and useful for users.
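
A sketch of what the data-driven check might look like (field and helper
names here are illustrative, not the real libvirt/nova ones):

  def is_stalled(remaining_history, window=5):
      # remaining_history: bytes left to transfer, sampled once per poll
      # tick from the libvirt migration job stats.  If no sample in the
      # last `window` ticks dropped below the first, we made no progress.
      if len(remaining_history) < window:
          return False
      recent = remaining_history[-window:]
      return min(recent) >= recent[0]

The monitoring loop would append each sample and abort the job (e.g. via
dom.abortJob()) once this returns True or a hard timeout expires.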

Second is a question of what to do when it looks to be failing

 - Cancel the migration - leave it running on source. Not good if the
   admin is trying to evacuate a host.

 - Pause the VM - make it complete as non-live migration. Not good if
   the guest workload doesn't like being paused

 - Increase the bandwidth permitted. There is a built-in rate limit in
   QEMU overridable via nova.conf. Could argue that the admin should just
   set their desired limit in nova.conf and be done with it, but perhaps
   there's a case for increasing it in special circumstances. eg in an
   emergency evacuation of a host it is better to waste bandwidth & complete
   the job, but for non-urgent scenarios better to limit bandwidth & accept
   failure ?

 - Increase the maximum downtime permitted. This is the small time window
   when the guest switches from source to dest. Too small and it'll never
   switch, too large and it'll suffer unacceptable interruption.

We could do some of these things automatically based on some policy
or leave them up to the cloud admin/tenant user via new APIs

Third there's question of other QEMU features we could make use of to
stop problems in the first place

 - Auto-converge flag - if you set this QEMU throttles back the CPUs
   so the guest cannot dirty ram pages as quickly. This is nicer than
   pausing CPUs altogether, but could still be an issue for guests
   which have strong performance requirements

 - Page compression flag - if you set this QEMU does compression of
   pages to reduce data that has to be sent. This is basically trading
   off network bandwidth vs CPU burn. Probably a win unless you are
   already highly overcommitted on CPU on the host

Fourth there's a question of whether we should give the tenant user or
cloud admin further APIs for influencing migration

 - Add an explicit API for cancelling migration ?

 - Add APIs for setting tunables like downtime, bandwidth on the fly ?

 - Or drive some of the tunables like downtime, bandwidth, or policies
   like cancel vs paused from flavour or image metadata properties ?

 - Allow operations like evacuate to specify a live migration policy
   eg switch non-live migrate after 5 minutes ?

The current code is so crude and there's a hell of a lot of options we
can take. I'm just not sure which is the best direction for us to go
in.

What kind of things would be the biggest win from Operators' or tenants'
POV ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-30 Thread Brant Knudson
On Thu, Jan 29, 2015 at 5:01 PM, Jay Faulkner j...@jvf.cc wrote:


  On Jan 29, 2015, at 2:52 PM, Kevin Benton blak...@gmail.com wrote:

  Oh, I understood it a little differently. I took "parsing of error
 messages here is not the way we’d like to solve this problem" as meaning
 that parsing them in their current ad-hoc, project-specific format is not
 the way we want to solve this (e.g. the way tempest does it). But if we had
 a structured way like the EC2 errors, it would be a much easier problem to
 solve.

  So either way we are still parsing the body, the only difference is that
 the parser no longer has to understand how to parse Neutron errors vs. Nova
 errors. It just needs to parse the standard OpenStack error format that
 we come up with.


  This would be especially helpful for things like haproxy or other load
 balancers, as you could then have them put up a static, openstack-formatted
 JSON error page for their own errors and trust the clients could parse them
 properly.

  -Jay



It shouldn't be necessary for proxies to generate openstack-formatted error
pages. A proxy can send a response where the content-type is text/plain and
the client can show the message, treating it as just some text to display.
I think that's all we're expecting a client to do in general, especially
when it doesn't have enough information to actually take some sort of
useful action in response to the error. Clients that get an
openstack-formatted message (with content-type: application/json) can parse
out the message and display that, or look at the error ID and do something
useful.
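
A sketch of that client logic, assuming a requests-style response object:

  import json

  def error_message(resp, body):
      ctype = resp.headers.get('content-type', '')
      if ctype.startswith('application/json'):
          # Structured OpenStack error: pull out the message for display
          # (or the error ID, for clients that want to act on it).
          try:
              return json.loads(body)['message']
          except (ValueError, KeyError):
              pass
      # A proxy's text/plain page, or anything else: just show it.
      return body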

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-30 Thread James E. Blair
Hi,

The Infrastructure program has a unique three-tier team structure:
contributors (that's all of us!), core members (people with +2 ability
on infra projects in Gerrit) and root members (people with
administrative access).  Read all about it here:

  http://ci.openstack.org/project.html#team

Elizabeth K. Joseph has been reviewing a significant number of infra
patches for some time now.  She has taken on a number of very large
projects, including setting up our Git server farm, adding support for
infra servers running on CentOS, and setting up the Zanata translation
system (and all of this without shell access to production machines).

She understands all of our servers, regardless of function, size, or
operating system.  She has frequently spoken publicly about the unique
way in which we perform systems administration, articulating what we are
doing and why in a way that inspires us as much as others.

Due to her strong systems administration background, I am nominating her
for both infra-core and infra-root simultaneously.  I expect many of us
are looking forward to seeing her insight and direction applied with +2s
but also equally excited for her to be able to troubleshoot things when
our best-laid plans meet reality.

Please respond with any comments or concerns.

Thanks, Elizabeth, for all your work!

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question about import module

2015-01-30 Thread Doug Hellmann


On Thu, Jan 29, 2015, at 09:10 PM, liuxinguo wrote:
 * I have seen that the module 'oslo.config' has changed to
 'oslo_config' in Kilo but in Juno it is still 'oslo.config'.
 
 I want my code to work compatibly with both Juno and Kilo so I import this
 module in this way:
 
 try:
     from oslo_config import cfg
 except ImportError:
     from oslo.config import cfg
 
 * Will this way of importing the module be accepted by the
 community? Or is there any other better way?

There's no need to do this. The config library still supports using
oslo.config for now, and we are going to cap the versions of the
libraries used in Juno to avoid issues in the future.

Please update master branches of any projects using Oslo libraries to
import from the new name only.

Doug

 
 Thanks and regards,
 Liu
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-30 Thread Chris Dent

On Fri, 30 Jan 2015, Doug Hellmann wrote:


My impression of this thread is that we're reaching consensus that we
should move to PyMySQL, but that we want to make that move carefully
because we expect issues due to the changes in performance and context
switching behaviors. We've seen in the past that even slight changes in
some timings expose race conditions, so while raw performance isn't
critical the timing can be as tests run in the gate. Spending some time
on a migration & test plan, and making sure that people are ready to
help debug those sorts of issues, are good next steps.


If changing drivers is going to expose bugs and then lead to fixing
them, clearly we should be changing drivers frequently.

(Or to be less indirect: fragility in the face of timing changes is
a bug, let's look upon these things as positive opportunities to fix
things what lurk below.)

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-30 Thread Clark Boylan
On Fri, Jan 30, 2015, at 09:20 AM, James E. Blair wrote:
 Hi,
 
 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:
 
   http://ci.openstack.org/project.html#team
 
 Elizabeth K. Joseph has been reviewing a significant number of infra
 patches for some time now.  She has taken on a number of very large
 projects, including setting up our Git server farm, adding support for
 infra servers running on CentOS, and setting up the Zanata translation
 system (and all of this without shell access to production machines).
 
 She understands all of our servers, regardless of function, size, or
 operating system.  She has frequently spoken publicly about the unique
 way in which we perform systems administration, articulating what we are
 doing and why in a way that inspires us as much as others.
 
 Due to her strong systems administration background, I am nominating her
 for both infra-core and infra-root simultaneously.  I expect many of us
 are looking forward to seeing her insight and direction applied with +2s
 but also equally excited for her to be able to troubleshoot things when
 our best-laid plans meet reality.
 
 Please respond with any comments or concerns.
 
 Thanks, Elizabeth, for all your work!
 
 -Jim
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1 install the pleia2 rootkit. This is exciting, will be great to have
pleia2 join the core and admin groups.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-30 Thread Brant Knudson
On Fri, Jan 30, 2015 at 4:20 AM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Jan 29, 2015 at 12:31:17PM -0500, Zane Bitter wrote:
  On 29/01/15 12:03, Steven Hardy wrote:
  On Thu, Jan 29, 2015 at 11:41:36AM -0500, Zane Bitter wrote:
  IIUC keystone now allows you to add users to a domain that is
 otherwise
  backed by a read-only backend (i.e. LDAP). If this means that it's now
  possible to configure a cloud so that one need not be an admin to
 create
  users then I think it would be a really useful thing to expose in
 Heat. Does
  anyone know if that's the case?
  
  I've not heard of that feature, but it's definitely now possible to
  configure per-domain backends, so for example you could have the heat
  domain backed by SQL and other domains containing real human users
 backed
  by a read-only directory.
 
  http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/

 Perhaps we need to seek clarification from Adam/Henry, but my understanding
 of that feature is not that it enables you to add users to domains backed
 by a read-only directory, but rather that multiple backends are possible,
 such that one domain can be backed by a read-only backend, and another
 (different) domain can be backed by a different read/write one.

 E.g. in the example above, you might have the "freeipa" domain backed by
 read-only LDAP which contains your directory of human users, and you might
 also have a different domain, e.g. "services" or "heat", backed by a
 read/write backend, e.g. SQL.
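 
 A rough sketch of the per-domain configuration that enables this (option
 names from keystone's domain-specific config support; the file path is the
 conventional default):
 
   # keystone.conf
   [identity]
   domain_specific_drivers_enabled = true
   domain_config_dir = /etc/keystone/domains
 
   # /etc/keystone/domains/keystone.freeipa.conf
   [identity]
   driver = keystone.identity.backends.ldap.Identity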

 Steve


You might want to think about what can be done using federation. Federation
allows keystone to talk to external identity providers, where these
identity providers have the users. What if heat was an identity provider?
Then heat would have a record of the users and they could be used with
keystone to get a token.

On a similar note, while keystone isn't going to let you create users in a
read-only LDAP backend, heat could talk directly to the LDAP server to
create users.

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-30 Thread Jeremy Stanley
On 2015-01-30 09:20:44 -0800 (-0800), James E. Blair wrote:
[...]
 Elizabeth K. Joseph has been reviewing a significant number of infra
 patches for some time now. She has taken on a number of very large
 projects, including setting up our Git server farm, adding support for
 infra servers running on CentOS, and setting up the Zanata translation
 system (and all of this without shell access to production machines).
 
 She understands all of our servers, regardless of function, size, or
 operating system.  She has frequently spoken publicly about the unique
 way in which we perform systems administration, articulating what we are
 doing and why in a way that inspires us as much as others.
 
 Due to her strong systems administration background, I am nominating her
 for both infra-core and infra-root simultaneously.
[...]

I would be thrilled to have Elizabeth as a fellow core reviewer and
root sysadmin as soon as possible. She'll be a welcome member to the
team as far as I'm concerned!
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][log] Openstack HTTP error codes

2015-01-30 Thread Everett Toews
On Jan 29, 2015, at 7:34 PM, Rochelle Grober rochelle.gro...@huawei.com wrote:

Hi folks!

Changed the tags a bit because this is a discussion for all projects and 
dovetails with logging rationalization/standards/

At the Paris summit, we had a number of sessions on logging that kept circling 
back to Error Codes.  But, these codes would not be http codes; rather, as 
others have pointed out, codes related to the calling entities and referring 
entities and the actions that happened or didn’t.  Format suggestions were 
gathered from the Operators and from some senior developers.  The Logging 
Working Group is planning to put forth a spec for discussion on formats and 
standards before the Ops mid-cycle meetup.

Working from a Glance proposal on error codes:  
https://review.openstack.org/#/c/127482/ and discussions with operators and 
devs, we have a strawman to propose.  We also have a number of requirements 
from Ops and some Devs.

Here is the basic idea:

Code for logs would have four segments:

         Project            Vendor/Component     Error Catalog number   Criticality
  Def:   [A-Z][A-Z][A-Z] -  [{0-9}|{A-Z}][A-Z] - [0-9][0-9][0-9][0-9] - [0-9]
  Ex.:   CIN             -  NA                 - 0001                 - 2
         (Cinder)           (NetApp driver)      (error no)             (Criticality)
  Ex.:   GLA             -  0A                 - 0051                 - 3
         (Glance)           (API)                (error no)             (Criticality)

Three letters for the project; either a two-letter vendor code or a number plus
letter for an internal component of the project (like API=0A, Controller=0C,
etc.); a four-digit error number, which could be subsetted for even finer
granularity; and a criticality number.

This is for logging purposes and tracking down root cause faster for operators, 
but if an error is generated, why not use the same codes internally in the code 
as externally in the logs?
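
To make the strawman concrete, such a code splits trivially; a sketch:

  def parse_error_code(code):
      # 'CIN-NA-0001-2' -> project, vendor/component, error no, criticality
      project, component, number, criticality = code.split('-')
      return {'project': project, 'component': component,
              'error': number, 'criticality': int(criticality)}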

I like the idea of the log error codes being aligned with the API errors codes 
but I have some thoughts/concerns.

Project: A client dealing with the API already knows what project (service) 
they’re dealing with. Including this in an API error message would be 
redundant. That’s not necessarily so bad and it could actually be convenient 
for client logging purposes to have this there.

Vendor/Component: Including any vendor information at all would be leaking 
implementation details. This absolutely cannot be exposed in an API error 
message. Even including the component would be leaking too much.

Error Catalog Number: If there could be alignment around this, that would be 
great.

Criticality: This might be useful to clients? I don’t know. I don’t feel too 
strongly about it.

Thanks,
Everett

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Deprecation of LDAP Assignment (Only Affects Project/Tenant/Role/Assignment info in LDAP)

2015-01-30 Thread Morgan Fainberg
On January 29, 2015 at 3:19:34 AM, Yuriy Taraday (yorik@gmail.com) wrote:
Hello.

On Wed Jan 28 2015 at 11:30:43 PM Morgan Fainberg morgan.fainb...@gmail.com 
wrote:
LDAP is used in Keystone as a backend for both the Identity (Users and groups) 
and assignments (assigning roles to users) backend.

Where did the LDAP Assignment backend come from? We originally had a single 
backend for Identity (users, groups, etc) and Assignment (Projects/Tenants, 
Domains, Roles, and everything else not-users-and-groups). When we did the 
split of Identity and Assignment we needed to support the organizations that 
deployed everything in the LDAP backend. This required both a driver for 
Identity and Assignment.

 We are planning on keeping support for identity while deprecating support for 
assignment.  There is only one known organization that this will impact (CERN) 
and they have a transition plan in place already.

I can (or actually can't do it here) name quite a few of our customers who do 
use LDAP assignment backend. The issue that is solved by this is data 
replication across data centers. What would be the proposed solution for them? 
MySQL multi-master replication (Galera) is feared to perform badly across DC.


A couple of thoughts on this front: If the remote systems are not updating the 
assignment data, it would be possible to use read-only replication to the 
remote datacenter. Galera performance is actually quite good with local reading 
even with replication across a WAN. 

But more importantly, what are you trying to solve with replicating the data 
across datacenters? Anything more than very limited cases (where Galera would 
work) becomes a very, very difficult system to maintain, with a highly complex 
topology that is as likely to be fragile as mysql replication. 
Multi-master-cross-datacenter replication of LDAP is probably even scarier in 
my opinion. I’d really encourage a move towards using federated Identity 
instead as it helps to encapsulate the data by DC and limit failure domains 
(overall a better design). However, I do get that moving to Federated Identity 
is a complete re-design.

The Problem
——
The SQL Assignment backend has become significantly more feature-rich, and due 
to the limitations of the basic LDAP schemas available (most LDAP admins won't 
let someone load custom schemas), the LDAP assignment backend has languished 
and fallen further and further behind. It turns out almost no deployments use 
LDAP to house projects/tenants, domains, roles, etc. A lot of deployments use 
LDAP for users and groups.

We explored many options on this front and it boiled down to three:

1. Try and figure out how to wedge all the new features into a sub-optimal data 
store (basic/standard LDAP schemas)
2. Create a custom schema for LDAP Assignment. This would require convincing 
LDAP admins (or Active Directory admins) to load a custom schema. This also was 
a very large amount of work for a very small deployment base.
3. Deprecate the LDAP Assignment backend and work with the community to support 
(if desired) an out-of-tree LDAP driver (supported by those who need it).

I'd like to note that it is in fact possible to make the LDAP backend work even 
with the native AD schema without modifications. The only issue that has been 
hanging over the LDAP schema from the very beginning of the LDAP driver is the 
usage of groupOfNames for projects and the nesting of other objects under it. 
With some fixes we managed to make it work with the stock AD schema with no 
modifications for Havana and ported that to Icehouse.
I hate to be blunt here, but where has the contribution of these “fixes” been? 

I am disappointed on two fronts:

1) When the surveys for LDAP assignment went out (sent to -dev, -operators, and 
main mailing lists) I received no indication you were using it (in fact I 
received specific information to the contrary). 

2) That these fixes you are speaking of are unknown to me. LDAP Assignment has 
been barely maintained. So far it has been the core team maintaining it with 
input/some help from a single large deployer who has already committed to 
moving away from LDAP-based assignments in Keystone. The maintenance from the 
core team really has been to make sure it didn’t just stop working, no feature 
parity with the other (SQL) backend has even been attempted due to the lack of 
interest.

Based upon interest, workload, and general maintainability issues, we have 
opted to deprecate the LDAP Assignment backend. What does this mean?

1. This means effective as of Kilo, the LDAP assignment backend is deprecated 
and Frozen.
1.a. No new code/features will be added to the LDAP Assignment backend.
1.b. Only exception to 1.a is security-related fixes.

2. The LDAP Assignment backend (the "[assignment]/driver" config option set to 
"keystone.assignment.backends.ldap.Assignment" or a subclass) will remain 
in-tree with plans to be removed in the "M" release.
2.a. This is subject to support beyond the "M" release based upon what 

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-30 Thread Matt Riedemann



On 1/30/2015 3:16 PM, Soren Hansen wrote:

As I've said a couple of times in the past, I think the
architecturally sound approach is to keep this inside Nova.

The two main reasons are:
  * Having multiple frontend API's keeps us honest in terms of
separation between the different layers in Nova.
  * Having the EC2 API inside Nova ensures the internal data model is
rich enough to feed the EC2 API. If some field's only use is to
enable the EC2 API and the EC2 API is a separate component, it's not
hard to imagine it being deprecated as well.

I fear that deprecation is a one-way street and I would like to ask for
one more chance to resuscitate it in its current home.

I could be open to a discussion about putting it into a separate
repository, but having it functionally remain in its current place, if
that's somehow easier to swallow.


Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/


2015-01-28 20:56 GMT+01:00 Sean Dague s...@dague.net:

The following review for Kilo deprecates the EC2 API in Nova -
https://review.openstack.org/#/c/150929/

There are a number of reasons for this. The EC2 API has been slowly
rotting in the Nova tree, never was highly tested, implements a
substantially older version of what AWS has, and currently can't work
with any recent releases of the boto library (due to implementing
extremely old version of auth). This has given the misunderstanding that
it's a first class supported feature in OpenStack, which it hasn't been
in quite some time. Deprecating honestly communicates where we stand.

There is a new stackforge project which is getting some activity now -
https://github.com/stackforge/ec2-api. The intent and hope is that that is
the path forward for the portion of the community that wants this
feature, and that efforts will be focused there.

Comments are welcomed, but we've attempted to get more people engaged to
address these issues over the last 18 months, and never really had
anyone step up. Without some real maintainers of this code in Nova (and
tests somewhere in the community) it's really no longer viable.

 -Sean

--
Sean Dague
http://dague.net


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Deprecation isn't a one-way street really, nova-network was deprecated 
for a couple of releases and then undeprecated and opened up again for 
feature development (at least for a short while until the migration to 
neutron is sorted out and implemented).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The API WG mission statement

2015-01-30 Thread Everett Toews
Hi All,

Something we in the API WG keep bumping into are misconceptions around what our 
mission really is. There’s general agreement in the WG about our mission but we 
haven’t formalized it. 

It’s really highlighted the need for a mission statement/elevator pitch/mantra 
that we can repeat to people who are encountering us in a review or IRC meeting 
or whatever for the first time. Let’s keep it short and sweet, 2-3 sentences 
max. 

What is the API WG mission statement?

Let’s discuss.

Thanks,
Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-01-30 Thread Everett Toews
On Jan 30, 2015, at 4:57 PM, Everett Toews everett.to...@rackspace.com wrote:

 Hi All,
 
 Something we in the API WG keep bumping into are misconceptions around what 
 our mission really is. There’s general agreement in the WG about our mission 
 but we haven’t formalized it. 
 
 It’s really highlighted the need for a mission statement/elevator 
 pitch/mantra that we can repeat to people who are encountering us in a review 
 or IRC meeting or whatever for the first time. Let’s keep it short and sweet, 
 2-3 sentences max. 
 
 What is the API WG mission statement?
 
 Let’s discuss.
 
 Thanks,
 Everett

Here’s my take,

To converge the OpenStack APIs to a consistent and pragmatic RESTful design by 
creating guidelines that the projects should follow. The intent is not to 
create backwards incompatible changes in existing APIs, but to have new APIs 
and future versions of existing APIs converge.

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-01-30 Thread Dean Troyer
On Fri, Jan 30, 2015 at 4:57 PM, Everett Toews everett.to...@rackspace.com
wrote:

 What is the API WG mission statement?


It's more of a mantra than a Mission Statement(TM):

Identify existing and future best practices in OpenStack REST APIs to
enable new and existing projects to evolve and converge.

Tweetable, 126 chars!

Plus, buzzword-bingo-compatible, would score 5 in my old corporate
buzzwordlist...

dt

(Can you tell my flight has been delayed? ;)

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Davanum Srinivas
Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? Add tempest tests, etc.? CI would go a long way in
alleviating concerns, I think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com wrote:
 As you know we have been driving forward on the stackforge project and
 it's our intention to continue to support it over time, plus reinvigorate
 the GCE APIs when that makes sense. So we're supportive of deprecating
 the EC2 API in Nova.  I also think it's good for these
 APIs to be able to iterate outside of the standard release cycle.



 --Randy

 VP, Technology, EMC Corporation
 Formerly Founder & CEO, Cloudscaling (now a part of EMC)
 +1 (415) 787-2253 [google voice]
 TWITTER: twitter.com/randybias
 LINKEDIN: linkedin.com/in/randybias
 ASSISTANT: ren...@emc.com






 On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:

Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us our here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

--
Rackspace Australia

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-30 Thread Everett Toews

On Jan 29, 2015, at 11:41 AM, Sean Dague s...@dague.net wrote:

 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.
 
 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.
 
 Having a standard json error payload would be really nice.
 
  {
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this kind of
    instance type. If you need this feature please use a different instance
    type. See your cloud provider for options."
  }
 
 That would let us surface more specific errors.
 
 Today there is a giant hodgepodge - see:
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424
 
 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492
 
 Especially blocks like this:
 
 if 'cloudServersFault' in resp_body:
     message = resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
     message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
     message = resp_body['error']['message']
 elif 'message' in resp_body:
     message = resp_body['message']
 
 Standardization here from the API WG would be really great.

Agreed. I’m 100% for having a guideline for errors. Good error messages are one 
of the most important aspects of a good developer experience for an API. I 
suspect that once you propose an error format for one error, people will 
immediately think of a lot of valid reasons to have a format for many errors.

I did a bit of research into prior art for error messages. The best discussion 
I found on it starts over in JSON-API [1]. It ultimately results in this error 
format [2]. An example would look like

{
  "errors": [
    {
      "id": "some-transaction-id",
      "href": "https://example.org/more/info/about/this/error.html",
      "code": "0054",
      "title": "foobar must be in the range [foo, bar)"
    },
    {
      "id": "..."
    }
  ]
}

Do we need to use every field in the format? No.
Can we add additional fields as we see fit? Yes.

We need to do what’s best for OpenStack so I’d like to use a format that’s 
somewhat a standard (at the very least it’s clear that the JSON-API folks have 
done a lot of thinking on it) but that’s flexible enough to meet our 
requirements.

I came across some other error formats such as [3] and [4] but found them to be 
a bit complicated or require things we don’t need.

Thoughts on the JSON-API error format or other formats?

Thanks,
Everett

[1] https://github.com/json-api/json-api/issues/7
[2] http://jsonapi.org/format/#errors
[3] https://github.com/blongden/vnd.error
[4] 
https://google-styleguide.googlecode.com/svn/trunk/jsoncstyleguide.xml#Reserved_Property_Names_in_the_error_object


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Undercloud heat version expectations

2015-01-30 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2015-01-30 18:28:19 +:
 Excerpts from Steven Hardy's message of 2015-01-30 10:29:05 +:
  Hi all,
  
  I've had a couple of discussions lately causing me to question $subject,
  and in particular what our expectations are around tripleo-heat-templates
  working with older (e.g non trunk) versions of Heat in the undercloud.
  
  For example, in [1], we're discussing merging a template-level workaround
  for a heat bug which has been fixed for nearly 4 months (I've now proposed
  a stable/juno backport..) - this raises the question, do we actually
  support tripleo-heat-templates with a stable/juno heat in the undercloud?
  
  Related to this is discussion such as [2], where ideally I'd like us to
  start using some new-shiny features we've been landing in heat to make the
  templates cleaner - is this valid, e.g can I start proposing template
  changes to tripleo-heat-templates which will definitely require
  new-for-kilo heat functionality?
  
  Thanks,
  
  Steve
  
  [1] https://review.openstack.org/#/c/151038/
  [2] https://review.openstack.org/#/c/151389/
  
 
 Hey Steve,
 
 A while ago (last mid cycle IIRC) we decided that rather than maintain
 stable branches we would ensure that we could deploy stable openstack
 releases from trunk. I believe Heat falls under this umbrella, and we
 need to make sure that we support deploying at least the latest stable
 heat release.
 
 That being said, we're lacking in this plan ATM. We *really* should have
 a stable release CI job. We do have a spec though[1].
 
 Cheers,
 Greg
 
 
 [1] 
 http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/juno/backwards-compat-policy.rst

We had a discussion in IRC about this and I wanted to bring up the points
that were made on the ML. By the end of the discussion I think the
consensus there was that we should resurrect the stable branches.
Therefore, I am especially seeking input from people who have arguments
for keeping our current 'deploy stable openstack from master' goals.

Our goal of being able to deploy stable openstack branches using HEAD of
tripleo tools makes some new feature development more difficult on
master than it needs to be. Specifically, dprince has been feeling this
pain in the tripleo/puppet integration work he is doing. There is also
some new heat feature work we could benefit from (like the patches
above) that we're going to have to wait multiple cycles for or maintain
multiple implementations of. Therefore we should look into resurrecting
our stable branches.

The backwards compat spec specifies that tripleo-image-elements and
tripleo-heat-templates are co-dependent WRT backwards compat. This
probably made some sense at the time of the spec writing since
alternatives to tripleo-image-elements did not exist, but with the
tripleo/puppet work we need to revisit this.

Thoughts? Comments?

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 1 or 2 repos for the API WG

2015-01-30 Thread Everett Toews
The suggestion of whether to use 1 or 2 repos for the API WG surfaced on the ML 
here [1]. That thread then morphed into a discussion on whether to use 1 or 2 
repos. I believe it’s correct to say that the consensus on that thread was for 
1 repo.

We also discussed the question of 1 or 2 repos during the API WG meeting [2] 
and, of the 7 attendees that voted, there was unanimous agreement [3] for 1 
repo.

Unless there’s a strong objection or disagreement with my analysis of the 
above, the API WG will move forward and use only 1 repo.

Now the question becomes, which repo?

Everett

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055524.html
[2] 
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-01-29-16.00.html
[3] 
http://eavesdrop.openstack.org/meetings/api_wg/2015/api_wg.2015-01-29-16.00.log.html#l-67


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Matt Riedemann



On 1/29/2015 7:42 PM, Michael Still wrote:

On Thu, Jan 29, 2015 at 5:15 PM, Michael Still mi...@stillhq.com wrote:

There is an ec2 bug tag in launchpad. I would link to it except I am writing
this offline. I will fix that later today.


Ok, now that the 40 seater turbo prop has landed, here we go:

https://bugs.launchpad.net/nova/+bugs?field.tag=ec2


However I think what we've shown is that moving this code out of nova is the
future. I would like to see someone come up with a plan to transition users
to the stackforge project. That seems the best way forward at this point.

Thanks,
Michael

On 30 Jan 2015 11:11 am, matt m...@nycresistor.com wrote:


Is there a blue print or some set of bugs tagged in some way to tackle?

-matt

On Thu, Jan 29, 2015 at 7:01 PM, Michael Still mi...@stillhq.com wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

--
Rackspace Australia

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation

Also the spec here:

https://review.openstack.org/#/c/147882/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-30 Thread Adam Gandelman
Hi-

I believe I'm the one who was volunteered for this on IRC. I'm still fine
with being the contact for these matters, but making the meetings at
0800/1500 UTC will be difficult for me. Feel free to sign me up if that
is not a blocker. I've been driving the ironic CI/QA stuff over the last
couple of cycles and know it and the rest of the upstream gate well enough
to be able to provide guidance to any other Ironic devs hoping to get
their own up and running.

adam_g

On Thu, Jan 29, 2015 at 1:28 PM, Anita Kuno ante...@anteaya.info wrote:

 On 01/29/2015 02:32 PM, Adam Lawson wrote:
  Hi ruby, I'd be interested in this. Let me know next steps when ready?
 
  Thanks!
 Hi Adam:

 It requires someone who knows the code base really well. While core
 review permissions are not required, the person fulfilling this role
 needs to have the confidence of the cores for support of decisions they
 make.

 Since folks with these abilities spend much of their time in irc and
 read backscroll I had brought the subject up in channel. I hadn't
 expected a post to the mailing list as folks in the larger community may
 not have the skill set that would make them effective in this role.

 Which is not to say they can't learn the role. The starting place would
 be to contribute to the code base as a contributor
 (http://docs.openstack.org/infra/manual/developers.html) and earn the
 trust of the program's cores through participation in channel and in
 reviews.

 To be honest, I had thought someone already had said they would do this
 but since Ironic doesn't have much third party ci activity, I have
 forgotten who said they would. Mostly I was asking if anyone else
 remembered who this was.

 Thanks Adam,
 Anita.
  On Jan 29, 2015 11:14 AM, Ruby Loo rlooya...@gmail.com wrote:
 
  Hi,
 
  Want to contribute even more to the Ironic community? Here's your
  opportunity!
 
  Anita Kuno (anteaya) would like someone to be the Ironic representative
  for third party CIs. What would you have to do? In her own words:
 mostly
  I need to know who they are so that when someone has questions I can
 work
  with that person to learn the answers so that they can learn to answer
 the
  questions
 
  There are regular third party meetings [1] and it would be great if you
  would attend them, but that isn't necessary.
 
  Let us know if you're interested. No resumes need to be submitted. In
 case
  there is a lot of interest, hmm..., the PTL, Devananda, will decide.
  (That's what he gets for not being around now. ;))
 
  Thanks in advance for all your interest,
  --ruby
 
  [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-30 Thread Andreas Jaeger
On 01/30/2015 06:20 PM, James E. Blair wrote:
 [...]
 Please respond with any comments or concerns.
 
 Thanks, Elizabeth, for all your work!

It's a pleasure seeing her thorough reviews and I agree she'll be a
great addition,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][log] Openstack HTTP error codes

2015-01-30 Thread Everett Toews
On Jan 30, 2015, at 3:17 PM, Jesse Keating j...@bluebox.net wrote:

 On 1/30/15 1:08 PM, Everett Toews wrote:
 Project: A client dealing with the API already knows what project
 (service) they’re dealing with. Including this in an API error message
 would be redundant. That’s not necessarily so bad and it could actually
 be convenient for client logging purposes to have this there.
 
 
 Is this really true though? When your interaction with nova is being thwarted 
 by a problem with keystone, wouldn't the end user want to see the keystone 
 name in there as a helpful breadcrumb as to where the problem actually lies?

Once I have the token from Keystone, I’ll be talking directly to the services. 
So either something goes wrong with Keystone and I get no token or I get a 
token and talk directly to a service. Either way a client knows who it's 
talking to.

I suppose one possible case outside of that is token revocation. If I’m talking 
to a service and the token gets revoked, does the error originate in Keystone? 
I’m not really sure.

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-30 Thread Morgan Fainberg
A Huge +1 from me.
—Morgan

-- 
Morgan Fainberg

On January 30, 2015 at 9:20:06 AM, James E. Blair (cor...@inaugust.com) wrote:

Hi,  

The Infrastructure program has a unique three-tier team structure:  
contributors (that's all of us!), core members (people with +2 ability  
on infra projects in Gerrit) and root members (people with  
administrative access). Read all about it here:  

http://ci.openstack.org/project.html#team  

Elizabeth K. Joseph has been reviewing a significant number of infra  
patches for some time now. She has taken on a number of very large  
projects, including setting up our Git server farm, adding support for  
infra servers running on CentOS, and setting up the Zanata translation  
system (and all of this without shell access to production machines).  

She understands all of our servers, regardless of function, size, or  
operating system. She has frequently spoken publicly about the unique  
way in which we perform systems administration, articulating what we are  
doing and why in a way that inspires us as much as others.  

Due to her strong systems administration background, I am nominating her  
for both infra-core and infra-root simultaneously. I expect many of us  
are looking forward to seeing her insight and direction applied with +2s  
but also equally excited for her to be able to troubleshoot things when  
our best-laid plans meet reality.  

Please respond with any comments or concerns.  

Thanks, Elizabeth, for all your work!  

-Jim  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 1 or 2 repos for the API WG

2015-01-30 Thread Dean Troyer
On Fri, Jan 30, 2015 at 4:03 PM, Everett Toews everett.to...@rackspace.com
wrote:

 Now the question becomes, which repo?


I think the current one serves the community best.  AIUI the reason for
thinking about a change was for visibility.  As before, I think that is an
easier problem to solve than the ones created by moving to -specs.
Besides, even if the TC approves our guidelines, they are still guidelines, not
specs.

One way to address visibility is by doing visible things: writing and
completing credible guidelines, and providing input and other involvement in
projects directly.

FWIW, in Paris I got more input from other projects WRT OpenStackClient by
attending other projects' client-related sessions than at any previous OSC
session, except maybe the first one...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-30 Thread Soren Hansen
As I've said a couple of times in the past, I think the
architecturally sound approach is to keep this inside Nova.

The two main reasons are:
 * Having multiple frontend APIs keeps us honest in terms of
separation between the different layers in Nova.
 * Having the EC2 API inside Nova ensures the internal data model is
rich enough to feed the EC2 API. If some field's only use is to
enable the EC2 API and the EC2 API is a separate component, it's not
hard to imagine it being deprecated as well.

I fear that deprecation is a one-way street and I would like to ask for
one more chance to resuscitate it in its current home.

I could be open to a discussion about putting it into a separate
repository, but having it functionally remain in its current place, if
that's somehow easier to swallow.


Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/


2015-01-28 20:56 GMT+01:00 Sean Dague s...@dague.net:
 The following review for Kilo deprecates the EC2 API in Nova -
 https://review.openstack.org/#/c/150929/

 There are a number of reasons for this. The EC2 API has been slowly
 rotting in the Nova tree, never was highly tested, implements a
 substantially older version of what AWS has, and currently can't work
 with any recent releases of the boto library (due to implementing an
 extremely old version of auth). This has given the misunderstanding that
 it's a first-class supported feature in OpenStack, which it hasn't been
 in quite some time. Deprecating honestly communicates where we stand.

 There is a new stackforge project which is getting some activity now -
 https://github.com/stackforge/ec2-api. The intent and hope is that is
 the path forward for the portion of the community that wants this
 feature, and that efforts will be focused there.

 Comments are welcomed, but we've attempted to get more people engaged to
 address these issues over the last 18 months, and never really had
 anyone step up. Without some real maintainers of this code in Nova (and
 tests somewhere in the community) it's really no longer viable.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Bias, Randy
As you know we have been driving forward on the stackforge project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating the
EC2 API in Nova to focus on the stackforge EC2 API.  I also think it's good
for these APIs to be able to iterate outside of the standard release cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com

On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:

Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

-- 
Rackspace Australia

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][log] Openstack HTTP error codes

2015-01-30 Thread Kevin L. Mitchell
On Fri, 2015-01-30 at 21:08 +, Everett Toews wrote:
 Project: A client dealing with the API already knows what project
 (service) they’re dealing with. Including this in an API error message
 would be redundant. That’s not necessarily so bad and it could
 actually be convenient for client logging purposes to have this there.

Do they?  We boot a server and interact with Cinder and Neutron, right?
What if the nova API is simply forwarding an error that originally came
from Cinder?

 Vendor/Component: Including any vendor information at all would be
 leaking implementation details. This absolutely cannot be exposed in
 an API error message. Even including the component would be leaking
 too much.

While I agree with you from a security standpoint, this is probably
coming in due to a desire to namespace the errors.  Ideally, we'd have a
set of common error codes to cover conditions that the API user could
rectify ("You picked a nic type we don't support" or something like
that), but I fear there may always be errors that are things the API
user could rectify but which don't fit into any of those buckets…

 Error Catalog Number: If there could be alignment around this, that
 would be great.
[snip]
 Criticality: This might be useful to clients? I don’t know. I don’t
 feel too strongly about it.

I feel this part of the code needs more thought to properly round out.
Is it intended to convey information similar to the distinction between
4xx and 5xx errors in HTTP?  ("You made an error" vs. "The server messed
up.")  Is it intended to convey a retryable condition?  ("If you retry
this, it may succeed.")  If it's intended to convey that the server
messed up spectacularly and that everything's broken now, well… :)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-30 Thread Anita Kuno
On 01/30/2015 12:08 PM, Adam Gandelman wrote:
 Hi-
 
 I believe I'm the one who was volunteered for this on IRC.
Ah yes, it was you. Thank you, Adam, and sorry I mind-blanked on your
name.

 I'm still fine
 with being the contact for these matters, but making the meetings at
 0800/1500 UTC will be difficult for me.
It is helpful but not required. Don't worry about it, any time you can
make a meeting your presence is welcome.

   Feel free to sign me up if that
 is not a blocker.
Consider yourself signed (or re-signed).

 I've been driving the ironic CI/QA stuff over the last
 couple of cycles and know it and the rest of the upstream gate well enough
 to be able to provide guidance to any other Ironic dev's hoping to get
 their own up and running.
 
 adam_g
Wonderful, thank you, Adam. And sorry again for my mind blank. Look, now
I have an archived document to remind me. :)

Thanks,
Anita.
 
 On Thu, Jan 29, 2015 at 1:28 PM, Anita Kuno ante...@anteaya.info wrote:
 
 On 01/29/2015 02:32 PM, Adam Lawson wrote:
 Hi ruby, I'd be interested in this. Let me know next steps when ready?

 Thanks!
 Hi Adam:

 It requires someone who knows the code base really well. While core
 review permissions are not required, the person fulfilling this role
 needs to have the confidence of the cores for support of decisions they
 make.

 Since folks with these abilities spend much of their time in irc and
 read backscroll I had brought the subject up in channel. I hadn't
 expected a post to the mailing list as folks in the larger community may
 not have the skill set that would make them effective in this role.

 Which is not to say they can't learn the role. The starting place would
 be to contribute to the code base as a contributor
 (http://docs.openstack.org/infra/manual/developers.html) and earn the
 trust of the program's cores through participation in channel and in
 reviews.

 To be honest, I had thought someone already had said they would do this
 but since Ironic doesn't have much third party ci activity, I have
 forgotten who said they would. Mostly I was asking if anyone else
 remembered who this was.

 Thanks Adam,
 Anita.
 On Jan 29, 2015 11:14 AM, Ruby Loo rlooya...@gmail.com wrote:

 Hi,

 Want to contribute even more to the Ironic community? Here's your
 opportunity!

 Anita Kuno (anteaya) would like someone to be the Ironic representative
 for third party CIs. What would you have to do? In her own words:
 mostly
 I need to know who they are so that when someone has questions I can
 work
 with that person to learn the answers so that they can learn to answer
 the
 questions

 There are regular third party meetings [1] and it would be great if you
 would attend them, but that isn't necessary.

 Let us know if you're interested. No resumes need to be submitted. In
 case
 there is a lot of interest, hmm..., the PTL, Devananda, will decide.
 (That's what he gets for not being around now. ;))

 Thanks in advance for all your interest,
 --ruby

 [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Which repo should the API WG use?

2015-01-30 Thread Everett Toews
It was suggested that the API WG use the openstack-specs [1] and/or the api-wg 
[2] repo to publish its guidelines. We’ve already arrived at the consensus that 
we should only use 1 repo [3]. So the purpose of this thread is to decide...

Should the API WG use the openstack-specs repo or the api-wg repo?

Let’s discuss.

Thanks,
Everett

[1] http://git.openstack.org/cgit/openstack/openstack-specs/
[2] http://git.openstack.org/cgit/openstack/api-wg/
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055687.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which repo should the API WG use?

2015-01-30 Thread Kevin L. Mitchell
On Fri, 2015-01-30 at 22:33 +, Everett Toews wrote:
 It was suggested that the API WG use the openstack-specs [1] and/or
 the api-wg [2] repo to publish its guidelines. We’ve already arrived
 at the consensus that we should only use 1 repo [3]. So the purpose of
 this thread is to decide...
 
 Should the API WG use the openstack-specs repo or the api-wg repo?
 
 Let’s discuss.

Well, the guidelines are just that: guidelines.  They don't implicitly
propose changes to any OpenStack projects, just provide guidance for
future API changes.  Thus, I think they should go in a repo separate
from any of our *-specs repos; to me, a spec provides documentation of a
change, and is thus independent of the guidelines.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Jan 30 2015

2015-01-30 Thread Anne Gentle
We have a new Sphinx theme for docs.openstack.org, called
openstackdocstheme. It should be available on PyPI as soon as we can get it
released. Once that's released I'll write instructions for how to apply it
to migrated content. A huge thanks to Doug Hellmann for getting it to a
releasable state!
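
As a rough sketch - and purely an assumption until those instructions are
written, since the helper and theme names below may not match the released
package - applying it in a project's Sphinx conf.py might look like:

    # conf.py (sketch; names are assumptions)
    import openstackdocstheme

    html_theme_path = [openstackdocstheme.get_html_theme_path()]
    html_theme = 'openstackdocs'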

We are working through migrations for the End User Guide and Admin User
Guide, sign up for a section at
https://wiki.openstack.org/wiki/Documentation/Migrate.

While migrating a chapter, I found I wanted some guidelines for how to
migrate, so I wrote some here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Migration_Conventions
Feel free to disagree with my notes so far, but my main theme is simplify.

The Foundation is working on letting people know about the new home page
design for docs.openstack.org probably through a Superuser post so that we
can dig into the techie details. :) Please let me know if you have
questions, and realize we're working through the kinks as we go.

Official OpenStack Debian images for VMs are now available and added to the
VM Image Guide:
http://docs.openstack.org/image-guide/content/ch_obtaining_images.html#debian-images

60 patches merged in various docs repos in the last week including updates
to the End User Guide to include more step-by-step tutorials for Object
Storage. Nice work everyone!

The service-api repos are now moved to openstack-attic. I'd still like
some of that content to be reflected in specs.openstack.org for both nova
and neutron and I'll continue to work on that.

Thanks,
Anne
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][log] Openstack HTTP error codes

2015-01-30 Thread Dean Troyer
On Fri, Jan 30, 2015 at 3:08 PM, Everett Toews everett.to...@rackspace.com
wrote:

 I like the idea of the log error codes being aligned with the API errors
 codes but I have some thoughts/concerns.

  Project: A client dealing with the API already knows what project
 (service) they’re dealing with. Including this in an API error message
 would be redundant. That’s not necessarily so bad and it could actually be
 convenient for client logging purposes to have this there.


Agreed that this is not necessary, but it is not objectionable if that
simplifies coding the server side.


 Vendor/Component: Including any vendor information at all would be leaking
 implementation details. This absolutely cannot be exposed in an API error
 message. Even including the component would be leaking too much.


++


 Error Catalog Number: If there could be alignment around this, that would
 be great.


I think the important alignment here is being able to trace a client-side
API error back to the service log for further research.  This might not be
a high-volume use, but I have to do this all the time for chasing down
client-side dev issues.  Its easy in DevStack, but in a deployed cloud of
any size not so much.  A timestamp and _anything_ that can map the
user-visible error into a log file is all that is really needed.  We often
can't even do that today.
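
As a sketch of that minimal mapping - the endpoint and token below are
placeholders, and the request-id header name varies by service (nova's is
x-compute-request-id):

    import requests

    url = 'http://nova-api:8774/v2/<tenant>/servers/<id>'  # placeholder
    token = '<auth token>'                                 # placeholder

    resp = requests.delete(url, headers={'X-Auth-Token': token})
    if resp.status_code >= 400:
        # the (timestamp, request id) pair is what lets an operator grep the logs
        print(resp.headers.get('date'), resp.headers.get('x-compute-request-id'))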

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-30 Thread Matt Riedemann



On 1/29/2015 5:52 AM, Alexandre Levine wrote:

Thomas,

I'm the lead of the team working on it.
The project is in a release-candidate state and the EC2 (non-VPC) part
is just being finished, so there are no tags or branches yet. Also we
were not sure what we should do with it, since we were told that
it'll have a chance of going in as a part of nova eventually. So we've
created a spec and blueprint and only now has the discussion started.
Whatever the decision, we're ready to follow it. If the first thing to get
it closer to customers is to create a package (right now it can only be
installed from source, obviously) and a tag is required for that, then
that's what we should do.

So bottom line - we're not sure ourselves what the best way forward is. Do
we put a tag (in what format? 1.0? m1? 2015.1.rc1?)? Or do we create a
branch?
My thinking now is to just put a tag - something like 1.0.rc1.
What do you think?

Best regards,
   Alex Levine

On 1/29/15 2:13 AM, Thomas Goirand wrote:

On 01/28/2015 08:56 PM, Sean Dague wrote:

There is a new stackforge project which is getting some activity now -
https://github.com/stackforge/ec2-api. The intent and hope is that is
the path forward for the portion of the community that wants this
feature, and that efforts will be focused there.

I'd be happy to provide a Debian package for this, however, there's not
even a single git tag there. That's not so nice for tracking issues.
Who's working on it?

Also, is this supposed to be branch-less? Or will it follow
juno/kilo/l... ?

Cheers,

Thomas Goirand (zigo)


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



How dependent is this code on current nova master?  For example, does a 
rebase have to happen, or do things change in nova on master that affect 
this repo and force it to adjust, like what happens with the nova-docker 
driver repo in stackforge?


If so, then I'd think it more closely aligns with the openstack release 
schedule and tagging/branching scheme, at least until it's completely 
independent.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-01-30 Thread Andrey Kurilin
Thanks for the answer. Can I help with the implementation of the novaclient part?

On Wed, Jan 28, 2015 at 11:50 AM, Christopher Yeoh cbky...@gmail.com
wrote:

 On Fri, 23 Jan 2015 15:51:54 +0200
 Andrey Kurilin akuri...@mirantis.com wrote:

  Hi everyone!
  After removing the nova V3 API from novaclient [1], the implementation of the
  v1.1 client is used for v1.1, v2 and v3 [2].
  Since we are moving to microversions, I wonder, do we need such a
  mechanism for choosing the api version (os-compute-api-version), or can we
  simply remove it, as in the proposed change [3]?
  If we remove it, how micro version should be selected?
 

 So since v3 was never officially released I think we can re-use
 os-compute-api-version for microversions which will map to the
 X-OpenStack-Compute-API-Version header. See here for details on what
 the header will look like. We need to also modify novaclient to handle
 errors when a version requested is not supported by the server.

 If the user does not specify a version number then we should not send
 anything at all. The server will run the default behaviour which for
  quite a while will just be v2.1 (functionally equivalent to v2).


 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html


 
  [1] - https://review.openstack.org/#/c/138694
  [2] -
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
  [3] - https://review.openstack.org/#/c/149006
 
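
For concreteness, a sketch of the negotiation described above - endpoint and
token are placeholders, and per the spec a server that cannot honour the
requested version is expected to answer 406:

    import requests

    token = '<auth token>'  # placeholder
    resp = requests.get('http://nova-api:8774/v2.1/servers',  # placeholder endpoint
                        headers={'X-Auth-Token': token,
                                 'X-OpenStack-Compute-API-Version': '2.3'})
    if resp.status_code == 406:
        # the server does not support the requested microversion
        raise RuntimeError('requested API version not supported')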




-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Undercloud heat version expectations

2015-01-30 Thread Gregory Haynes
Excerpts from Steven Hardy's message of 2015-01-30 10:29:05 +:
 Hi all,
 
 I've had a couple of discussions lately causing me to question $subject,
 and in particular what our expectations are around tripleo-heat-templates
 working with older (e.g non trunk) versions of Heat in the undercloud.
 
 For example, in [1], we're discussing merging a template-level workaround
 for a heat bug which has been fixed for nearly 4 months (I've now proposed
 a stable/juno backport..) - this raises the question, do we actually
 support tripleo-heat-templates with a stable/juno heat in the undercloud?
 
 Related to this is discussion such as [2], where ideally I'd like us to
 start using some new-shiny features we've been landing in heat to make the
 templates cleaner - is this valid, e.g can I start proposing template
 changes to tripleo-heat-templates which will definitely require
 new-for-kilo heat functionality?
 
 Thanks,
 
 Steve
 
 [1] https://review.openstack.org/#/c/151038/
 [2] https://review.openstack.org/#/c/151389/
 

Hey Steve,

A while ago (last mid cycle IIRC) we decided that rather than maintain
stable branches we would ensure that we could deploy stable openstack
releases from trunk. I believe Heat falls under this umbrella, and we
need to make sure that we support deploying at least the latest stable
heat release.

That being said, we're lacking in this plan ATM. We *really* should have
a stable release CI job. We do have a spec though[1].

Cheers,
Greg


[1] 
http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/juno/backwards-compat-policy.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-01-30 Thread Mike Bayer


Andrew Pashkin apash...@mirantis.com wrote:

 Working on this issue I encountered another problem.
 
 Most indices in the project have no names and, because of that, a
 developer must reverse-engineer them in every migration.
 Read about that also here [1].
 
 SQLAlchemy and Alembic provide a feature for generating constraint
 names by pattern, specifically to resolve this kind of issue [1].
 
 I decided to introduce usage of this feature in Murano.
 
 I've implemented a solution that preserves backward compatibility
 for migrations and allows all constraints to be renamed safely
 according to the patterns [2]. With it, users that have already deployed Murano
 will be able to upgrade to a new version of Murano without issues.
 
 There are downsides to this solution:
 - It assumes that all versions of Postgres and MySQL use the
  same patterns for constraint name generation.
 - It is hard to implement a test for this solution, and it will be slow,
  because there is a need to reproduce the situation where a user has old
  versions of the migrations applied and then tries to upgrade.

The patch seems to hardcode the conventions for MySQL and Postgresql.   The 
first thought I had was that in order to remove the dependence on them here, 
you’d need to instead simply turn off the “naming_convention” in the MetaData 
if you detect that you’re on one of those two databases.   That would be a 
safer idea than trying to hardcode these conventions (and would also work for 
other kinds of backends).
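
For reference, the MetaData-level feature in question is configured roughly
like this (token names per the SQLAlchemy/Alembic docs):

    from sqlalchemy import MetaData

    metadata = MetaData(naming_convention={
        "ix": "ix_%(column_0_label)s",
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s",
    })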

However, I’m not actually sure that you even need special behavior for these 
two backends.  If an operator runs these migrations on a clean database, then 
the constraints are generated with the consistent names on all backends.   if a 
target database already has these schema constructs present, then these 
migrations are never run; it doesn’t matter that they have the right or wrong 
names already.

I suppose then that the fear is that some PG/MySQL databases will have 
constraints that are named in one convention, and others will have constraints 
using the native conventions.  However, the case now is that all deployments 
are using native conventions, and being able to DROP these constraints is 
already not very feasible unless you again were willing to hardcode those 
naming conventions going forward.  The constraints in these initial migrations, 
assuming you don’t regenerate them, might just need to be left alone, and the 
project proceeds in the future with a consistent convention.

However, it’s probably worthwhile to introduce a migration that does in fact 
rename existing constraints on MySQL and Postgresql.  This would be a migration 
script that emits DROP CONSTRAINT and CREATE CONSTRAINT for all the above 
constraints that have an old name and a new name.  The script would need to 
check the backend, as you’re doing now, in order to run, and yes it would 
hardcode the names of those conventions, but at least it would just be a 
one-time run against only currently deployed databases.   Since your migrations 
are run “live”, the script can make itself a “conditional” run by checking for 
the “old” names and skipping those that don’t exist.  
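
A sketch of such a conditional rename - the table and constraint names here are
purely illustrative, not Murano's actual schema:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        bind = op.get_bind()
        if bind.dialect.name not in ('mysql', 'postgresql'):
            return
        # rename only when the old, natively-named constraint actually exists
        old = {uq['name']
               for uq in sa.inspect(bind).get_unique_constraints('environment')}
        if 'environment_name_key' in old:
            op.drop_constraint('environment_name_key', 'environment',
                               type_='unique')
            op.create_unique_constraint('uq_environment_name', 'environment',
                                        ['name'])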

 
 Another possible solution is to drop all current migrations and
 introduce new one with correct names.

you definitely shouldn’t need to do that.


 This brings us to a new problem - migrations and models are out of sync
 right now in multiple places - there are different field types in
 migrations and models, migrations introduce indices that are absent
 in models, etc.
 
 And this solution has a great downside - it is not backward-compatible,
 so all old users will lose their data.
 
 We (Murano team) should decide, what solution we want to use.
 
 
 [1]
 http://alembic.readthedocs.org/en/latest/naming.html#tutorial-constraint-names
 [2] https://review.openstack.org/150818
 
 -- 
 With kind regards, Andrew Pashkin.
 cell phone - +7 (985) 898 57 59
 Skype - waves_in_fluids
 e-mail - apash...@mirantis.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] Deprecating use_slave in Nova

2015-01-30 Thread Mike Bayer


Matthew Booth mbo...@redhat.com wrote:

 At some point in the near future, hopefully early in L, we're intending
 to update Nova to use the new database transaction management in
 oslo.db's enginefacade.
 
 Spec:
 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst
 
 Implementation:
 https://review.openstack.org/#/c/138215/
 
 One of the effects of this is that we will always know when we are in a
 read-only transaction, or a transaction which includes writes. We intend
 to use this new contextual information to make greater use of read-only
 slave databases. We are currently proposing that if an admin has
 configured a slave database, we will use the slave for *all* read-only
 transactions. This would make the use_slave parameter passed to some
 Nova apis redundant, as we would always use the slave where the context
 allows.
 
 However, using a slave database has a potential pitfall when mixed with
 separate write transactions. A caller might currently:
 
 1. start a write transaction
 2. update the database
 3. commit the transaction
 4. start a read transaction
 5. read from the database
 
 The client might expect data written in step 2 to be reflected in data
 read in step 5. I can think of 3 cases here:
 
 1. A short-lived RPC call is using multiple transactions
 
 This is a bug which the new enginefacade will help us eliminate. We
 should not be using multiple transactions in this case. If the reads are
 in the same transaction as the write: they will be on the master, they
 will be consistent, and there is no problem. As a bonus, lots of these
 will be race conditions, and we'll fix at least some.
 
 2. A long-lived task is using multiple transactions between long-running
 sub-tasks
 
 In this case, for example creating a new instance, we genuinely want
 multiple transactions: we don't want to hold a database transaction open
 while we copy images around. However, I can't immediately think of a
 situation where we'd write data, then subsequently want to read it back
 from the db in a read-only transaction. I think we will typically be
 updating state, meaning it's going to be a succession of write transactions.
 
 3. Separate RPC calls from a remote client
 
 This seems potentially problematic to me. A client makes an RPC call to
 create a new object. The client subsequently tries to retrieve the
 created object, and gets a 404.
 
 Summary: 1 is a class of bugs which we should be able to find fairly
 mechanically through unit testing. 2 probably isn't a problem in
  practice? 3 seems like a problem, unless consumers of cloud services are
 supposed to expect that sort of thing.
 
 I understand that slave databases can occasionally get very behind. How
  behind is this in practice?
 
 How do we use use_slave currently? Why do we need a use_slave parameter
 passed in via rpc, when it should be apparent to the developer whether a
 particular task is safe for out-of-date data.
 
 Any chance they have some kind of barrier mechanism? e.g. block until
 the current state contains transaction X.
 
 General comments on the usefulness of slave databases, and the
 desirability of making maximum use of them?

keep in mind that the big win we get from writer()/ reader() is that writer() 
can remain pointing to one node in a Galera cluster, and reader() can point to 
the cluster as a whole.  reader() by default should definitely refer to the 
cluster as a whole, that is, “use slave”. 

As for issue #3, galera cluster is synchronous replication.   Slaves don’t get 
“behind” at all.   So to the degree that we need to transparently support some 
other kind of master/slave where slaves do get behind, perhaps there would be a 
reader(synchronous_required=True) kind of thing; based on configuration, it 
would be known that “synchronous” either means we don’t care (using galera) or 
that we should use the writer (an asynchronous replication scheme).

All of this points to the fact that I really don’t think the directives / flags 
should say anything about which specific database to use; using a “slave” or 
not due to various concerns is dependent on backend implementation and 
configuration.   The purpose of reader() / writer() is to ensure that we are 
only flagging the *intent* of the call, not the implementation.
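
As a sketch of what flagging that intent looks like at a call site under the
spec above (decorator names per the spec; Instance and the queries are
illustrative only, not Nova's actual code):

    from oslo_db.sqlalchemy import enginefacade

    @enginefacade.reader
    def instance_get(context, instance_id):
        # read-only: eligible to be routed to the cluster as a whole
        return context.session.query(Instance).filter_by(id=instance_id).one()

    @enginefacade.writer
    def instance_update(context, instance_id, values):
        # a write: always routed to the designated writer node
        context.session.query(Instance).filter_by(id=instance_id).update(values)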






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Proposed Changes to Magnum Core

2015-01-30 Thread Adrian Otto
Magnum Cores,

Thanks for your votes. Hongbin has been added to the core group. Welcome!

Regards,

Adrian

On Jan 28, 2015, at 2:27 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Magnum Cores,
 
 I propose the following addition to the Magnum Core group[1]:
 
 + Hongbin Lu (hongbin034)
 
 Please let me know your votes by replying to this message.
 
 Thanks,
 
 Adrian
 
 [1] https://review.openstack.org/#/admin/groups/473,members Current Members
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Announcing 61 new infra puppet modules

2015-01-30 Thread James E. Blair
Hi,

As part of an effort to better support re-use of the Infrastructure
program's software, tooling, and systems-administration work, we have
moved all of our puppet modules out of the system-config repository.
Now each of them may be found in its own git repo, such as
openstack-infra/puppet-zuul and openstack-infra/puppet-gerrit, etc.
There are 61 puppet modules in all, currently.

This work was described in a spec here: 
http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet-modules.html

In time we expect contributions to these modules to reduce their
specificity to the OpenStack project and make them more generally
useful.  Many of the newer ones are already generally useful, some of
the older ones, less so.

With the new configuration, here is where to make an
infrastructure-related change:

  * To change project-related configuration, such as adding a new
project, changing CI jobs, IRC bots, etc., look in
openstack-infra/project-config

  * To change characteristics of the actual servers run by the OpenStack
Infrastructure team (e.g., add a new server, change a config file
setting for a daemon, etc.), look in openstack-infra/system-config

  * To expose an option for a particular service so that it may be
configured in system-config, look in the individual puppet module
for that service (e.g., openstack-infra/puppet-gerrit to add the
ability to toggle a gerrit option).

All of our puppet git repos are tested in a large co-gating
configuration -- a change to any of them will run puppet apply tests on
all of our platforms to ensure that it is as easy to make and verify
changes across any of these modules as it was when they were all located
within the same repository.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Alexandre Levine

Michael,

Our team can take the effort. We're the ones doing the stackforge EC2 
API and we can maintain nova's EC2 API in an acceptable state for the time 
being as well.
If you can give us any permissions and leverage to not just contribute 
fixes and tests but also have a say in the approval of those (maybe for just 
one of us) then it'll be fast. Otherwise it'll happen in due time, but 
our previous attempts to contribute some fixes for the EC2 API in nova 
usually took more than half a year to get through.


Best regards
  Alex Levine

On 1/30/15 3:01 AM, Michael Still wrote:

Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Alexandre Levine

Tim,

We're sure we can fix it and we know how. The only problem is somehow 
getting a hand with review and approval speed - is there any remedy for 
this? I've already asked Michael above in the thread, but I don't 
presume that it's possible even to allow one of us to become a core 
reviewer for the EC2 part of nova? Or is it?


Best regards,
  Alex Levine

On 1/30/15 10:57 PM, Tim Bell wrote:


Alex,

Many thanks for the constructive approach. I've added an item to the 
list for the Ops meetup in March to see who would be interested to help.


As discussed on the change, it is likely that there would need to be 
some additional Nova APIs added to support the full EC2 semantics. 
Thus, there would need to be support from the Nova team to enable these 
additional functions.  Having tables in the EC2 layer which get out of 
sync with those in the Nova layer would be a significant problem in 
production.


I think this would merit a good slot in the Vancouver design sessions 
so we can also discuss documentation, migration, packaging, 
configuration management, scaling, HA, etc.


Tim

From: matt [mailto:m...@nycresistor.com]
Sent: 30 January 2015 20:44
To: Alexandre Levine
Cc: foundat...@lists.openstack.org; OpenStack Development Mailing 
List (not for usage questions)
Subject: Re: [OpenStack Foundation] [openstack-dev] Finding people 
to work on the EC2 API in Nova


+1 cloudscaling has been pretty involved in ec2 support for openstack 
for a long while now.


On Fri, Jan 30, 2015 at 2:27 PM, Alexandre Levine 
alev...@cloudscaling.com wrote:


Michael,

Our team can take the effort. We're the ones doing the stackforge
EC2 API and we can maintain nova's EC2 API in an acceptable state for
the time being as well.
If you can give us any permissions and leverage to not just
contribute fixes and tests but also have a say in the approval of
those (maybe for just one of us) then it'll be fast. Otherwise
it'll happen in due time, but our previous attempts to contribute
some fixes for the EC2 API in nova usually took more than half a year
to get through.

Best regards
  Alex Levine

On 1/30/15 3:01 AM, Michael Still wrote:

Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of
those
details here -- you can read the thread on openstack-dev for
detail.

However, we got here because no one is maintaining the code in
Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has
failed to
find someone to help us resolve these issues. Can the board
perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone
to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature
compatible with
the Nova implementation. However, I don't want to preempt the
design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break
out of
that mode and find other ways to try and get someone working
on this
problem.

Thoughts welcome.

Michael

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Tim Bell
Alex,



Many thanks for the constructive approach. I've added an item to the list for 
the Ops meetup in March to see who would be interested to help.



As discussed on the change, it is likely that there would need to be some 
additional Nova APIs added to support the full EC2 semantics. Thus, there would 
need to be support from the Nova team to enable these additional functions.  
Having tables in the EC2 layer which get out of sync with those in the Nova 
layer would be a significant problem in production.



I think this would merit a good slot in the Vancouver design sessions so we can 
also discuss documentation, migration, packaging, configuration management, 
scaling, HA, etc.


Tim

From: matt [mailto:m...@nycresistor.com]
Sent: 30 January 2015 20:44
To: Alexandre Levine
Cc: foundat...@lists.openstack.org; OpenStack Development Mailing List (not for 
usage questions)
Subject: Re: [OpenStack Foundation] [openstack-dev] Finding people to work on 
the EC2 API in Nova

+1 cloudscaling has been pretty involved in ec2 support for openstack for a 
long while now.

On Fri, Jan 30, 2015 at 2:27 PM, Alexandre Levine 
alev...@cloudscaling.com wrote:
Michael,

Our team can take the effort. We're the ones doing the stackforge EC2 API and 
we can maintain the nova's EC2 in acceptable state for the time being as well.
If you can give us any permissions and leverage to not just contribute fixes 
and tests but also have a say in approval of those (maybe to just one of us) 
then it'll be fast. Otherwise it'll happen in due time but our previous 
attempts to contribute some fixes for EC2 API in nova usually took more than 
half a year to get through.

Best regards
  Alex Levine

On 1/30/15 3:01 AM, Michael Still wrote:
Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for detail.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months wont actually fix the situation -- its time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

___
Foundation mailing list
foundat...@lists.openstack.orgmailto:foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread matt
+1 cloudscaling has been pretty involved in ec2 support for openstack for a
long while now.

On Fri, Jan 30, 2015 at 2:27 PM, Alexandre Levine alev...@cloudscaling.com
wrote:

 Michael,

 Our team can take the effort. We're the ones doing the stackforge EC2 API
 and we can maintain nova's EC2 in an acceptable state for the time being
 as well.
 If you can give us any permissions and leverage to not just contribute
 fixes and tests but also have a say in approval of those (maybe to just one
 of us) then it'll be fast. Otherwise it'll happen in due time but our
 previous attempts to contribute some fixes for EC2 API in nova usually took
 more than half a year to get through.

 Best regards
   Alex Levine

 On 1/30/15 3:01 AM, Michael Still wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
 implementation is in a pretty sad state. I won't repeat all of those
 details here -- you can read the thread on openstack-dev for detail.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

 So, does the Foundation have a role here? The Nova team has failed to
 find someone to help us resolve these issues. Can the board perhaps
 find resources as the representatives of some of the largest
 contributors to OpenStack? Could the Foundation employ someone to help
 us out here?

 I suspect the correct plan is to work on getting the stackforge
 replacement finished, and ensuring that it is feature compatible with
 the Nova implementation. However, I don't want to preempt the design
 process -- there might be other ways forward here.

 I feel that a continued discussion which just repeats the last 18
 months won't actually fix the situation -- it's time to break out of
 that mode and find other ways to try and get someone working on this
 problem.

 Thoughts welcome.

 Michael



 ___
 Foundation mailing list
 foundat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-30 Thread Morgan Fainberg
As a point, we are trying to move away from this model. Having to know the 
dependencies is a bad experience in general. But with the move to eliminate 
optional parts of the API, most of these become real dependencies for 
keystone (a few things will still be optional, e.g. the memcache lib). 

--Morgan 

Sent via mobile

On Jan 30, 2015, at 17:08, Alan Pevec ape...@gmail.com wrote:

 - Remove this requirement, no optional entries in requirements.txt, a
 'deployer' has to know what dependencies the components he wants to use have
 
 Keystone is documenting its optional dependencies in test-requirements.txt;
 look for the # Optional ... comments in
 http://git.openstack.org/cgit/openstack/keystone/tree/test-requirements.txt
 
 
 Cheers,
 Alan
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-01-30 Thread Vladik Romanovsky


- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: openstack-dev@lists.openstack.org, openstack-operat...@lists.openstack.org
 Sent: Friday, 30 January, 2015 11:47:16 AM
 Subject: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends
 
 In working on a recent Nova migration bug
 
   https://bugs.launchpad.net/nova/+bug/1414065
 
 I had cause to refactor the way the nova libvirt driver monitors live
 migration completion/failure/progress. This refactor has opened the
 door for doing more intelligent active management of the live migration
 process.
 
 As it stands today, we launch live migration, with a possible bandwidth
 limit applied and just pray that it succeeds eventually. It might take
 until the end of the universe and we'll happily wait that long. This is
 pretty dumb really and I think we really ought to do better. The problem
 is that I'm not really sure what "better" should mean, except for ensuring
 it doesn't run forever.
 
 As a demo, I pushed a quick proof of concept showing how we could easily
 just abort live migration after say 10 minutes
 
   https://review.openstack.org/#/c/151665/
 
 There are a number of possible things to consider though...
 
 First, how to detect when live migration isn't going to succeed.
 
  - Could do a crude timeout, e.g. allow 10 minutes to succeed or else.
 
  - Look at data transfer stats (memory transferred, memory remaining to
transfer, disk transferred, disk remaining to transfer) to determine
if it is making forward progress.

I think this is a better option. We could define a timeout for the progress
and cancel if there is no progress. IIRC there were similar debates about it
in Ovirt, we could do something similar:
https://github.com/oVirt/vdsm/blob/master/vdsm/virt/migration.py#L430
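
To sketch what that could look like (a minimal illustration, not Nova code;
the two callables are assumptions standing in for the driver's stats source
and abort mechanism):

    import time

    PROGRESS_TIMEOUT = 150  # seconds without forward progress before giving up

    def watch_migration(get_data_remaining, cancel, poll=5):
        # get_data_remaining() is assumed to return the bytes of RAM/disk
        # still to transfer; cancel() is assumed to abort the migration job.
        best = None
        last_progress = time.time()
        while True:
            remaining = get_data_remaining()
            if remaining == 0:
                return  # migration finished
            if best is None or remaining < best:
                best = remaining            # forward progress was made
                last_progress = time.time()
            elif time.time() - last_progress > PROGRESS_TIMEOUT:
                cancel()                    # stalled: give up, stay on source
                raise RuntimeError('live migration made no progress')
            time.sleep(poll)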

 
  - Leave it up to the admin / user to decide if it has gone long enough
 
 The first is easy, while the second is harder but probably more reliable
 and useful for users.
 
 Second is the question of what to do when it looks to be failing
 
  - Cancel the migration - leave it running on source. Not good if the
admin is trying to evacuate a host.
 
  - Pause the VM - make it complete as non-live migration. Not good if
the guest workload doesn't like being paused
 
  - Increase the bandwidth permitted. There is a built-in rate limit in
QEMU overridable via nova.conf. Could argue that the admin should just
set their desired limit in nova.conf and be done with it, but perhaps
there's a case for increasing it in special circumstances, e.g. emergency
evacuate of a host where it is better to waste bandwidth & complete the job,
but for non-urgent scenarios better to limit bandwidth & accept failure?
 
  - Increase the maximum downtime permitted. This is the small time window
when the guest switches from source to dest. Too small and it'll never
switch, too large and it'll suffer unacceptable interruption.
 

In my opinion, it would be great if we could play with bandwidth and downtime
before cancelling the migration or pausing.
However, it makes sense only if there is some kind of progress in the transfer
stats and not a complete disconnect. In that case we should just cancel it.
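
For instance, a rough sketch of the escalation step (the policy values are
invented for illustration; the two setter calls are existing
libvirt-python APIs):

    import libvirt  # the domain object comes from an active migration

    def loosen_migration_limits(dom, bandwidth_mib_s, downtime_ms):
        # Both calls take effect while the migration is in flight, so we
        # can escalate before resorting to pause or cancel.
        dom.migrateSetMaxSpeed(bandwidth_mib_s, 0)   # MiB/s
        dom.migrateSetMaxDowntime(downtime_ms, 0)    # milliseconds

    # e.g. loosen_migration_limits(dom, 500, 1000) as a last resort
    # before cancelling.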

 We could do some of these things automatically based on some policy
 or leave them upto the cloud admin/tenant user via new APIs
 
 Third there's question of other QEMU features we could make use of to
 stop problems in the first place
 
  - Auto-converge flag - if you set this QEMU throttles back the CPUs
so the guest cannot dirty ram pages as quickly. This is nicer than
pausing CPUs altogether, but could still be an issue for guests
which have strong performance requirements
 
  - Page compression flag - if you set this QEMU does compression of
pages to reduce data that has to be sent. This is basically trading
off network bandwidth vs CPU burn. Probably a win unless you are
already highly overcommitted on CPU on the host
 
 Fourth there's a question of whether we should give the tenant user or
 cloud admin further APIs for influencing migration
 
  - Add an explicit API for cancelling migration ?
 
  - Add APIs for setting tunables like downtime, bandwidth on the fly ?
 
  - Or drive some of the tunables like downtime, bandwidth, or policies
like cancel vs paused from flavour or image metadata properties ?
 
  - Allow operations like evacuate to specify a live migration policy
e.g. switch to non-live migrate after 5 minutes ?
 
IMHO, an explicit API for cancelling migration is very much needed.
I remember cases when migrations took 8 hours or more, leaving the
admins helpless :)

Also, I very much like the idea of having tunables and policy to set
in the flavours and image properties.
To allow the administrators to set these as a template in the flavour
and also to let the users update/override or request these options,
as they (hopefully) know best what is running in their guests.
 



Re: [openstack-dev] [tripleo] Undercloud heat version expectations

2015-01-30 Thread Robert Collins
On 31 January 2015 at 10:25, Gregory Haynes g...@greghaynes.net wrote:
 Excerpts from Gregory Haynes's message of 2015-01-30 18:28:19 +0000:
  Excerpts from Steven Hardy's message of 2015-01-30 10:29:05 +0000:
  Hi all,
 
  I've had a couple of discussions lately causing me to question $subject,
  and in particular what our expectations are around tripleo-heat-templates
  working with older (e.g non trunk) versions of Heat in the undercloud.
 
  For example, in [1], we're discussing merging a template-level workaround
  for a heat bug which has been fixed for nearly 4 months (I've now proposed
  a stable/juno backport..) - this raises the question, do we actually
  support tripleo-heat-templates with a stable/juno heat in the undercloud?
 
  Related to this is discussion such as [2], where ideally I'd like us to
  start using some new-shiny features we've been landing in heat to make the
  templates cleaner - is this valid, e.g can I start proposing template
  changes to tripleo-heat-templates which will definitely require
  new-for-kilo heat functionality?
 
  Thanks,
 
  Steve
 
  [1] https://review.openstack.org/#/c/151038/
  [2] https://review.openstack.org/#/c/151389/
 

 Hey Steve,

 A while ago (last mid cycle IIRC) we decided that rather than maintain
 stable branches we would ensure that we could deploy stable openstack
 releases from trunk. I believe Heat falls under this umbrella, and we
 need to make sure that we support deploying at least the latest stable
 heat release.

 That being said, we're lacking in this plan ATM. We *really* should have
 a stable release CI job. We do have a spec though[1].

 Cheers,
 Greg


 [1] 
 http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/juno/backwards-compat-policy.rst

 We had a discussion in IRC about this and I wanted to bring up the points
 that were made on the ML. By the end of the discussion I think the
 consensus there was that we should resurrect the stable branches.
 Therefore, I am especially seeking input from people who have arguments
 for keeping our current 'deploy stable openstack from master' goals.

 Our goal of being able to deploy stable openstack branches using HEAD of
 tripleo tools makes some new feature development more difficult on
 master than it needs to be. Specifically, dprince has been feeling this
 pain in the tripleo/puppet integration work he is doing. There is also
 some new heat feature work we could benefit from (like the patches
 above) that we're going to have to wait multiple cycles for, or maintain
 multiple implementations of. Therefore we should look into resurrecting
 our stable branches.

 The backwards compat spec specifies that tripleo-image-elements and
 tripleo-heat-templates are co-dependent WRT backwards compat. This
 probably made some sense at the time of the spec writing since
 alternatives to tripleo-image-elements did not exist, but with the
 tripleo/puppet work we need to revisit this.

 Thoughts? Comments?

How will upgrade work, since we deploy the new stack, which then
upgrades the heat that is executing that stack? That was one of the
big drivers, if I remember correctly.

Secondly, one of the big bits of feedback we had from folk *using* the
tripleo image elements etc. was that backwards-incompatible churn was a
major pain point, which stable branches don't help with at all (since
6-monthly pain windows are still pain).

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-30 Thread Joe Gordon
On Fri, Jan 30, 2015 at 2:17 AM, Alan Pevec ape...@gmail.com wrote:

 2015-01-29 19:31 GMT+01:00 Joe Gordon joe.gord...@gmail.com:
  That means clients need overlapping dependencies with the stable
 branches.
  I don't think this is a reasonable requirement, and am not sure what we
 gain from it.

 Capping all Oslo and clients on stable/juno was reverted[1] due to
 issue with upgrades when you don't have overlap between master and
 previous release, has that been resolved?


Non-overlapping dependencies have been resolved, in that grenade now supports them.


To quote Doug's commit message from the linked patch:

Capping the requirements in the stable branch to a version lower than
the minimum used in master means we can't do rolling updates any more. We
should undo the caps and fix problems when we encounter them (either by
fixing the libraries or by capping with some overlap to allow updates).

By re-pinning stable/juno we are saying we cannot upgrade a service without
upgrading its python dependencies. If you are not using virtual
environments this means you cannot upgrade half the services on the box,
you must upgrade all or nothing.

As for the 'fix the problems when we encounter them' part, this turns out
to be a non-trivial burden on the development team (as opposed to the
stable-maint team), due to grenade and branchless tempest.

My understanding is this tradeoff has already been debated at one of the
summits, and the agreed upon path going forward was to pin stable branches,
just no one has done it yet.
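
For the record, "capping with some overlap" would look something like this
(version numbers invented purely for illustration):

    # stable/juno requirements.txt (hypothetical): capped, but the range
    # still overlaps master's minimum, so mixed-version boxes can upgrade
    oslo.config>=1.4.0,<2.0.0

    # master requirements.txt (hypothetical): minimum raised, still inside
    # the stable cap
    oslo.config>=1.6.0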


 Cheers,
 Alan

 [1] https://review.openstack.org/138546

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-01-30 Thread Jeremy Stanley
On 2015-01-30 17:18:00 -0600 (-0600), Dean Troyer wrote:
[...]
 Identify existing and future best practices in OpenStack REST APIs
 to enable new and existing projects to evolve and converge.
[...]

I'm shuddering at the anti-term "best practices" there. How about
"ideals" instead? More shorter means more twittable too, right?

(Quickly putting my brush away before someone hands me a paint can.)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-01-30 Thread Dean Troyer
On Friday, January 30, 2015, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-01-30 17:18:00 -0600 (-0600),
 I'm shuddering at the anti-term "best practices" there. How about
 "ideals" instead? More shorter means more twittable too, right?


I'd buy that even if it costs a buzzword point.

dt



-- 
-- 
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-01-30 Thread Jeremy Stanley
On 2015-01-30 18:25:43 -0600 (-0600), Dean Troyer wrote:
 I'd buy that even if it costs a buzzword point.

The Vancouver summit needs buzzword bingo as a social event.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-30 Thread Monty Taylor
On 01/28/2015 06:33 PM, Johannes Erdfelt wrote:
 On Wed, Jan 28, 2015, Mike Bayer mba...@redhat.com wrote:
 I can envision turning this driver into a total monster, adding
 C-speedups where needed but without getting in the way of async
 patching, adding new APIs for explicit async, and everything else.
 However, I’ve no idea what the developers have an appetite for.

 This is great information. I appreciate the work on evaluating it.

First response spawns a thread about eventlet ...

I would like to register my support for using this MySQL library.

 Can I bring up the alternative of dropping eventlet and switching to
 native threads?
 
 We spend a lot of time working on the various incompatibilities between
 eventlet and other libraries we use. It also restricts us by making it
 difficult to use an entire class of python modules (that use C
 extensions for performance, etc).
 
 I personally have spent more time than I wish to admit fixing bugs in
 eventlet and troubleshooting problems we've had.
 
 And it's never been clear to me why we *need* to use eventlet or
 green threads in general.
 
 Our modern Nova appears to only be weakly tied to eventlet and greenlet.
 I think we would spend less time replacing eventlet with native threads
 than we'll spend in the future trying to fit our code and dependencies
 into the eventlet shaped hole we currently have.
 
 I'm not as familiar with the code in other OpenStack projects, but from
 what I have seen, they appear to be similar to Nova and are only weakly
 tied to eventlet/greenlet.
 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Dean Troyer
On Fri, Jan 30, 2015 at 5:21 PM, Davanum Srinivas dava...@gmail.com wrote:

 Are there plans afoot to add support to switch on stackforge/ec2-api
 in devstack? add tempest tests etc? CI Would go a long way in
 alleviating concerns i think.


I would encourage DevStack support to be implemented as an external plugin
in the stackforge repo directly if they want to do the integration.  This
would allow local devstack runs to use it directly even if it doesn't get
into the CI gate.
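
For anyone wanting to try that, a hypothetical local.conf stanza (repo URL
assumed) using the external plugin interface would be along these lines:

    [[local|localrc]]
    # name, git URL and optional branch; DevStack clones the repo and
    # sources its devstack/ plugin directory
    enable_plugin ec2-api https://github.com/stackforge/ec2-api master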

Most of the EC2 exercises in DevStack are broken and I plan to go ahead and
remove them soon, plus disabling n-crt and n-obj which both only exist to
support euca2ools bundling.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] Deprecating use_slave in Nova

2015-01-30 Thread Matt Riedemann



On 1/30/2015 11:05 AM, Matthew Booth wrote:

At some point in the near future, hopefully early in L, we're intending
to update Nova to use the new database transaction management in
oslo.db's enginefacade.

Spec:
http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst

Implementation:
https://review.openstack.org/#/c/138215/

One of the effects of this is that we will always know when we are in a
read-only transaction, or a transaction which includes writes. We intend
to use this new contextual information to make greater use of read-only
slave databases. We are currently proposing that if an admin has
configured a slave database, we will use the slave for *all* read-only
transactions. This would make the use_slave parameter passed to some
Nova apis redundant, as we would always use the slave where the context
allows.
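
To illustrate, the spec proposes decorator-style markers roughly like the
following (a sketch based on the spec, not the final API; the module path
and the Instance model are assumptions):

    from oslo_db.sqlalchemy import enginefacade

    @enginefacade.reader
    def instance_get(context, instance_uuid):
        # Read-only transaction: with a slave configured, this is the kind
        # of call that could be routed to the slave automatically.
        return context.session.query(Instance).filter_by(
            uuid=instance_uuid).one()

    @enginefacade.writer
    def instance_update(context, instance_uuid, values):
        # Write transaction: always runs against the master.
        context.session.query(Instance).filter_by(
            uuid=instance_uuid).update(values)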

However, using a slave database has a potential pitfall when mixed with
separate write transactions. A caller might currently:

1. start a write transaction
2. update the database
3. commit the transaction
4. start a read transaction
5. read from the database

The client might expect data written in step 2 to be reflected in data
read in step 5. I can think of 3 cases here:

1. A short-lived RPC call is using multiple transactions

This is a bug which the new enginefacade will help us eliminate. We
should not be using multiple transactions in this case. If the reads are
in the same transaction as the write: they will be on the master, they
will be consistent, and there is no problem. As a bonus, lots of these
will be race conditions, and we'll fix at least some.

2. A long-lived task is using multiple transactions between long-running
sub-tasks

In this case, for example creating a new instance, we genuinely want
multiple transactions: we don't want to hold a database transaction open
while we copy images around. However, I can't immediately think of a
situation where we'd write data, then subsequently want to read it back
from the db in a read-only transaction. I think we will typically be
updating state, meaning it's going to be a succession of write transactions.

3. Separate RPC calls from a remote client

This seems potentially problematic to me. A client makes an RPC call to
create a new object. The client subsequently tries to retrieve the
created object, and gets a 404.

Summary: 1 is a class of bugs which we should be able to find fairly
mechanically through unit testing. 2 probably isn't a problem in
practise? 3 seems like a problem, unless consumers of cloud services are
supposed to expect that sort of thing.

I understand that slave databases can occasionally get very behind. How
behind is this in practice?

How do we use use_slave currently? Why do we need a use_slave parameter
passed in via RPC, when it should be apparent to the developer whether a
particular task is safe for out-of-date data?

Any chance they have some kind of barrier mechanism? e.g. block until
the current state contains transaction X.

General comments on the usefulness of slave databases, and the
desirability of making maximum use of them?

Thanks,

Matt



I'd recommend talking to Mike Wilson (geekinutah) about this.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-30 Thread Alan Pevec
 - Remove this requirement, no optional entries in requirements.txt, a
 'deployer' has to know what dependencies the components he wants to use have

Keystone is documenting its optional dependencies in test-requirements.txt;
look for the # Optional ... comments in
http://git.openstack.org/cgit/openstack/keystone/tree/test-requirements.txt


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-30 Thread Yuriy Taraday
On Thu Jan 29 2015 at 12:59:34 AM Mike Bayer mba...@redhat.com wrote:

 Hey list -


Hey, Mike.

While PyMySQL is lacking test coverage in some areas, has no external
 documentation, and has at least some areas where Python performance can be
 improved, the basic structure of the driver is perfectly fine and
 straightforward.  I can envision turning this driver into a total monster,
 adding C-speedups where needed but without getting in the way of async
 patching, adding new APIs for explicit async, and everything else.
  However, I’ve no idea what the developers have an appetite for.

 Please review the document at https://wiki.openstack.org/
 wiki/PyMySQL_evaluation.


That's great research! Prompted by it, I spent most of last evening reading
the PyMySQL sources. It looks like it doesn't need C speedups right now as
much as plain old Python optimizations. The protocol parsing code seems very
inefficient (chained struct.unpack's interleaved with data copying, and util
method calls that do the same struct.unpack with an unnecessary type
check... wow...). That's a huge opportunity for improvement.
I think it's worth spending time on the coming vacation to fix these slowdowns.
We'll see if they pay back the ~10% slowdown people are talking about.
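
To make the pattern concrete, here is an illustrative comparison (not
PyMySQL's actual code) of chained per-field unpacks versus a single
precompiled Struct:

    import struct

    def parse_header_slow(buf):
        # One unpack per field, each with its own slice (an extra copy):
        (flags,) = struct.unpack('<B', buf[0:1])
        (status,) = struct.unpack('<H', buf[1:3])
        (length,) = struct.unpack('<I', buf[3:7])
        return flags, status, length

    HEADER = struct.Struct('<BHI')  # same fields, compiled once

    def parse_header_fast(buf):
        # Single call, no intermediate slicing:
        return HEADER.unpack_from(buf, 0)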
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-01-30 Thread Duncan Thomas
+1

Very excited to see this in and usable

Duncan Thomas
On Jan 30, 2015 2:56 AM, Philipp Marek philipp.ma...@linbit.com wrote:

 Hi all,

  in Paris (and later on, on IRC and the mailing list) I began to ask
 around
  about providing a DRBD storage driver for Nova.
  This is an alternative to using iSCSI for block storage access, and would
  be especially helpful for backends already using DRBD for replicated
  storage.
 any news about this?
 https://review.openstack.org/#/c/134153/


 To reiterate:
   * Spec was submitted in time (Nov 13)
   * Spec wasn't approved, because the Cinder implementation
 (https://review.openstack.org/#/c/140451/) merge got delayed,
 because its prerequisite (https://review.openstack.org/#/c/135139/,
 Transition LVM Driver to use Target Objects) wasn't merged in time,
 because on deadline day (Dec. 17) Gerrit was so heavily used that
 this devstack run got timeouts against some python site during
 setup
   * Spec Freeze Exception was submitted in time

 http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html
   * Having DRBD for Cinder is good, but using the *same* protocol
 to the Nova nodes should really help performance (and reliability);
 for example, the transitions
    Network -> Kernel -> iSCSI daemon -> Kernel -> Block Device
 wouldn't be needed anymore; the Kernel could directly respond to the
 queries, and in the near future even using RDMA (where available).
 Reliability should be improved as the Nova node can access multiple
 storage nodes _at the same time_, so it wouldn't matter if one of them
 crashes for whatever reason.


 Please help us *now* to get the change in. It's only a few lines in
 a separate driver, so until it gets configured it won't even be
 noticed!


 And yes, of course we're planning to do CI for that Nova driver, too.


 Regards,

 Phil

 --
 : Ing. Philipp Marek
 : LINBIT | Your Way to High Availability
 : DRBD/HA support and consulting http://www.linbit.com :

 DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-30 Thread Steven Hardy
On Thu, Jan 29, 2015 at 12:31:17PM -0500, Zane Bitter wrote:
 On 29/01/15 12:03, Steven Hardy wrote:
 On Thu, Jan 29, 2015 at 11:41:36AM -0500, Zane Bitter wrote:
 IIUC keystone now allows you to add users to a domain that is otherwise
 backed by a read-only backend (i.e. LDAP). If this means that it's now
 possible to configure a cloud so that one need not be an admin to create
 users then I think it would be a really useful thing to expose in Heat. 
 Does
 anyone know if that's the case?
 
 I've not heard of that feature, but it's definitely now possible to
 configure per-domain backends, so for example you could have the heat
 domain backed by SQL and other domains containing real human users backed
 by a read-only directory.
 
 http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/

Perhaps we need to seek clarification from Adam/Henry, but my understanding
of that feature is not that it enables you to add users to domains backed
by a read-only directory, but rather that multiple backends are possible,
such that one domain can be backed by a read-only backend, and another
(different) domain can be backed by a different read/write one.

E.g. in the example above, you might have the freeipa domain backed by
read-only LDAP which contains your directory of human users, and you might
also have a different domain, e.g. services or heat, backed by a
read/write backend, e.g. SQL.
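
For reference, the per-domain backend configuration looks roughly like this
(paths, the domain name, and the Juno-era driver string are examples):

    # /etc/keystone/keystone.conf
    [identity]
    domain_specific_drivers_enabled = true
    domain_config_dir = /etc/keystone/domains

    # /etc/keystone/domains/keystone.heat.conf -- SQL-backed "heat" domain,
    # while other domains stay on the default (e.g. read-only LDAP) driver
    [identity]
    driver = keystone.identity.backends.sql.Identity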

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-30 Thread Alan Pevec
2015-01-29 19:31 GMT+01:00 Joe Gordon joe.gord...@gmail.com:
 That means clients need overlapping dependencies with the stable branches.
 I don't think this is a reasonable requirement, and am not sure what we gain 
 from it.

Capping all Oslo and clients on stable/juno was reverted[1] due to
issue with upgrades when you don't have overlap between master and
previous release, has that been resolved?


Cheers,
Alan

[1] https://review.openstack.org/138546

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-30 Thread Simon Pasquier
On Fri, Jan 30, 2015 at 3:05 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
wrote:

  -Original Message-
  From: Roman Podoliaka [mailto:rpodoly...@mirantis.com]
  Sent: Friday, January 30, 2015 2:12 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [api][nova] Openstack HTTP error codes
 
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.

 Yeah, current Nova v2 API (not v2.1 API) returns inconsistent messages
 in badRequest responses, because these messages are implemented in many
 places. But the Nova v2.1 API can return consistent messages in most cases
 because its input validation framework generates messages automatically[1].


When you say "most cases", you mean JSON schema validation only, right?
IIUC, this won't apply to the errors described by the OP such as invalid
key name, unknown security group, ...
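
A minimal sketch of that distinction (the schema is invented for
illustration; Nova's real schemas live under nova/api/validation/):

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {
            'server': {
                'type': 'object',
                'properties': {'key_name': {'type': 'string'}},
            },
        },
    }

    # Well-formed but semantically wrong: an unknown key pair name still
    # passes schema validation; only the later DB lookup can reject it.
    jsonschema.validate({'server': {'key_name': 'no-such-key'}}, schema)

    # Malformed input is what the validation framework catches:
    try:
        jsonschema.validate({'server': {'key_name': 42}}, schema)
    except jsonschema.ValidationError as err:
        print(err.message)  # "42 is not of type 'string'"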

Thanks,
Simon



 Thanks
 Ken'ichi Ohmichi

 ---
 [1]:
 https://github.com/openstack/nova/blob/master/nova/api/validation/validators.py#L104

  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
  
  
   On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
   ekudryash...@mirantis.com wrote:
  
   Hi, all
  
  
   Openstack APIs interact with each other and external systems
 partially by
   passing of HTTP errors. The only valuable difference between types of
    exceptions is HTTP codes, but current codes are generalized, so an
    external system can’t distinguish what actually happened.
  
  
   As an example two different failures below differs only by error
 message:
  
  
   request:
  
   POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
  
   Host: 192.168.122.195:8774
  
   X-Auth-Project-Id: demo
  
   Accept-Encoding: gzip, deflate, compress
  
   Content-Length: 189
  
   Accept: application/json
  
   User-Agent: python-novaclient
  
   X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf
  
   Content-Type: application/json
  
  
    {"server": {"name": "demo", "imageRef":
    "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "test",
    "flavorRef": "42", "max_count": 1, "min_count": 1,
    "security_groups": [{"name": "bar"}]}}
  
   response:
  
   HTTP/1.1 400 Bad Request
  
   Content-Length: 118
  
   Content-Type: application/json; charset=UTF-8
  
   X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0
  
   Date: Fri, 23 Jan 2015 10:43:33 GMT
  
  
    {"badRequest": {"message": "Security group bar not found for project
    790f5693e97a40d38c4d5bfdc45acb09.", "code": 400}}
  
  
   and
  
  
   request:
  
   POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
  
   Host: 192.168.122.195:8774
  
   X-Auth-Project-Id: demo
  
   Accept-Encoding: gzip, deflate, compress
  
   Content-Length: 192
  
   Accept: application/json
  
   User-Agent: python-novaclient
  
   X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71
  
   Content-Type: application/json
  
  
    {"server": {"name": "demo", "imageRef":
    "171c9d7d-3912-4547-b2a5-ea157eb08622", "key_name": "foo",
    "flavorRef": "42", "max_count": 1, "min_count": 1,
    "security_groups": [{"name": "default"}]}}
  
   response:
  
   HTTP/1.1 400 Bad Request
  
   Content-Length: 70
  
   Content-Type: application/json; charset=UTF-8
  
   X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5
  
   Date: Fri, 23 Jan 2015 10:39:43 GMT
  
  
    {"badRequest": {"message": "Invalid key_name provided.", "code": 400}}
  
  
   The former specifies an incorrect security group name, and the latter
 an
   incorrect keypair name. And the problem is, that just looking at the
   response body and HTTP response code an external system can’t
 understand
   what exactly went wrong. And parsing of error messages here is not
 the way
   we’d like to solve this problem.
  
  
   For the Compute API v 2 we have the shortened Error Code in the
   documentation at
  
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses
  
   such as:
  
   Error response codes
   computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
   unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
   itemNotFound (404), buildInProgress (409)
  
   Thanks to a recent update (well, last fall) to our build tool for docs.
  
   What we don't have is a table in the docs saying computeFault has this
   longer Description -- is that what you are asking for, for all
 OpenStack
   APIs?
  
   Tell me more.
  
   Anne
  
  
  
  
   Another example for solving this problem is AWS EC2 exception codes
 [1]
  
  
    So if we have some service based on OpenStack projects, it would be
    useful to have some concrete error codes (textual or numeric), which
    would allow one to determine what actually went wrong and later
    correctly process the obtained exception. 
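
For illustration, a response body along those lines might look like the
following (the errorCode field and its value are hypothetical, modelled on
the AWS codes referenced above):

    HTTP/1.1 400 Bad Request

    {"badRequest": {"message": "Invalid key_name provided.",
                    "code": 400,
                    "errorCode": "InvalidKeyPair.NotFound"}}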

[openstack-dev] [tripleo] Undercloud heat version expectations

2015-01-30 Thread Steven Hardy
Hi all,

I've had a couple of discussions lately causing me to question $subject,
and in particular what our expectations are around tripleo-heat-templates
working with older (e.g non trunk) versions of Heat in the undercloud.

For example, in [1], we're discussing merging a template-level workaround
for a heat bug which has been fixed for nearly 4 months (I've now proposed
a stable/juno backport..) - this raises the question, do we actually
support tripleo-heat-templates with a stable/juno heat in the undercloud?

Related to this is discussion such as [2], where ideally I'd like us to
start using some new-shiny features we've been landing in heat to make the
templates cleaner - is this valid, e.g can I start proposing template
changes to tripleo-heat-templates which will definitely require
new-for-kilo heat functionality?

Thanks,

Steve

[1] https://review.openstack.org/#/c/151038/
[2] https://review.openstack.org/#/c/151389/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-30 Thread Aleksandr Didenko
What do you guys think about switching the CentOS CI job [1] to HA with a single
controller (1 controller + 1 or 2 computes)? Just to verify that our
replacement of Simple mode works fine.

[1]
https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/

On Fri, Jan 30, 2015 at 10:54 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Thanks Igor for the quick turn over, excellent!

 On Fri, Jan 30, 2015 at 1:19 AM, Igor Belikov ibeli...@mirantis.com
 wrote:

 Folks,

 Changes in CI jobs have been made, for master branch of fuel-library we
 are running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN .
 Job naming schema has also been changed, so now it includes actual
 testgroup. Current links for master branch CI jobs are [1] and [2], all
 other jobs can be found here[3] or will show up in your gerrit reviews.
 ISO and environments have been updated to the latest ones.

 [1]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 [2]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
 [3]https://fuel-jenkins.mirantis.com
 --
 Igor Belikov
 Fuel DevOps
 ibeli...@mirantis.com





 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com
 wrote:

 Mike,

  Any objections / additional suggestions?

 no objections from me, and it's already covered by LP 1415116 bug [1]

 [1] https://bugs.launchpad.net/fuel/+bug/1415116

 On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Folks,
 one of the things we should not forget about is our Fuel CI gating
 jobs/tests. [1], [2].

 One of them actually runs simple mode. Unfortunately, I don't see
 details about the tests run for [1], [2], but I'm pretty sure it's the same set as
 [3], [4].

 I suggest to change tests. First of all, we need to get rid of simple
 runs (since we are deprecating it), and second - I'd like us to run Ubuntu
 HA + Neutron VLAN for one of the tests.

 Any objections / additional suggestions?

 [1]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/

 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko 
 svasile...@mirantis.com wrote:

 +1 to replace simple to HA with one controller

 /sv


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org
 ?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] check-grenade-dsvm-neutron failure

2015-01-30 Thread Anna Kamyshnikova
Hello everyone!

A change was merged yesterday [1] that proposes a default security group
table for Neutron. It passed all checks well. But now, from time to time, I
see that check-grenade-dsvm-neutron fails on db upgrade because there is more
than one default security group in one tenant. Having more than one default
security group is not expected behavior and I'm really curious how it
sometimes happens. I have filed a bug about this [2] against Tempest for now.

[1] - https://review.openstack.org/142101
[2] - https://bugs.launchpad.net/tempest/+bug/1416294
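
In the meantime, a quick diagnostic sketch (table/column names and the DB
URL are assumptions, not verified against the schema) to spot affected
tenants:

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql://user:pass@localhost/neutron')  # assumed
    dupes = text(
        "SELECT tenant_id, COUNT(*) AS n FROM securitygroups "
        "WHERE name = 'default' GROUP BY tenant_id HAVING COUNT(*) > 1")
    with engine.connect() as conn:
        for tenant_id, n in conn.execute(dupes):
            print('tenant %s has %d default security groups' % (tenant_id, n))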

Regards,
Ann Kamyshnikova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-30 Thread Boris Bobrov
On Friday 30 January 2015 01:01:00 Boris Bobrov wrote:
 On Thursday 29 January 2015 22:06:25 Morgan Fainberg wrote:
  I’d like to propose we stop setting the expectation that a downwards
  migration is a “good idea” or even something we should really support.
  Offering upwards-only migrations would also simplify the migrations in
  general. This downward migration path is also somewhat broken by the
  migration collapses performed in a number of projects (to limit the
  number of migrations that need to be updated when we change a key
  component such as oslo.db or SQL-Alchemy Migrate to Alembic).
  
  Are downward migrations really a good idea for us to support? Is this
  downward migration path a sane expectation? In the real world, would any
  one really trust the data after migrating downwards?
 
 Frankly, I don't see a case when a downgrade from n to (n - 1) in
  development cannot be replaced with a set of fixtures and an upgrade from 0
 to (n - 1).
 
  If we assume that an upgrade can possibly break something in production, we
  should not rely on fixing it by downgrading the schema, because a) the code
  depends on the latest schema and b) the breakage can take different forms
  and be unrecoverable.
 
  IMO downward migrations should be disabled. We could run a survey though;
  maybe someone has a story of using them in the field.

I've run a little survey, and there are people who have used downgrades for 
debugging different OpenStack releases.

So, I think I'm +1 on Mike Bayer's opinion.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Request Spec Freeze Exception (DRBD for Nova)

2015-01-30 Thread Philipp Marek
Hi all,

 in Paris (and later on, on IRC and the mailing list) I began to ask around 
 about providing a DRBD storage driver for Nova.
 This is an alternative to using iSCSI for block storage access, and would 
 be especially helpful for backends already using DRBD for replicated 
 storage.
any news about this?
https://review.openstack.org/#/c/134153/


To reiterate:
  * Spec was submitted in time (Nov 13)
  * Spec wasn't approved, because the Cinder implementation
(https://review.openstack.org/#/c/140451/) merge got delayed,
because its prerequisite (https://review.openstack.org/#/c/135139/,
Transition LVM Driver to use Target Objects) wasn't merged in time,
because on deadline day (Dec. 17) Gerrit was so heavily used that
this devstack run got timeouts against some python site during
setup
  * Spec Freeze Exception was submitted in time
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054225.html
  * Having DRBD for Cinder is good, but using the *same* protocol
to the Nova nodes should really help performance (and reliability);
for example, the transitions
  Network -> Kernel -> iSCSI daemon -> Kernel -> Block Device
wouldn't be needed anymore; the Kernel could directly respond to the
queries, and in the near future even using RDMA (where available).
Reliability should be improved as the Nova node can access multiple
storage nodes _at the same time_, so it wouldn't matter if one of them
crashes for whatever reason.


Please help us *now* to get the change in. It's only a few lines in
a separate driver, so until it gets configured it won't even be
noticed!


And yes, of course we're planning to do CI for that Nova driver, too.


Regards,
 
Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-30 Thread Mike Scherbakov
Thanks Igor for the quick turn over, excellent!

On Fri, Jan 30, 2015 at 1:19 AM, Igor Belikov ibeli...@mirantis.com wrote:

 Folks,

 Changes in CI jobs have been made, for master branch of fuel-library we
 are running CentOS HA + Nova VLAN and Ubuntu HA + Neutron VLAN .
 Job naming schema has also been changed, so now it includes actual
 testgroup. Current links for master branch CI jobs are [1] and [2], all
 other jobs can be found here[3] or will show up in your gerrit reviews.
 ISO and environments have been updated to the latest ones.

 [1]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.centos.ha_nova_vlan/
 [2]
 https://fuel-jenkins.mirantis.com/job/master.fuel-library.ubuntu.ha_neutron_vlan/
 [3]https://fuel-jenkins.mirantis.com
 --
 Igor Belikov
 Fuel DevOps
 ibeli...@mirantis.com





 On 29 Jan 2015, at 13:42, Aleksandr Didenko adide...@mirantis.com wrote:

 Mike,

  Any objections / additional suggestions?

 no objections from me, and it's already covered by LP 1415116 bug [1]

 [1] https://bugs.launchpad.net/fuel/+bug/1415116

 On Wed, Jan 28, 2015 at 6:42 PM, Mike Scherbakov mscherba...@mirantis.com
  wrote:

 Folks,
 one of the things we should not forget about is our Fuel CI gating
 jobs/tests. [1], [2].

 One of them actually runs simple mode. Unfortunately, I don't see
 details about the tests run for [1], [2], but I'm pretty sure it's the same set as
 [3], [4].

 I suggest to change tests. First of all, we need to get rid of simple
 runs (since we are deprecating it), and second - I'd like us to run Ubuntu
 HA + Neutron VLAN for one of the tests.

 Any objections / additional suggestions?

 [1]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_centos/
 [2]
 https://fuel-jenkins.mirantis.com/job/master_fuellib_review_systest_ubuntu/
 [3]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_centos/
 [4]
 https://fuel-jenkins.mirantis.com/job/6_0_fuellib_review_systest_ubuntu/

 On Wed, Jan 28, 2015 at 2:28 PM, Sergey Vasilenko 
 svasile...@mirantis.com wrote:

 +1 to replace simple to HA with one controller

 /sv


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-30 Thread Silvan Kaiser
Hello again,
I submitted a new patch set for this at
https://review.openstack.org/#/c/110722/,
looking forward to further reviews. :)
Best regards
Silvan


2015-01-28 10:19 GMT+01:00 Silvan Kaiser sil...@quobyte.com:

 Hi All!
 Thanks for the feedback!

 I'll remove xattr from the requirements in my change set.
  Currently I'm working on a workaround that executes 'getfattr' instead of the
  xattr API call. We can ensure getfattr is available via the package
  dependencies of our client, which has to be installed either way.

  I'm also checking out your proposal in parallel, but I cannot find any
  documentation about the 'configuration management manifests'. Do you mean
 the puppet manifests? Otherwise, could somebody please give me a pointer to
 their documentation, etc.?

 Best regards
 Silvan


 2015-01-27 18:32 GMT+01:00 Jay Pipes jaypi...@gmail.com:

 On 01/27/2015 09:13 AM, Silvan Kaiser wrote:

 Am 27.01.2015 um 16:51 schrieb Jay Pipes jaypi...@gmail.com:
 b) The Glance API image cache can use xattr if SQLite is not
 desired [1], and Glance does *not* list xattr as a dependency in
 requirements.txt. Swift also has a dependency on python-xattr [2].
 So, this particular Python library is not an unknown by any means.

 Do you happen to know how Glance handles this if the dep. is not
 handled in requirements.txt?


 Yep, it's considered a documentation thing and handled in configuration
 management manifests...

 Best,
 -jay


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][VMware] BP priorities

2015-01-30 Thread John Garbutt
Hi,

Thanks for these updates. That's super useful.

On 28 January 2015 at 13:21, Gary Kotton gkot...@vmware.com wrote:
 Ephemeral disk support -
 https://blueprints.launchpad.net/nova/+spec/vmware-ephemeral-disk-support

I have made that medium, to raise it above the others.

 The following BP’s need to update their status as they are complete:
 VSAN - https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support

I marked this as complete.

Do ping me on IRC if you don't have permission in launchpad to make
those changes.

 The following need to be abandoned:
 SOAP session management -
 https://blueprints.launchpad.net/nova/+spec/vmware-soap-session-management
 (this is now part of oslo.vmware)

Marked as obsolete.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][gate][stable] How eventlet 0.16.1 broke the gate

2015-01-30 Thread Bailey, Darragh

You may find the code for pip-compile
https://github.com/nvie/pip-tools/tree/future of interest for this, as I
think they may already have a solution for the deep dependency analysis.


I've started experimenting with it for git-upstream because GitPython has
had a habit of breaking stuff over the last couple of releases now :-(


What I like is:
* Doesn't require an extra tool before using 'pip install'
** Some may want to regen the dependencies, but it's optional and the
common python dev approach is retained
* Stable releases are guaranteed to use the versions of dependencies
they were released and verified against
* Improves on the guarantee of gated branch CI
** The idea that if you sync with upstream any test failures are due to
your local changes
** Which is not always true if updated deps can break stuff


On the flip side:
* You remain exposed to security issues in python code until you
manually update
* Development cycle doesn't move forward automatically; you may not see
compatibility issues until late, when forced to move one of the deps forward


I think the cons can be handled by some additional CI jobs to update the
pins on a regular basis and pass them through the standard gates, and
potentially to auto-approve during development cycles if they pass
(already getting the latest matching ones, so no big diff here). Some
decisions on the trade-off around whether this should be done for stable
releases automatically, or periodically with manual approval, would
have to be made.


Did I say how much I like the fact that it doesn't require another tool
before just being able to use 'pip install'?


To experiment with it:
virtualenv .venv/pip-tools
source .venv/pip-tools/bin/activate
pip install git+https://github.com/nvie/pip-tools.git@future
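
A hypothetical round trip then looks like this (file names assumed;
pip-compile writes the pinned requirements.txt from the loose spec):

    echo "GitPython>=0.3.2" > requirements.in
    pip-compile requirements.in     # emits pinned requirements.txt
    pip install -r requirements.txt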

Regards,
Darragh Bailey

Nothing is foolproof to a sufficiently talented fool - Unknown

On 22/01/15 03:45, Joshua Harlow wrote:
 A slightly better version that starts to go deeper (and downloads
 dependencies of dependencies and extracts there egg_info to get at
 these dependencies...)

 https://gist.github.com/harlowja/555ea019aef4e901897b

 Output @ http://paste.ubuntu.com/9813919/

 When ran on the same 'test.txt' mentioned below...

 Happy hacking!

 -Josh

 Joshua Harlow wrote:
 A run that shows more of the happy/desired path:

 $ cat test.txt
  six>1
  taskflow<0.5
  $ python pippin.py -r test.txt
  Initial package set:
  - six ['>1']
  - taskflow ['<0.5']
 Deep package set:
 - six ['==1.9.0']
 - taskflow ['==0.4.0']

 -Josh

 Joshua Harlow wrote:
 Another thing that I just started whipping together:

 https://gist.github.com/harlowja/5e39ec5ca9e3f0d9a21f

 The idea for the above is to use pip to download dependencies, but
 figure out what versions will work using our own resolver (and our own
 querying of 'http://pypi.python.org/pypi/%s/json') that just does a
 very
 deep search of all requirements (and requirements of requirements...).

 The idea for that is that the probe() function in that gist will
 'freeze' a single requirement, then dive down into further requirements
 and ensure compatibility while that 'diving' (aka, recursion into
 further requirements) is underway. If an incompatibility is found then
 the recursion will back-track and try to freeze a different version of
 a desired package (and repeat...).

 To me this kind of deep finding would be a potential way of making this
 work in a way that basically only uses pip for downloading (and does the
 deep matching/probing on our own), since once the algorithm above stops
 backtracking and finds a matching set of requirements that will all work
 together, the program can exit (and this set can then be used as the
 master set for openstack; at that point we might have to tell people to
 not use pip, or to only use pip --download to fetch the compatible
 versions).

 It's not completed but it could be complementary to what others are
 working on; feel free to hack away :)

 So far the following works:

 $ cat test.txt
 six>1
 taskflow>1

 $ python pippin.py -r test.txt
 Initial package set:
 - six ['>1']
 - taskflow ['>1']
 Traceback (most recent call last):
   File "pippin.py", line 168, in <module>
     main()
   File "pippin.py", line 162, in main
     matches = probe(initial, {})
   File "pippin.py", line 139, in probe
     result = probe(requirements, gathered)
   File "pippin.py", line 129, in probe
     m = find_match(pkg_name, req)
   File "pippin.py", line 112, in find_match
     return match_available(req.req, find_versions(pkg_name))
   File "pippin.py", line 108, in match_available
     " matches '%s' (tried %s)" % (req, looked_in))
 __main__.NotFoundException: No requirement found that matches
 'taskflow>1' (tried ['0.6.1', '0.6.0', '0.5.0', '0.4.0', '0.3.21',
 '0.2', '0.1.3', '0.1.2', '0.1.1', '0.1'])

 I suspect all that needs to be added is the code that is marked with
 FIXME/TODO there, and this kind of recursive back-tracking might just do
 the trick...
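
 For anyone who doesn't want to open the gist, here's a rough,
 self-contained sketch of that back-tracking idea. The probe() name
 echoes the gist, but the fake index/metadata and the bodies below are
 illustrative only - this is not the gist's actual code:

 class NotFoundException(Exception):
     pass

 # Fake package index: available versions, newest first.
 AVAILABLE = {
     'taskflow': ['0.6.1', '0.4.0'],
     'six': ['1.9.0', '1.5.2'],
 }

 # Fake metadata: what each pinned (package, version) itself requires.
 DEPENDS = {
     ('taskflow', '0.6.1'): {'six': {'1.9.0'}},
     ('taskflow', '0.4.0'): {'six': {'1.5.2', '1.9.0'}},
 }

 def probe(wanted, frozen):
     # wanted: {name: set(acceptable versions)}; frozen: {name: version}
     if not wanted:
         return frozen
     name = next(iter(wanted))
     allowed = wanted[name]
     rest = {k: v for k, v in wanted.items() if k != name}
     if name in frozen:
         if frozen[name] not in allowed:
             raise NotFoundException(name)
         return probe(rest, frozen)
     for version in AVAILABLE.get(name, []):
         if version not in allowed:
             continue
         # Merge this version's own requirements in, intersecting with
         # constraints already gathered for the same packages.
         merged = dict(rest)
         for dep, versions in DEPENDS.get((name, version), {}).items():
             merged[dep] = merged.get(dep, versions) & versions
         try:
             return probe(merged, dict(frozen, **{name: version}))
         except NotFoundException:
             continue  # back-track: try the next candidate version
     raise NotFoundException('nothing matches %s' % name)

 print(probe({'taskflow': {'0.6.1', '0.4.0'}, 'six': {'1.5.2'}}, {}))
 # -> {'taskflow': '0.4.0', 'six': '1.5.2'}: taskflow 0.6.1 is tried
 # first, conflicts on six, and the loop backs off to 0.4.0.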

 -Josh

 Joe Gordon wrote:


 On Fri, Jan 16, 2015 at 12:25 PM, Joe Gordon joe.gord...@gmail.com
 

Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-01-30 Thread Zane Bitter

On 30/01/15 05:20, Steven Hardy wrote:

On Thu, Jan 29, 2015 at 12:31:17PM -0500, Zane Bitter wrote:

On 29/01/15 12:03, Steven Hardy wrote:

On Thu, Jan 29, 2015 at 11:41:36AM -0500, Zane Bitter wrote:

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat. Does
anyone know if that's the case?


I've not heard of that feature, but it's definitely now possible to
configure per-domain backends, so for example you could have the heat
domain backed by SQL and other domains containing real human users backed
by a read-only directory.


http://adam.younglogic.com/2014/08/getting-service-users-out-of-ldap/


Perhaps we need to seek clarification from Adam/Henry, but my understanding
of that feature is not that it enables you to add users to domains backed
by a read-only directory, but rather that multiple backends are possible,
such that one domain can be backed by a read-only backend, and another
(different) domain can be backed by a different read/write one.

E.g. in the example above, you might have the freeipa domain backed by
read-only LDAP which contains your directory of human users, and you might
also have a different domain, e.g. "services" or "heat", backed by a
read/write backend, e.g. SQL.


Ah, you're right, I've been misinterpreting that post this whole time. 
Thanks!


- ZB




Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-30 Thread Thomas Goirand
On 01/29/2015 05:24 PM, Doug Hellmann wrote:
 
 
 On Thu, Jan 29, 2015, at 11:03 AM, Thomas Goirand wrote:
 On 01/24/2015 02:01 AM, Doug Hellmann wrote:


 On Fri, Jan 23, 2015, at 07:48 PM, Thomas Goirand wrote:
 Hi,

 I've just noticed that oslo.log made it to global-requirements.txt 9
 days ago. How come we are still adding some name.spaced oslo libs?
 Wasn't the outcome of the discussion in Paris that we shouldn't do that
 anymore, and that we should be using oslo-log instead of oslo.log?

 Is there something that I am missing here?

 Cheers,

 Thomas Goirand (zigo)

 The naming is described in the spec:
 http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html

 tl;dr - We did it this way to make life easier for the packagers.

 Doug

 Hi Doug,

 Sorry for the late reply.

 Well, you're not making the life of *package maintainers* easier;
 it's in fact the opposite, I'm afraid.

 The Debian policy is that Python module packages should be named after
 the import statement in a source file. Meaning that if we do:

 import oslo_db

 then the package should be called python-oslo-db. This means that I will
 have to rename all the Debian packages to remove the dot and put a dash
 instead. But by doing so, if OpenStack upstream is keeping the old
 naming convention, then all the requirements.txt will be wrong (by
 wrong, I mean from my perspective as a package maintainer), and the
 automated dependency calculation of dh_python2 will put package names
 with dots instead of dashes.

 So, what is going to happen, is that I'll have to, for each and every
 package, build a dictionary of translations in debian/pydist-overrides.
 
 That's unfortunate, but you're the only packager who seems to have this
 issue.
 
 I've already spent 2 months more time working on this transition than I
 planned, so I'm not planning to do anything else disruptive with it this
 cycle. If it remains a problem, or some of the other packagers support
 renaming the packages, we can discuss at the L summit a rename to be done
 during the L cycle.
 
 Doug

Hi Doug,

I've been thinking about this issue for a long time. And finally, I
don't think it's as bad as I previously thought.

What I could do is the opposite of what I wrote above: package
oslo.db as python-oslo.db and just add a Provides: python-oslo-db, and
that's it. Since we have oslo.db in the requirements.txt, the automated
dh_python2 dependency calculation will only add python-oslo.db in the
Depends: of the packages.

The only thing is that I have already uploaded python-oslo-context (with
a dash), and this has to be fixed, but that's only a single lib, so it
should be fine. I'll have to synchronize with the guys from Canonical
about this, so we do the same thing on both distros.

Sorry for waving so hard about a false positive,

Cheers,

Thomas Goirand (zigo)

P.S: By the way, I'm *very* happy that you did all the work of moving
away from the namespace, Doug. Thanks a lot for that huge work!




Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-01-30 Thread Andrew Pashkin
Working on this issue I encountered another problem.

Most indices in the project have no names, and because of that a
developer must reverse-engineer them in every migration.
Read more about that here [1].

SQLAlchemy and Alembic provide a feature for generating constraint
names by pattern, specifically to resolve this kind of issue [1].

I decided to introduce usage of this feature in Murano.
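
For reference, this feature boils down to attaching name patterns to the
MetaData. A minimal sketch in Python (the patterns below are the common
recipe from the Alembic docs, not necessarily the ones Murano will adopt):

from sqlalchemy import MetaData

NAMING_CONVENTION = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}

# Every table defined against this MetaData gets deterministic
# constraint names, so later migrations can refer to constraints by
# name instead of reverse-engineering what the database generated.
metadata = MetaData(naming_convention=NAMING_CONVENTION)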

I've implemented a solution that preserves backward-compatibility
for migrations and allows all constraints to be renamed according
to the patterns safely [2]. With it, users that have already deployed
Murano will be able to upgrade to the new version without issues.

There are downsides to this solution:
- It assumes that all versions of Postgres and MySQL use the
  same patterns for generating constraint names.
- It is hard to implement a test for this solution, and such a test
  would be slow, because it has to reproduce the situation where a user
  has the old migrations applied and then tries to upgrade.

Another possible solution is to drop all current migrations and
introduce a new one with correct names.
This brings us to a new problem - migrations and models are out of sync
right now in multiple places: there are different field types in
migrations and models, migrations introduce indices that are absent
from the models, etc.

And this solution has a great downside - it is not backward-compatible,
so all existing users would lose their data.

We (the Murano team) should decide which solution we want to use.


[1]
http://alembic.readthedocs.org/en/latest/naming.html#tutorial-constraint-names
[2] https://review.openstack.org/150818

-- 
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com



Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-30 Thread Sandy Walsh


From: Johannes Erdfelt [johan...@erdfelt.com]
Sent: Thursday, January 29, 2015 9:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

On Thu, Jan 29, 2015, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 The concept that there is a utility that can (and in many cases
 willfully) cause permanent, and in some cases irrevocable, data loss
 from a simple command line interface sounds crazy when I try and
 explain it to someone.

 The more I work with the data stored in SQL, the more I think we
 should really recommend the tried-and-true best practice when trying
 to revert a migration: restore your DB to a known good state.

You mean like restoring from backup?

Unless your code deploy fails before it has any chance of running,
you could have had new instances started or existing instances changed,
and then restoring from backup would lose that data.

If you meant another way of restoring your data, then there are
some strategies that downgrades could employ that don't lose data,
but there is nothing that can handle 100% of cases.
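
(One data-preserving strategy, as an illustrative Alembic-style sketch -
the table and column names are made up, and this is not code from any
OpenStack project: on downgrade, park the data the older schema can't
hold in a side table instead of destroying it, so a later upgrade could
put it back.)

from alembic import op

def downgrade():
    # Archive the column's data before removing it from the live table.
    op.execute(
        "CREATE TABLE instances_extra_archive AS "
        "SELECT id, extra_field FROM instances")
    op.drop_column('instances', 'extra_field')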

All of that said, for the Rackspace Public Cloud, we have never rolled
back our deploy. We have always rolled forward for any fixes we needed.

From my perspective, I'd be fine with doing away with downgrades, but
I'm not sure how to document that deployers should roll forward if they
have any deploy problems.

JE

Yep ... downgrades simply aren't practical with a SQL-schema based
solution. Too coarse-grained.

We'd have to move to a schema-less model, per-record versioning and
up-down conversion at the Nova Objects layer. Or, possibly introduce
more nodes that can deal with older versions. Either way, that's a big
hairy change.
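
To sketch what per-record versioning means here - this is loosely
modelled on the oslo.versionedobjects pattern, and the class, field and
version numbers below are illustrative, not Nova's actual code:

class VersionedRecord(object):
    VERSION = '1.1'  # version 1.1 added the 'flavor' field

    def __init__(self, **fields):
        self.fields = fields

    def obj_make_compatible(self, primitive, target_version):
        # Down-convert a serialized record so an older node can read it.
        # (String comparison is good enough for this toy example.)
        if target_version < '1.1':
            primitive.pop('flavor', None)  # field unknown to 1.0 readers
        return primitive

record = VersionedRecord(id=42, flavor='m1.small')
wire = record.obj_make_compatible(dict(record.fields), '1.0')
print(wire)  # {'id': 42} - safe for a node that only speaks 1.0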

The upgrade code is still required, so removing the downgrades (and
tests, if any) is a relatively small change to the code base.

The bigger issue is the anxiety the deployer will experience until a
patch lands.

-S



Re: [openstack-dev] problems with instance consoles and novnc

2015-01-30 Thread Jesse Pretorius
On 29 January 2015 at 04:57, Chris Friesen chris.frie...@windriver.com
wrote:

 On 01/28/2015 10:33 PM, Mathieu Gagné wrote:

 On 2015-01-28 11:13 PM, Chris Friesen wrote:

 Anyone have any suggestions on where to start digging?

 We have a similar issue which has yet to be properly diagnosed on our
 side.

 One workaround which looks to be working for us is enabling the private
 mode
 in the browser. If it doesn't work, try deleting your cookies.

 Can you see if those workarounds work for you?


 Neither of those seems to work for me.  I still get a multi-second delay
 and then the red bar with "Connect timeout".

 I suspect it's something related to websockify, but I can't figure out
 what.


In some versions of websockify, and the related noVNC versions that use it,
I've seen the same behaviour. This is due to the way websockify tries to
detect the protocol to use. It ends up making a localhost connection, and
the browser rejects it as an unsafe operation.

It was fixed in later versions of websockify.

Have you tried manually updating the NoVNC and websockify files to later
versions from source?


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-30 Thread Stefano Maffulli
On Thu, 2015-01-29 at 16:01 -0800, Michael Still wrote:
 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

I'd love to get to the root cause before we jump to look for solutions.
The story we hear is that EC2 is important and according to the user
survey there are users that seem to be using OpenStack's EC2 code. Nova
developers point out though that the EC2 code is broken and unusable. So
something is out of whack: either the user reports are more 'wishes' than
actual usage, or the openstack code is not as bad as reported, or someone
else is feeding these users good code (that is not in openstack
repositories), or something else.

I would suggest that we start by reaching out to these users. 

Which questions shall we ask them? I'd start from:

* where did you get the EC2 API: vanilla openstack (version, etc) or via
a vendor? which vendor?
* how do you use the EC2 code? Anecdotes are enough I think at this
point.


Tim and user committee: do you think I or Tom can get the list of
respondents to the user survey who said they use EC2, so we can ask
them more questions?

If not, we can start by asking on the operators list and blog posts and
wait for someone to come forward.






Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-01-30 Thread Kevin Benton
But they will if we document it well, which is what Salvatore suggested.

I don't think this is a good approach, and it's a big part of why I started
this thread. Most of the deployers/operators I have worked with only read
the bare minimum documentation to get a Neutron deployment working and they
only adjust the settings necessary for basic functionality.

We have an overwhelming amount of configuration options and adding a note
specifying that a particular setting for DHCP leases has been optimized to
reduce logging at the cost of long downtimes during port IP address updates
is a waste of time and effort on our part.

I think the current default value is also more indicative of something
you'd find in your house, or at work - i.e. stable networks.

Tenants don't care what the DHCP lease time is or that it matches what they
would see from a home router. They only care about connectivity.

One solution is to disallow this operation.

I want this feature to be useful in deployments by default, not strip it
away. You can probably do this with /etc/neutron/policy.json without a code
change if you wanted to block it in a deployment like yours where you have
such a high lease time.

Perhaps letting the user set it, but allow the admin to set the valid
range for min/max?  And if they don't specify they get the default?

Tenants wouldn't have any reason to adjust this default. They would be even
less likely than the operator to know about this weird relationship between
a DHCP setting and the amount of time they lose connectivity after updating
their ports' IPs.

It impacts anyone that hasn't changed from the default since July 2013 and
later (Havana), since if they don't notice, they might get bitten by it.

Keep in mind that what I am suggesting with the lease-renewal-time would be
separate from the lease expiration time. The only difference that an
operator would see on upgrade (if using the defaults) is increased DHCP
traffic and more logs to syslog from dnsmasq. The lease time would still be
the same so the downtime windows for DHCP agents would be maintained. That
is much less of an impact than many of the non-config changes we make
between cycles.

To clarify: even with the dhcp-renewal-time option I am proposing, you are
still opposed to setting it to anything low because of logging and the
~24 bps background DHCP traffic per VM?
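
(As a back-of-the-envelope check on that figure - the packet sizes and
renewal interval below are my assumptions, not numbers from the thread:)

request_bytes = 350      # a typical DHCPREQUEST on the wire, roughly
ack_bytes = 350          # a typical DHCPACK, roughly
renew_interval_s = 240   # e.g. a 4-minute renewal time

bps = (request_bytes + ack_bytes) * 8 / renew_interval_s
print(round(bps))  # ~23 - right around the ~24 bps quoted above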

On Thu, Jan 29, 2015 at 7:11 PM, Brian Haley brian.ha...@hp.com wrote:

 On 01/29/2015 05:28 PM, Kevin Benton wrote:
 How is Neutron breaking this?  If I move a port on my physical switch to a
  different subnet, can you still communicate with the host sitting on it?
  Probably not, since it has a view of the world (next-hop router) that no
  longer exists, and the network won't route packets for its old IP address
  to the new location.  It has to wait for its current DHCP lease to tick
  down to the point where it will use broadcast to get a new one, after
  which point it will work.
 
  That's not just moving to a different subnet. That's moving to a
  different broadcast domain. Neutron supports multiple subnets per
  network (broadcast domain). An address on either subnet will work.
  The router has two interfaces into the network, one on each subnet.[2]
 
 
 Does it work on Windows VMs too?  People run those in clouds too.  The
  point is that if we don't know if all the DHCP clients will support it
  then it's a non-starter since there's no way to tell from the server side.
 
  It appears they do.[1] Even for clients that don't, the worst case
  scenario is just that they are stuck where we are now.
 
 ... then the deployer can adjust the value upwards..., hmm, can they
  adjust it downwards as well?  :)
 
  Yes, but most people doing initial openstack deployments don't, and
  wouldn't think to without understanding the intricacies of the security
  groups filtering in Neutron.

 But they will if we document it well, which is what Salvatore suggested.

 I'm glad you're willing to boil the ocean to try and get the default
  changed, but is all this really worth it when all you have to do is edit
  the config file in your deployment?  That's why the value is there in
  the first place.
 
  The default value is basically incompatible with port IP changes. We
  shouldn't be shipping defaults that lead to half-broken functionality.
  What I'm understanding is that the current default value is there to
  work around shortcomings in dnsmasq. This is an example of
  implementation details leaking out and leading to bad UX.

 I think the current default value is also more indicative of something
 you'd find in your house, or at work - i.e. stable networks.

 I had another thought on this Kevin, hoping that we could come to some
 resolution, because sure, shipping broken functionality isn't great.  But
 here's the rub - how do we make a change in a fixed IP work in *all*
 deployments? Since the end-user can't set this value, they'll run into
 this problem in my