Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Julien Vey
+1

2014-10-01 0:20 GMT+02:00 Angus Salkeld asalk...@mirantis.com:

 +1
 On 01/10/2014 3:08 AM, Adrian Otto adrian.o...@rackspace.com wrote:

 Solum Core Reviewer Team,

 I propose the following change to our core reviewer group:

 -lifeless (Robert Collins) [inactive]
 +murali-allada (Murali Allada)
 +james-li (James Li)

 Please let me know your votes (+1, 0, or -1).

 Thanks,

 Adrian



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Noorul Islam K M
Adrian Otto adrian.o...@rackspace.com writes:

 Solum Core Reviewer Team,

 I propose the following change to our core reviewer group:

 -lifeless (Robert Collins) [inactive]
 +murali-allada (Murali Allada)
 +james-li (James Li)

 Please let me know your votes (+1, 0, or -1).


+1

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-10-01 Thread Chmouel Boudjnah
On Wed, Oct 1, 2014 at 3:47 AM, Adam Young ayo...@redhat.com wrote:

 1.  Identify the roles for the APIs that Cinder is going to be calling on
 swift based on Swifts policy.json


FYI: there is no Swift policy.json in the mainline code; there is an external
middleware available that provides it here:
https://github.com/cloudwatt/swiftpolicy.git.
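
(If it helps anyone trying it out: wiring such a middleware into Swift follows
the usual paste pipeline pattern, roughly like the sketch below. The filter
name, entry point and option names here are placeholders - check the
swiftpolicy README for the real ones.)

# proxy-server.conf - illustrative sketch only
[pipeline:main]
pipeline = catch_errors authtoken keystoneauth swiftpolicy proxy-server

[filter:swiftpolicy]
# entry point and option names are assumptions, not taken from the project
use = egg:swiftpolicy#swiftpolicy
policy = /etc/swift/policy.json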

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-01 Thread Dmitry Tantsur

On 09/30/2014 02:03 PM, Soren Hansen wrote:

2014-09-12 1:05 GMT+02:00 Jay Pipes jaypi...@gmail.com:

If Nova was to take Soren's advice and implement its data-access layer
on top of Cassandra or Riak, we would just end up re-inventing SQL
Joins in Python-land.


I may very well be wrong(!), but this statement makes it sound like you've
never used e.g. Riak. Or, if you have, not done so in the way it's
supposed to be used.

If you embrace an alternative way of storing your data, you wouldn't just
blindly create a container for each table in your RDBMS.

For example: In Nova's SQL-based datastore we have a table for security
groups and another for security group rules. Rows in the security group
rules table have a foreign key referencing the security group to which
they belong. In a datastore like Riak, you could have a security group
container where each value contains not just the security group
information, but also all the security group rules. No joins in
Python-land necessary.
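
(To make that concrete, here is a rough, illustrative sketch - not Nova code -
of what that denormalized model looks like with the Python Riak client; bucket
and key names are made up:)

import riak

client = riak.RiakClient(pb_port=8087)
groups = client.bucket('security_groups')

# The group and all of its rules live in one value, so no join is needed.
groups.new('sg-1234', data={
    'name': 'web',
    'description': 'web servers',
    'rules': [
        {'protocol': 'tcp', 'from_port': 80, 'to_port': 80, 'cidr': '0.0.0.0/0'},
        {'protocol': 'tcp', 'from_port': 443, 'to_port': 443, 'cidr': '0.0.0.0/0'},
    ],
}).store()

# A single fetch returns the security group together with its rules.
sg = groups.get('sg-1234').data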


I've said it before, and I'll say it again. In Nova at least, the SQL
schema is complex because the problem domain is complex. That means
lots of relations, lots of JOINs, and that means the best way to query
for that data is via an RDBMS.


I was really hoping you could be more specific than best/most
appropriate so that we could have a focused discussion.

I don't think relying on a central data store is in any conceivable way
appropriate for a project like OpenStack. Least of all Nova.

I don't see how we can build a highly available, distributed service on
top of a centralized data store like MySQL.
Coming from a Skype background I can assure you that you definitely can, 
depending on your needs (and our experiments with e.g. MongoDB ended 
very badly: it just died under IO load that our PostgreSQL handled 
as normal). I mean, it's a complex topic and I see a lot of people 
switching to NoSQL and a lot of people switching away from it. NoSQL is not a 
silver bullet for scalability. Just my 0.5.


/me disappears again



Tens or hundreds of thousands of nodes, spread across many, many racks
and datacentre halls are going to experience connectivity problems[1].

This means that some percentage of your infrastructure (possibly many
thousands of nodes, affecting many, many thousands of customers) will
find certain functionality not working on account of your datastore not
being reachable from the part of the control plane they're attempting to
use (or possibly only being able to read from it).

I say over and over again that people should own their own uptime.
Expect things to fail all the time. Do whatever you need to do to ensure
your service keeps working even when something goes wrong. Of course
this applies to our customers too. Even if we take the greatest care to
avoid downtime, customers should spread their workloads across multiple
availability zones and/or regions and probably even multiple cloud
providers. Their service towards their users is their responsibility.

However, our service towards our users is our responsibility. We should
take the greatest care to avoid having internal problems affect our
users.  Building a massively distributed system like Nova on top of a
centralized data store is practically a guarantee of the opposite.


For complex control plane software like Nova, though, an RDBMS is the
best tool for the job given the current lay of the land in open source
data storage solutions matched with Nova's complex query and
transactional requirements.


What transactional requirements?


Folks in these other programs have actually, you know, thought about
these kinds of things and had serious discussions about alternatives.
It would be nice to have someone acknowledge that instead of snarky
comments implying everyone else has it wrong.


I'm terribly sorry, but repeating over and over that an RDBMS is the
best tool without further qualification than Nova's data model is
really complex reads *exactly* like a snarky comment implying everyone
else has it wrong.

[1]: http://aphyr.com/posts/288-the-network-is-reliable




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] minimum python support version for juno

2014-10-01 Thread Osanai, Hisashi

Hi,

I would like to know the minimum supported Python version for Juno.
I checked the following memo. My understanding is that Python 2.6 will be 
supported in Juno and dropped before Kilo, so it will be dropped in 
one of the stable releases in Juno. Is this a correct understanding?

https://etherpad.openstack.org/p/juno-cross-project-future-of-python

Want to drop 2.6 ASAP, currently blocked on SLES confirmation that 2.6 is no longer needed
Declare intent that it will definitely go away by K (for services)
Make sure that every *python module* (dependencies, and not only core projects) that we maintain declare non-support in 2.6 if they stop supporting it
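
(For context, "declare non-support in 2.6" for a library usually just means
dropping the trove classifier, e.g. in a pbr-style setup.cfg - illustrative
snippet only:)

[metadata]
classifier =
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
# the "Programming Language :: Python :: 2.6" line is removed once 2.6 is unsupported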

Cheers,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread Tom Fifield
Hi Joe,

On 01/10/14 09:10, joehuang wrote:
 OpenStack cascading: to integrate multi-site / multi-vendor OpenStack 
 instances into one cloud with OpenStack API exposed.
 Cells: a single OpenStack instance scale up methodology

Just to let you know - there are actually some users out there that use
cells to "integrate multi-site / multi-vendor OpenStack instances into
one cloud with OpenStack API exposed", and this is their main reason
for using cells - not as a scale up methodology.


Regards,

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-10-01 Thread Sławek Kapłoński

Hello,

On 2014-09-30 20:28, Timur Nurlygayanov wrote:

Hi Slawek,

we faced the same error and this is issue with Swift.
We can see 100% disk usage on the Swift node during the file upload
and looks like Swift can't send info about status of the file loading
in time.

On our environments we found the workaround for this issue:

1. Set  swift_store_large_object_size = 200 in glance.conf.


I have it set to 1024 - should it be lower? Is the value I have too 
big now?




2. Add to Swift proxy-server.conf:

[DEFAULT]
 ...
 node_timeout = 90

Perhaps we could set this value as the default for this parameter
instead of '30'?
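
(Putting the two workarounds together, the relevant settings look roughly
like this - the values are illustrative and should be tuned per deployment:)

# glance-api.conf
[DEFAULT]
swift_store_large_object_size = 200        # MB; larger images are uploaded in chunks
swift_store_large_object_chunk_size = 200  # MB per chunk

# swift proxy-server.conf
[DEFAULT]
node_timeout = 90                          # default is 30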

Regards,

Timur

On Tue, Sep 30, 2014 at 7:41 PM, Sławek Kapłoński
sla...@kaplonski.pl wrote:


Hello,

I can't find that upload in the previous logs, but I have now tried to
upload the same image once again. In glance there was exactly the same
error. In the swift logs I have:

Sep 30 17:35:10 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
30/Sep/2014/15/35/10 HEAD
/v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance HTTP/1.0 204
Sep 30 17:35:16 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
30/Sep/2014/15/35/16 PUT


/v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance/fa5dfe09-74f5-4287-9852-d2f1991eebc0-1

HTTP/1.0 201 - -

Best regards
Slawek Kaplonski

On 2014-09-30 17:03, Kuo Hugo wrote:

Hi ,

Could you please post the log of related requests in Swift's log
???

Thanks // Hugo

2014-09-30 22:20 GMT+08:00 Sławek Kapłoński
sla...@kaplonski.pl:

Hello,

I'm using openstack havana release and glance with swift backend.
Today I found that I have problem when I create image with url in
--copy-from when image is bigger than my
swift_store_large_object_size because then glance is trying to
split image to chunks with size given in
swift_store_large_object_chunk_size and when try to upload first
chunk to swift I have error:

2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error during chunked upload to backend, deleting stale chunks
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback (most recent call last):
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File /usr/lib/python2.7/dist-packages/glance/store/swift.py, line 384, in add
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift     content_length=content_length)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1234, in put_object
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift     response_dict=response_dict)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1143, in _retry
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift     reset_func(func, *args, **kwargs)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1215, in _default_reset
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift     % (container, obj))
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift ClientException: put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no ability to reset contents for reupload.
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed to add object to Swift. Got error from Swift: put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no ability to reset contents for reupload.
2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-] Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py, line 101, in upload_data_to_store
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils     store)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File /usr/lib/python2.7/dist-packages/glance/store/__init__.py, line 333, in store_add_to_backend
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File /usr/lib/python2.7/dist-packages/glance/store/swift.py, line 447, in add
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils     raise glance.store.BackendException(msg)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils BackendException: Failed to add object to Swift. Got error from Swift: put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no ability to reset contents for reupload.

Has anyone of you got the same error and knows what the solution is?
I was searching for it on Google but I have not found anything that
could solve my problem.

--
Best regards

[openstack-dev] [Nova] [Cinder] [Sahara] Juno RC1 available

2014-10-01 Thread Thierry Carrez
Hello everyone,

Nova, Cinder, and Sahara just published their first release candidate
for the upcoming 2014.2 (Juno) release.

The RC1 tarballs are available for download at:
https://launchpad.net/nova/juno/juno-rc1
https://launchpad.net/cinder/juno/juno-rc1
https://launchpad.net/sahara/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as the 2014.2
final version on October 16. You are therefore strongly encouraged to
test and validate these tarballs!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/nova/tree/proposed/juno
https://github.com/openstack/cinder/tree/proposed/juno
https://github.com/openstack/sahara/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug
https://bugs.launchpad.net/cinder/+filebug
https://bugs.launchpad.net/sahara/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the master branches of Nova, Cinder, and Sahara are now open for
Kilo development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread loy wolfe
Hi Joe and Cellers,

I've tried to understand relationship between Cell and Cascading. If
Cell has been designed as below, would it be the same as Cascading?

1) Besides Nova, Neutron/Ceilometer.. is also hierarchically
structured for scalability.

2) Child-parent interaction is based on REST OS-API, but not internal
rpc message.

By my understanding, core idea of Cascading is that each resource
building block(like child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block, is that right?

So, what's the OAM and business value? Is it easy to add a building
block POD into the running production cloud, while this POD is from a
different Openstack packager and has its own deployment choice:
Openstack version release(J/K/L...), MQ/DB type(mysql/pg,
rabbitmq/zeromq..), backend drivers, Nova/Neutron/Cinder/Ceilometer
controller-node / api-server config options...?

Best Regards
Loy


On Wed, Oct 1, 2014 at 3:19 PM, Tom Fifield t...@openstack.org wrote:

 Hi Joe,

 On 01/10/14 09:10, joehuang wrote:
  OpenStack cascading: to integrate multi-site / multi-vendor OpenStack 
  instances into one cloud with OpenStack API exposed.
  Cells: a single OpenStack instance scale up methodology

 Just to let you know - there are actually some users out there that use
 cells to integrate multi-site / multi-vendor OpenStack instances into
 one cloud with OpenStack API exposed., and this is their main reason
 for using cells - not as a scale up methodology.


 Regards,

 Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-10-01 Thread Steven Hardy
On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:
 What is keeping us from dropping the (scoped) token duration to 5 minutes?
 
 
 If we could keep their lifetime as short as network skew lets us, we would
 be able to:
 
 Get rid of revocation checking.
 Get rid of persisted tokens.
 
 OK,  so that assumes we can move back to PKI tokens, but we're working on
 that.
 
 What are the uses that require long lived tokens?  Can they be replaced with
 a better mechanism for long term delegation (OAuth or Keystone trusts) as
 Heat has done?

FWIW I think you're misrepresenting Heat's usage of Trusts here - 2 minute
tokens will break Heat just as much as any other service:

https://bugs.launchpad.net/heat/+bug/1306294

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045585.html


Summary:

- Heat uses the request token to process requests (e.g stack create), which
  may take an arbitrary amount of time (default timeout one hour).

- Some use-cases demand timeout of more than one hour (specifically big
  TripleO deployments), heat breaks in these situations atm, folks are
  working around it by using long (several hour) token expiry times.

- Trusts are only used for asynchronous signalling, e.g. Ceilometer signals
  Heat, and we switch to a trust-scoped token to process the response to the
  alarm (e.g. launch more instances on behalf of the user for autoscaling)

My understanding, ref notes in that bug, is that using Trusts while
servicing a request to effectively circumvent token expiry was not legit
(or at least yukky and to be avoided).  If you think otherwise then please
let me know, as that would be the simplest way to fix the bug above (switch
to a trust token while doing the long-running create operation).

Trusts is not really ideal for this use-case anyway, as it requires the
service to have knowledge of the roles to delegate (or that the user
provides a pre-created trust), ref bug #1366133.  I suppose we could just
delegate all the roles we find in the request scope and be done with it,
given that bug has been wontfixed.
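
For reference, the trusts flow I'm describing looks roughly like the sketch
below with python-keystoneclient. This is illustrative only - parameter and
attribute names are from memory and should be checked against the client docs;
AUTH_URL, SERVICE_USER_ID and PROJECT_ID are placeholders:

from keystoneclient.v3 import client as ks_client

# 1) The user (trustor) creates a trust delegating roles to the service
#    user (trustee).
user_ks = ks_client.Client(auth_url=AUTH_URL, username='demo',
                           password='secret', project_name='demo')
trust = user_ks.trusts.create(trustor_user=user_ks.user_id,
                              trustee_user=SERVICE_USER_ID,
                              project=PROJECT_ID,
                              role_names=['Member'],
                              impersonation=True)

# 2) Later the service authenticates with its own credentials plus the
#    trust_id to get a (short-lived) trust-scoped token on the user's behalf.
svc_ks = ks_client.Client(auth_url=AUTH_URL, username='heat',
                          password='service-secret', trust_id=trust.id)
token = svc_ks.auth_token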

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Pierre Padrixe
+1

2014-10-01 8:50 GMT+02:00 Noorul Islam K M noo...@noorul.com:

 Adrian Otto adrian.o...@rackspace.com writes:

  Solum Core Reviewer Team,
 
  I propose the following change to our core reviewer group:
 
  -lifeless (Robert Collins) [inactive]
  +murali-allada (Murali Allada)
  +james-li (James Li)
 
  Please let me know your votes (+1, 0, or -1).
 

 +1

 Regards,
 Noorul


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello, Tom, 

Thanks for your mail mentioning that some users out there use
cells to "integrate multi-site / multi-vendor OpenStack instances into
one cloud with OpenStack API exposed".

Why do I think Cells uses a scale-up methodology?

1, Use case 1: All cells use shared Cinder, Neutron, and Glance, as John Garbutt 
mentioned in the mail. In this use case, one Cinder, Neutron, Glance instance 
scales up for multiple sites; there is no multi-vendor Cinder, Neutron, Glance, although 
each can integrate different vendors' drivers/agents/plugins. This use case has 
a unified north-bound OpenStack API.

2, Use case 2: each child cell has a separate Cinder and Nova-Network. For this 
use case, there is no unified north-bound OpenStack API, and there are multiple endpoints for the upper layer.

3, Until now, only RPC is used for inter-cell or api-cell communication. For 
multi-data-center deployment, this leads to the risk of losing management: if the 
parent cell fails, no API or CLI is available to manage the child cells. The RPC 
message interface is impossible to use to manage child cells.

4. OK, suppose that Cells adds a new driver to use a REST API for inter-cell 
or api-cell communication; should the same then be done for Cinder/Neutron/Glance too, 
or should they still use shared Cinder/Neutron/Glance? If it's the first choice, it 
is the same design as the OpenStack cascading solution wants to do. If it's still 
using shared services, the RPC messaging across different data centers still 
exists, and it's still a scale-up methodology.

Best Regards

Chaoyi Huang ( joehuang )

From: Tom Fifield [t...@openstack.org]
Sent: 01 October 2014 15:19
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Joe,

On 01/10/14 09:10, joehuang wrote:
 OpenStack cascading: to integrate multi-site / multi-vendor OpenStack 
 instances into one cloud with OpenStack API exposed.
 Cells: a single OpenStack instance scale up methodology

Just to let you know - there are actually some users out there that use
cells to integrate multi-site / multi-vendor OpenStack instances into
one cloud with OpenStack API exposed., and this is their main reason
for using cells - not as a scale up methodology.


Regards,

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Weekly Octavia meeting (2014-10-01)

2014-10-01 Thread Stephen Balukoff
Hi Folks!

Since I had to miss last week's meeting due to a last minute emergency, and
since it seems there was significant confusion over some of the items I had
added to last week's agenda, this week's agenda will actually be
essentially the same thing as last week's. Here's what we've got on tap:


   - Review progress on action items from last week
   - From blogan: Neutron lbaas v1 and v2 right now creates a neutron port
   before passing any control to the driver, we need to decide how Octavia is
   going to handle that
   - Discuss any outstanding blockers
   - Review status on outstanding gerrit reviews
   - Review list of blueprints, assign people to specific blueprints and/or
   tasks



Please feel free to add additional agenda items here:
https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Agenda

If you've been working on Octavia, please update our standup etherpad:
https://etherpad.openstack.org/p/octavia-weekly-standup

Beyond that, this is your friendly reminder that the Octavia team meets on
Wednesdays at 20:00UTC in #openstack-meeting-alt

Thanks,
Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Requirements freeze exception for testscenarios, oslotest, psycopg2, MySQL-python

2014-10-01 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 Also, depfreeze seems to apply to openstack/requirements repository
 only [1], and projects are open to consume new dependencies from there.
 
 [1]: https://wiki.openstack.org/wiki/DepFreeze

Yes, you're right.
Nothing prevents this patch from merging at this point.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello, Loy,

Thank you very much. You have already grasped the core design idea for 
OpenStack cascading:

By my understanding, core idea of Cascading is that each resource
building block(like child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block, is that right?

Yes, you are right. The cascading OpenStack (the parent) uses the already defined 
REST OS-API as the NB integration for each building block
(which we call the cascaded OpenStack).

So, what's the OAM and business value? Is it easy to add a building
block POD into the running production cloud, while this POD is from a
different Openstack packager and has its own deployment choice:
Openstack version release(J/K/L...), MQ/DB type(mysql/pg,
rabbitmq/zeromq..), backend drivers, Nova/Neutron/Cinder/Ceilometer
controller-node / api-server config options...?

In the lab, we have already dynamically added a new building block POD (cascaded OpenStack)
into the cloud with OpenStack cascading introduced. Each cascaded OpenStack
version can be different, because we use the python clients and OpenStack itself
supports multiple co-existing API versions. Of course the DB/message bus/backend
drivers/controller node configuration can be different for each cascaded OpenStack.
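
To give a feel for it, driving a cascaded POD from the cascading layer is
conceptually just using the normal python clients against that POD's public
endpoint, roughly like the sketch below (illustrative only, not the actual
Tricircle code; the endpoint, IMAGE_ID and FLAVOR_ID are placeholders):

from novaclient.v1_1 import client as nova_client

# Each cascaded OpenStack is reached through its own Keystone/Nova endpoints.
pod1 = nova_client.Client('admin', 'password', 'admin',
                          'http://cascaded-pod1:5000/v2.0')

# The cascading layer proxies a boot request to the POD via the same REST
# API any end user would call.
server = pod1.servers.create(name='vm1', image=IMAGE_ID, flavor=FLAVOR_ID)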

Best regards

Chaoyi Huang ( joehuang )


From: loy wolfe [loywo...@gmail.com]
Sent: 01 October 2014 16:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

Hi Joe and Cellers,

I've tried to understand relationship between Cell and Cascading. If
Cell has been designed as below, would it be the same as Cascading?

1) Besides Nova, Neutron/Ceilometer.. is also hierarchically
structured for scalability.

2) Child-parent interaction is based on REST OS-API, but not internal
rpc message.

By my understanding, core idea of Cascading is that each resource
building block(like child cell) is a clearly separated autonomous
system, with the already defined REST OS-API as the NB integration
interface of each block, is that right?

So, what's the OAM and business value? Is it easy to add a building
block POD into the running production cloud, while this POD is from a
different Openstack packager and has its own deployment choice:
Openstack version release(J/K/L...), MQ/DB type(mysql/pg,
rabbitmq/zeromq..), backend drivers, Nova/Neutron/Cinder/Ceilometer
controller-node / api-server config options...?

Best Regards
Loy


On Wed, Oct 1, 2014 at 3:19 PM, Tom Fifield t...@openstack.org wrote:

 Hi Joe,

 On 01/10/14 09:10, joehuang wrote:
  OpenStack cascading: to integrate multi-site / multi-vendor OpenStack 
  instances into one cloud with OpenStack API exposed.
  Cells: a single OpenStack instance scale up methodology

 Just to let you know - there are actually some users out there that use
 cells to integrate multi-site / multi-vendor OpenStack instances into
 one cloud with OpenStack API exposed., and this is their main reason
 for using cells - not as a scale up methodology.


 Regards,

 Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] minimum python support version for juno

2014-10-01 Thread Ihar Hrachyshka

On 01/10/14 09:08, Osanai, Hisashi wrote:
 
 Hi,
 
 I would like to know the minimum python support version for juno. I
 checked the following memo. My understanding is python 2.6 support
 will be supported in juno and also dropped before kilo so it will
 be dropped in one of stable releases in juno. Is this correct
 understanding?

All stable Juno releases will support Python 2.6. All Kilo releases
are expected to drop Python 2.6 support.

 
 https://etherpad.openstack.org/p/juno-cross-project-future-of-python

  Want to drop 2.6 ASAP, currently blocked on SLES confirmation that
 2.6 is no longer needed Declare intent that it will definitely go
 away by K (for services) Make sure that every *python module*
 (dependencies, and not only core projects) that we maintain declare
 non-support in 2.6 if they stop supporting it
 
 Cheers, Hisashi Osanai
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] VLAN trunking to VM - justification/use cases

2014-10-01 Thread Calum Loudon
Hello all

I took the action on last week's call to explain why the VLAN trunking to VM bp
is a relevant use case for NFV - here's my take.

The big picture is that this is about how service providers can use 
virtualisation to provide differentiated network services to their customers 
(and specifically enterprise customers rather than end users); it's not about 
VMs wanting to set up networking between themselves.

A typical service provider may be providing network services to thousands or 
more of enterprise customers.  The details of and configuration required for 
individual services will differ from customer to customer.  For example, 
consider a Session Border Control service (basically, policing VoIP 
interconnect): different customers will have different sets of SIP trunks that 
they can connect to, different traffic shaping requirements, different 
transcoding rules etc.

Those customers will normally connect in to the service provider in one of two 
ways: a dedicated physical link, or through a VPN over the public Internet.  
Once that traffic reaches the edge of the SP's network, then it makes sense for 
the SP to put all that traffic onto the same core network while keeping some 
form of separation to allow the network services to identify the source of the 
traffic and treat it independently.  There are various overlay techniques that 
can be used (e.g. VXLAN, GRE tunnelling) but one common and simple one is VLANs.
Carrying VLAN trunking into the VM allows this scheme to continue to be used in 
a virtual world.

In this set-up, then any VMs implementing those services have to be able to 
differentiate between customers.  About the only way of doing that today in 
OpenStack is to configure one provider network per customer then have one vNIC 
per provider network, but that approach clearly doesn't scale (both performance 
and configuration effort) if a VM has to see traffic from hundreds or thousands 
of customers.  Instead, carrying VLAN trunking into the VM allows them to do 
this scalably.

The net is that a VM providing a service that needs to have access to a 
customer's non-NATed source addresses needs an overlay technology to allow this,
and VLAN trunking into the VM is sufficiently scalable for this use case and 
leverages a common approach.

Calum


Calum Loudon 
Director, Architecture
+44 (0)208 366 1177
 
METASWITCH NETWORKS 
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-10-01 Thread Xu Han Peng

ip_version sounds great.

Currently the opt-names are written into the configuration file of 
dnsmasq directly. So I would say yes, they are coming from dnsmasq 
definitions.


It will make more sense that when ip_version is missing or null, the option 
applies to both, since we could have only an IPv6 or IPv4 address on the port. 
However, the validation of opt-value should rule out the ones which are 
not suitable for the current address. For example, an IPv6 DNS server 
should not be specified for a port with only an IPv4 address, etc...
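
For clarity, a port body under that suggestion might look roughly like this
(illustrative only - whether the key is called ip_version or version is
exactly what we are discussing):

"extra_dhcp_opts": [
    {"opt_name": "bootfile-name", "opt_value": "testfile.1"},
    {"opt_name": "tftp-server", "opt_value": "123.123.123.123", "ip_version": 4},
    {"opt_name": "dns-server", "opt_value": "[2001:0200:feed:7ac0::1]", "ip_version": 6}
]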


Xu Han

On 09/30/2014 08:41 PM, Robert Li (baoli) wrote:

Xu Han,

That looks good to me. To keep it consistent with existing CLI, we 
should use ip-version instead of 'version'. It seems to be identical 
to prefixing the option_name with v4 or v6, though.


Just to clarify, are the available opt-names coming from dnsmasq 
definitions?


With regard to the default, your suggestion *version is optional (no 
version means version=4).* seems to be different from Mark's:



I'm -1 for both options because neither is properly backwards
compatible.  Instead we should add an optional 3rd value to the
dictionary: version.  The version key would be used to make
the option only apply to either version 4 or 6. *If the key is
missing or null, then the option would apply to both*. 




Thanks,
Robert

On 9/30/14, 1:46 AM, Xu Han Peng pengxu...@gmail.com wrote:


Robert,

I think the CLI will look something like based on Mark's suggestion:

neutron port-create extra_dhcp_opts
opt_name=dhcp_option_name,opt_value=value,version=4(or 6)
network

This extra_dhcp_opts can be repeated and version is optional (no
version means version=4).

Xu Han

On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:

Hi Xu Han,

My question is how the CLI user interface would look like to
distinguish between v4 and v6 dhcp options?

Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng pengxu...@gmail.com wrote:

Mark's suggestion works for me as well. If no one objects, I
am going to start the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:


On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:


Currently the extra_dhcp_opts has the following API
interface on a port:

{
    "port":
    {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of DHCPv6 function for IPv6 subnets,
we found this format doesn't work anymore because an port
can have both IPv4 and IPv6 address. So we need to find a
new way to specify extra_dhcp_opts for DHCPv4 and DHCPv6,
respectively. (
https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a
prefix (v4 or v6) so we can distinguish opts for v4 or v6
by parsing the opt_name. For backward compatibility, no
prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
    {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
    {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
    {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6
opts. For backward compatibility, both old format and new
format are acceptable, but old format means IPv4 dhcp opts.

"extra_dhcp_opts": {
    "ipv4": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
    ],
    "ipv6": [
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
    ]
}

The pro of Option1 is that there is no need to change the API
structure; we only need to add validation and parsing of
opt_name. The con of Option1 is that the user needs to input a
prefix for every opt_name, which can be error prone. The pro
of Option2 is that it's clearer than Option1. The con is
that we need to check two formats for backward compatibility.

We discussed this in IPv6 sub-team meeting and we think
Option2 is preferred. Can I also get community's feedback
on which one is preferred or any other comments?




[openstack-dev] Taskflow for Juno RC1 effectively require Kombu 3.x

2014-10-01 Thread Thomas Goirand
Hi,

When building the latest release (eg: Juno RC1) of Taskflow 0.4, needed
by Cinder, I've noticed failures because it is impossible to do:

from kombu import message

More in details, the failure is:

==
FAIL:
unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test_dispatcher
unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test_dispatcher
--
_StringException: Traceback (most recent call last):
ImportError: Failed to import test module:
taskflow.tests.unit.worker_based.test_dispatcher
Traceback (most recent call last):
  File /usr/lib/python2.7/unittest/loader.py, line 252, in _find_tests
    module = self._get_module_from_name(name)
  File /usr/lib/python2.7/unittest/loader.py, line 230, in _get_module_from_name
    __import__(name)
  File taskflow/tests/unit/worker_based/test_dispatcher.py, line 17, in <module>
    from kombu import message
ImportError: cannot import name message

The thing is, there's no message.py in the latest Kombu 2.x; it only
appears in version 3.0. Though in our global-requirements.txt, we only
have kombu>=2.5.0, which IMO is just completely wrong, considering what
Taskflow does.

Changing the requirement to be kombu>=3.0 means that we also need to
import new dependencies, as kombu 3.x needs python-beanstalkc.

So here, we have 2 choices:

1/ Fix Taskflow so that it really supports Kombu 2.5, as per our decided
Juno requirements.

2/ Accept beanstalkc and kombu>=3.0, modify our global-requirements.txt
and add these 2.

Since Ubuntu is already in a deep freeze, 2/ probably isn't a very good
solution. Also, python-beanstalkc fails to build in Wheezy (when doing
its doc tests). I haven't investigated much why (yet), but that's annoying.

On my test system (eg: a cowbuilder chroot), I have just added a Debian
patch to completely remove
taskflow/tests/unit/worker_based/test_dispatcher.py from taskflow, and
everything works again (eg: no unit test errors). This is maybe a bit
more drastic than what we could do, probably... :)

Joshua, I've CC-ed you because git blame told me that you were the
person writing these tests. Could you patch it quickly (eg: before the
final release of Juno) so that it works with the older Kombu?

Thoughts anyone?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS site to site connection down.

2014-10-01 Thread Paul Michali (pcm)
See in-line @PCM


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Sep 30, 2014, at 11:31 PM, masoom alam masoom.a...@gmail.com wrote:

 Hi Paul, 
 
 Apologies for late response. I was having throat infection. 
 
 
 
 
 Can you show the ipsec-site-connection-create command used on each end?
 
 
 neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id 
 myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 --peer-address 
 public address --peer-id q-router-ip --peer-cidr 10.2.0.0/24 --psk secret
 
 In the above command: --peer-address is the public ip of the node having 
 devstack setup -- you can call it devstack West
 --peer-id: we are giving the ip of the q-router
 
 Make sense?
 
  
 Can you show the topology with IP addresses used (and indicate how the two 
 clouds are connected)?
 Are you using devstack? Two physical nodes? How are they interconnected?
 
 We are using exactly the same topology as shown below even the floating ip 
 addresses are same one mentioned below. However, Our Internet gateway is a 
 public ip. Similarly, other Internet GW is also a public ip. 

@PCM So is the public IP for the router (172.24.4.226) an address on the 
Internet?  In the example, IIRC, the quantum router has an IP on the public 
network, and the GW IP is also on the same network (172.24.4.225). I think the 
latter is assigned to the external bridge on the host (br-ex). Is that what 
you have?


 
   (10.1.0.0/24 - DevStack East)
   |
   |  10.1.0.1
  [Quantum Router]
   |  172.24.4.226
   |
   |  172.24.4.225
  [Internet GW]
   |  
   |
  [Internet GW]
   | 172.24.4.232
   |
   | 172.24.4.233
  [Quantum Router]
   |  10.2.0.1
   |
  (10.2.0.0/24 DevStack West)
 
 
 
 First thing would be to ensure that you can ping from one host to another 
 over the public IPs involved. You can then go to the namespace of the router 
 and see if you can ping the public I/F of the other end’s router.
 
 We can ping anything from the host having the devstack setup, for example 
 google.com, but not the GW of the other host.

@PCM Are you saying that the host for devstack East can ping on the Internet, 
but cannot ping the GW IP of the other Devstack setup (also on the internet)?

I guess I need to understand what the “GW” actually is, in your setup. For the 
example given, it is the host’s br-ex interface and is on the same subnet as 
the router’s public interface.


 However, we cannot ping from within the CirrOS instance. I have run the 
 traceroute command and we reach 172.24.4.225 but not beyond this 
 point.

@PCM By 172.24.4.225 do you mean the Internet IP for the br-ex interface on the 
local host? The cirrus VM, irrespective of VPN, should be able to ping the 
router’s public IP, the gateway IP and the far end public IPs. I’m struggling 
to understand what you have setup. Is the internet GW just the br-ex or some 
external router?

Sounds like you have some connectivity issues outside of VPN.  From the Cirros 
VM you should be able to ping everything, except the Cirros VMs on the other 
side.


 BTW we did some other experiments as well. For example, when we tried to 
 explicitly link our br-ex (172.24.4.225) with eth0 (Internet GW), the machine got 
 corrupted. The same happens if we do a hard reboot: Neutron gets corrupted :)

@PCM This seems to be the point of confusion. On the example, br-ex would have 
an IP on the public network. Sounds like that is not the case here. The br-ex 
would have a port that is the interface that is actually 
public network. For example, I may have eth1 on my system added to br-ex, and 
eth1 would be connected to a switch that connects this to the other node (in a 
simple lab environment).

Not sure I understand what you mean by “machine got corrupted” and “Neutron 
gets corrupted”. Can you elaborate?

When I set up this in a lab, I add the interface to br-ex and then I stack. In 
the localrc, the interface is specified, along with br-ex.


  
 
 You can look at the screen-q-vpn.log (assuming devstack used) to see if any 
 errors during setup.
 
 Note: When I stack, I turn off neutron security groups and then set nova 
 security groups to allow SSH and ICMP. I imagine the alternative would be to 
 setup neutron security groups to allow these two protocols.

@PCM What are you doing for security groups? I disable Neutron security groups 
and have set Nova to allow ICMP and SSH. I think you can instead, do:

LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
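
(And for the nova security group route, the equivalent would be roughly the
following, assuming the default security group:)

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0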

HTHs,

PCM


 
 I didn’t quite follow what you meant by Please note that my two devstack 
 nodes are on different public addresses, so scenario is a little different 
 

Re: [openstack-dev] Taskflow for Juno RC1 effectively require Kombu 3.x

2014-10-01 Thread Ihar Hrachyshka

On 01/10/14 12:55, Thomas Goirand wrote:
 Hi,
 
 When building the latest release (eg: Juno RC1) of Taskflow 0.4,
 needed by Cinder, I've notice failures due to the impossibility to
 do:
 
 from kombu import message
 
 More in details, the failure is:
 
 ==

 
FAIL:
 unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test_dispatcher

 
unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test_dispatcher
 --

 
_StringException: Traceback (most recent call last):
 ImportError: Failed to import test module: 
 taskflow.tests.unit.worker_based.test_dispatcher Traceback (most
 recent call last): File /usr/lib/python2.7/unittest/loader.py,
 line 252, in _find_tests module = self._get_module_from_name(name) 
 File /usr/lib/python2.7/unittest/loader.py, line 230, in 
 _get_module_from_name __import__(name) File
 taskflow/tests/unit/worker_based/test_dispatcher.py, line 17, in
 module from kombu import message ImportError: cannot import name
 message

Does it show up in unit tests only?

 
 The thing is, there's no message.py in the latest Kombu 2.x, this 
 appears in version 3.0. Though in our global-requirements.txt, we
 only have kombu=2.5.0, which IMO is just completely wrong,
 considering what Taskflow does in .
 
 Changing the requirement to be kombu=3.0 means that we also need
 to import new dependencies, as kombu 3.x needs python-beanstalkc.
 
 So here, we have 2 choices:
 
 1/ Fix Taskflow so that it really supports Kombu 2.5, as per our
 decided Juno requirements.

Should be doable.

 
 2/ Accept beanstalkc and kombu=3.0, modify our
 global-requirements.txt and add these 2.

This will be a major pain point for both upstream and downstream.
Let's stick to the first option. I don't see why we should bump the
version unless there is no other way from it.

 
 Since Ubuntu is already in a deep freeze, probably 2/ isn't a very
 good solution. Also, python-beanstalkc fails to build in Wheezy
 (when doing its doc tests). I didn't investigate a lot why (yet),
 but that's annoying.
 
 On my test system (eg: a cowbuilder chroot), I have just added a
 Debian patch to completely remove 
 taskflow/tests/unit/worker_based/test_dispatcher.py from taskflow,
 and everything works again (eg: no unit test errors). This is maybe
 a bit more drastic than what we could do, probably... :)
 
 Joshua, I've CC-ed you because git blame told me that you were the 
 person writing these tests. Could you patch it quickly (eg: before
 the final release of Juno) so that it works with the older Kombu?
 
 Thoughts anyone?
 
 Cheers,
 
 Thomas Goirand (zigo)
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] Release report

2014-10-01 Thread Ladislav Smola

1. os-apply-config: release: 0.1.22 -- 0.1.23
--https://pypi.python.org/pypi/os-apply-config/0.1.23
--
http://tarballs.openstack.org/os-apply-config/os-apply-config-0.1.23.tar.gz

2. os-refresh-config:   release: 0.1.7 -- 0.1.8
--https://pypi.python.org/pypi/os-refresh-config/0.1.8
--
http://tarballs.openstack.org/os-refresh-config/os-refresh-config-0.1.8.tar.gz

3. os-collect-config:   no changes, 0.1.28

4. os-cloud-config: release: 0.1.10 -- 0.1.11
--https://pypi.python.org/pypi/os-cloud-config/0.1.11
--
http://tarballs.openstack.org/os-cloud-config/os-cloud-config-0.1.11.tar.gz

5. diskimage-builder:   release: 0.1.31 -- 0.1.32
--https://pypi.python.org/pypi/diskimage-builder/0.1.32
--
http://tarballs.openstack.org/diskimage-builder/diskimage-builder-0.1.32.tar.gz

6. dib-utils:   release: 0.0.6 -- 0.0.7
--https://pypi.python.org/pypi/dib-utils/0.0.7
--http://tarballs.openstack.org/dib-utils/dib-utils-0.0.7.tar.gz

7. tripleo-heat-templates:  release: 0.7.7 -- 0.7.8
--https://pypi.python.org/pypi/tripleo-heat-templates/0.7.8
--
http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-0.7.8.tar.gz

8: tripleo-image-elements:  release: 0.8.7 -- 0.8.8
--https://pypi.python.org/pypi/tripleo-image-elements/0.8.8
--
http://tarballs.openstack.org/tripleo-image-elements/tripleo-image-elements-0.8.8.tar.gz

9: tuskar:  release 0.4.12 -- 0.4.13
--https://pypi.python.org/pypi/tuskar/0.4.13
--http://tarballs.openstack.org/tuskar/tuskar-0.4.13.tar.gz

10. python-tuskarclient:release 0.1.12 -- 0.1.13
--https://pypi.python.org/pypi/python-tuskarclient/0.1.13
--
http://tarballs.openstack.org/python-tuskarclient/python-tuskarclient-0.1.13.tar.gz


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS site to site connection down.

2014-10-01 Thread masoom alam
In line. Thanks for the response.



 @PCM So is the public IP for the router (172.24.4.226) an internet on the
Internet?  In the example, IIRC, the quantum router has an IP on the public
network, and the GW IP is also on the same network (172.24.4.225). I think
the latter, is assigned to the external bridge on the host (br-ex). Is that
what you have?


@MA: both these IPs are provider network IPs. Think of these IPs as
data center internal IPs. Thus the host system is also acting as a router in
some way. br-ex should be connected to our eth0 (which has a public IP). By
giving the following command, our system gets corrupted:
sudo ovs-vsctl add-port br-ex eth0
Even when I try to set eth0 as the default gateway, the system is corrupted. We
noticed that IP forwarding was not enabled, so we enabled it, but to no avail.






   (10.1.0.0/24 - DevStack East)
   |
   |  10.1.0.1
  [Quantum Router]
   |  172.24.4.226
   |
   |  172.24.4.225
  [Internet GW]
   |
   |
  [Internet GW]
   | 172.24.4.232
   |
   | 172.24.4.233
  [Quantum Router]
   |  10.2.0.1
   |
  (10.2.0.0/24 DevStack West)



 First thing would be to ensure that you can ping from one host to
another over the public IPs involved. You can then go to the namespace of
the router and see if you can ping the public I/F of the other end’s router.


 We can ping anything on the host having devstack setup for example
google.com, but GW of the other host.


 @PCM Are you saying that the host for devstack East can ping on the
Internet, but cannot ping the GW IP of the other Devstack setup (also on
the internet)?

 I guess I need to understand what the “GW” actually is, in your setup.
For the example given, it is the host’s br-ex interface and is on the same
subnet as the router’s public interface.


 However, we cannot ping from within the CirrOS instance. I have run the
traceroute command and we are reaching till 172.24.4.225 but not beyond
this point.


 @PCM By 172.24.4.225 do you mean the Internet IP for the br-ex interface
on the local host? The cirrus VM, irrespective of VPN, should be able to
ping the router’s public IP, the gateway IP and the far end public IPs. I’m
struggling to understand what you have setup. Is the internet GW just the
br-ex or some external router?


 Should like you have some connectivity issues outside of VPN.  From the
Cirros VM should should be able to ping everything, except the Cirros VMs
on the other side.


 BTW we did some other experiments as well. For example, when we tried to
explicitly link our br-ex (172.24.4.225) with eth0 (Internet GW), machine
got corrupted. Same is the issue if we do a hard reboot, Neutron gets
corrupted :)


 @PCM This seems to be the point of confusion. On the example, br-ex would
have an IP on the public network. Sounds like that is not the case here.
The br-ex would have, a port that is the interface that is actually
connected to the public network. For example, I may have eth1 on my system
added to br-ex, and eth1 would be connected to a switch that connects this
to the other node (in a simple lab environment).

 Not sure I understand what you mean by “machine got corrupted” and
“Neutron gets corrupted”. Can you elaborate?

 When I set up this in a lab, I add the interface to br-ex and then I
stack. In the localrc, the interface is specified, along with br-ex.





 You can look at the screen-q-vpn.log (assuming devstack used) to see if
any errors during setup.

 Note: When I stack, I turn off neutron security groups and then set
nova security groups to allow SSH and ICMP. I imagine the alternative would
be to setup neutron security groups to allow these two protocols.


 @PCM What are you doing for security groups? I disable Neutron security
groups and have set Nova to allow ICMP and SSH. I think you can instead, do:

 LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

 HTHs,

 PCM



 I didn’t quite follow what you meant by Please note that my two
devstack nodes are on different public addresses, so scenario is a little
different than the one described here:
https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall”. Can you
elaborate (showing the commands and topology will help)?

 Germy,

 I have created this BP during Juno (unfortunately no progress on it
however), regarding being able to see more status information for
troubleshooting:
https://blueprints.launchpad.net/neutron/+spec/l3-svcs-vendor-status-report

 It was targeted for vendor implementations, but would include reference
implementation status too. Right now, if a VPN connection negotiation
fails, there’s no indication of what went wrong.

 Regards,


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Sep 29, 

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hello,  Alex,

Thank you very much for your mail about remote cluster hypervisor.

One of the inspirations for OpenStack cascading is the remote clustered 
hypervisor, like vCenter. The difference between the remote clustered hypervisor 
and OpenStack cascading is that not only Nova is involved in the cascading, but 
also Cinder, Neutron, Ceilometer, and even Glance (optional).

Please refer to 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration,
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for 
more detail information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

This sounds related to the discussion on the 'Nova clustered hypervisor driver' 
which started at Juno design summit [1]. Talking to another OpenStack should be 
similar to talking to vCenter. The idea was that the Cells support could be 
refactored around this notion as well.
Not sure whether there has been any active progress with this in Juno, though.

Regards,
Alex


[1] http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From:joehuang joehu...@huawei.com
To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:30/09/2014 04:08 PM
Subject:[openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances (as 
different zones), rather than a single monolithic OpenStack instance, for 
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularly from 00's up to a million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud. Instead of a proprietary orchestration layer, they want to use the 
standard OpenStack framework for Northbound API compatibility with Heat/Horizon 
or other 3rd-party ecosystem apps.

We call this pattern OpenStack Cascading, with the proposal described in 
[1][2]. A PoC live demo video can be found at [3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
OpenStack cascading.

Kindly ask for cross program design summit session to discuss OpenStack 
cascading and the contribution to Kilo.

Kindly invite those who are interested in the OpenStack cascading to work 
together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to have a 
discussion as a formal cross program session, because many core programs are 
involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

Best Regards
Chaoyi Huang ( Joe Huang )
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taskflow for Juno RC1 effectively require Kombu 3.x

2014-10-01 Thread Ian Cordasco
There just needs to be a fallback import. In v2.5.0 the Message class
(which is the only item used from kombu.message) was in
kombu.transport.base. Thomas, can you confirm that something like

try:
    from kombu import message
except ImportError:
    from kombu.transport import base as message

Allows the tests to pass?

On 10/1/14, 6:09 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:


On 01/10/14 12:55, Thomas Goirand wrote:
 Hi,
 
 When building the latest release (eg: Juno RC1) of Taskflow 0.4,
 needed by Cinder, I've notice failures due to the impossibility to
 do:
 
 from kombu import message
 
 More in details, the failure is:
 
 ==

 
FAIL:
 
unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test
_dispatcher

 
unittest.loader.ModuleImportFailure.taskflow.tests.unit.worker_based.test_
dispatcher
 --

 
_StringException: Traceback (most recent call last):
 ImportError: Failed to import test module:
 taskflow.tests.unit.worker_based.test_dispatcher Traceback (most
 recent call last): File /usr/lib/python2.7/unittest/loader.py,
 line 252, in _find_tests module = self._get_module_from_name(name)
 File /usr/lib/python2.7/unittest/loader.py, line 230, in
 _get_module_from_name __import__(name) File
 taskflow/tests/unit/worker_based/test_dispatcher.py, line 17, in
 module from kombu import message ImportError: cannot import name
 message

Does it show up in unit tests only?

 
 The thing is, there's no message.py in the latest Kombu 2.x, this
 appears in version 3.0. Though in our global-requirements.txt, we
  only have kombu>=2.5.0, which IMO is just completely wrong,
 considering what Taskflow does in .
 
  Changing the requirement to be kombu>=3.0 means that we also need
 to import new dependencies, as kombu 3.x needs python-beanstalkc.
 
 So here, we have 2 choices:
 
 1/ Fix Taskflow so that it really supports Kombu 2.5, as per our
 decided Juno requirements.

Should be doable.

 
  2/ Accept beanstalkc and kombu>=3.0, modify our
 global-requirements.txt and add these 2.

This will be a major pain point for both upstream and downstream.
Let's stick to the first option. I don't see why we should bump the
version unless there is no other way around it.

 
 Since Ubuntu is already in a deep freeze, probably 2/ isn't a very
 good solution. Also, python-beanstalkc fails to build in Wheezy
  (when doing its doc tests). I haven't investigated much why (yet),
  but that's annoying.
 
 On my test system (eg: a cowbuilder chroot), I have just added a
 Debian patch to completely remove
 taskflow/tests/unit/worker_based/test_dispatcher.py from taskflow,
 and everything works again (eg: no unit test errors). This is maybe
 a bit more drastic than what we could do, probably... :)
 
 Joshua, I've CC-ed you because git blame told me that you were the
 person writing these tests. Could you patch it quickly (eg: before
 the final release of Juno) so that it works with the older Kombu?
 
 Thoughts anyone?
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] concrete proposal on changing the library testing model with devstack-gate

2014-10-01 Thread Sean Dague
An initial WIP patch to show the direction I think this could
materialize - https://review.openstack.org/125346

Early feedback appreciated so I can do this conversion across the board.
My assumption is that all non integrated release projects should be
converted to this model.
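
Roughly, the shape of the change is something like the following sketch
(function and variable names here are illustrative, not necessarily what the
patch ends up using):

# sketch of devstack's library setup logic
# LIBRARIES_FROM_GIT is a comma separated list, e.g. "oslo.messaging,python-novaclient"
function setup_library {
    local name=$1
    if [[ ",${LIBRARIES_FROM_GIT}," =~ ",${name}," ]]; then
        # development mode: use git master of the library
        git_clone_by_name "$name"
        setup_dev_lib "$name"
    else
        # default: install the released version from PyPI, constrained
        # by global-requirements
        pip_install "$name"
    fi
}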

On 09/26/2014 02:27 PM, Sean Dague wrote:
 As we've been talking about the test disaggregation the hamster wheels
 in the back of my brain have been churning on the library testing
 problem I think we currently have. Namely, openstack components mostly
 don't rely on released library versions, they rely on git master of
 them. Right now olso and many clients are open masters (their
 stable/juno versions are out), but we've still not cut RCs on servers.
 So we're actually going to have a time rewind event as we start cutting
 stables.
 
 We did this as a reaction to the fact that library releases were often
 cratering the world. However, I think the current pattern leads us into
 a much more dangerous world where basically the requirements.txt is invalid.
 
 So here is the particular unwind that I think would be useful here:
 
 1) Change setup_library in devstack to be able to either setup the
 library from git or install via pip. This would apply to all libraries
 we are installing from oslo, the python clients, stackforge, etc.
 Provide a mechanism to specify LIBRARIES_FROM_GIT (or something) so that
 you can selectively decide to use libraries from git for development
 purposes.
 
 2) Default devstack to use pip released versions.
 
 3) Change the job definition on the libraries to test against devstack
 in check, not in gate. The library teams can decide if they want their
 forward testing to be voting or not, but this is basically sniff testing
 that when they release a new library they won't ruin the world for
 everyone else.
 
 4) If a ruin the world event happens, figure out how to prevent that
 kind of event in local project testing, unit or functional. Basically an
 unknown contract was broken. We should bring that contract back into the
 project itself, or yell at the consuming project about why they were
 using code in a crazy pants way.
 
 Additionally, I'd like us to consider: No more alpha libraries. The
 moment we've bumped global requirements in projects we've actually
 released these libraries to production, as people are CDing the servers.
 We should just be honest about that and just give things a real version.
 Version numbers are cheap.
 
   -Sean
 


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-10-01 Thread Adam Young

On 10/01/2014 04:14 AM, Steven Hardy wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What is keeping us from dropping the (scoped) token duration to 5 minutes?


If we could keep their lifetime as short as network skew lets us, we would
be able to:

Get rid of revocation checking.
Get rid of persisted tokens.

OK,  so that assumes we can move back to PKI tokens, but we're working on
that.

What are the uses that require long lived tokens?  Can they be replaced with
a better mechanism for long term delegation (OAuth or Keystone trusts) as
Heat has done?

FWIW I think you're misrepresenting Heat's usage of Trusts here - 2 minute
tokens will break Heat just as much as any other service:

https://bugs.launchpad.net/heat/+bug/1306294

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045585.html


Summary:

- Heat uses the request token to process requests (e.g stack create), which
   may take an arbitrary amount of time (default timeout one hour).

- Some use-cases demand timeout of more than one hour (specifically big
   TripleO deployments), heat breaks in these situations atm, folks are
   working around it by using long (several hour) token expiry times.

- Trusts are only used for asynchronous signalling, e.g. Ceilometer signals
   Heat, we switch to a trust scoped token to process the response to the
   alarm (e.g launch more instances on behalf of the user for autoscaling)

My understanding, ref notes in that bug, is that using Trusts while
servicing a request to effectively circumvent token expiry was not legit
(or at least yukky and to be avoided).  If you think otherwise then please
let me know, as that would be the simplest way to fix the bug above (switch
to a trust token while doing the long-running create operation).
Using trusts to circumvent timeout is OK.  There are two issues in 
tension here:


1.  A user needs to be able to maintain control of their own data.

2.  We want to limit the attack surface provided by tokens.

Since tokens are currently blanket access to the user's data, there 
really is no lessening of control by using trusts in a wider context.  
I'd argue that using trusts would actually reduce the capability for 
abuse, if coupled with short-lived tokens. With long-lived tokens, anyone 
can reuse the token.  With a trust, only the trustee would be able to 
create a new token.



Could we start by identifying the set of operations that are currently 
timing out due to the one-hour token duration, and add an optional 
trust id on those operations?
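
For illustration only, a rough sketch of that flow with python-keystoneclient 
(the role list, the IDs, and the plugin choice here are assumptions, not a 
concrete proposal):

from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client

# user_session, user_id, service_user_id, project_id, auth_url and
# service_password are placeholders supplied by the surrounding service code.
ks = client.Client(session=user_session)   # authenticated as the end user

# the user delegates their roles on the project to the service user
trust = ks.trusts.create(trustor_user=user_id,
                         trustee_user=service_user_id,
                         project=project_id,
                         role_names=['Member'],
                         impersonation=True)

# later, the service trades its own credentials plus the trust id for a
# short-lived, trust-scoped token to finish the long-running operation
auth = v3.Password(auth_url=auth_url,
                   username='cinder',
                   password=service_password,
                   user_domain_name='Default',
                   trust_id=trust.id)
service_session = session.Session(auth=auth)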





Trusts is not really ideal for this use-case anyway, as it requires the
service to have knowledge of the roles to delegate (or that the user
provides a pre-created trust), ref bug #1366133.  I suppose we could just
delegate all the roles we find in the request scope and be done with it,
given that bug has been wontfixed.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Post job failures

2014-10-01 Thread Matthew Treinish
Hi Josh,

Just a heads up that you shouldn't use this list for any discussion. We've moved
all of the discussion off this list into openstack-dev. The only reason we
haven't removed the openstack-qa list is so we have a separate address for the
periodic job results. (which honestly hasn't been the most effective approach
for handling those jobs)

On Wed, Oct 01, 2014 at 07:39:44PM +1000, Joshua Hesketh wrote:
 Hello QA team,
 
  When a job fails in the post queue (which has jobs that are triggered on a
  change being merged) no warning or failure message is sent anywhere, so it
  fails silently. This has caused an issue in the past[0] and there are
  likely more cases we don't know about.
 
 We should report failures somewhere but since post jobs don't come from
 gerrit they can't be reported back to gerrit trivially. And even if we could
 it would be a comment on a closed change.

So I actually think as a first pass this would be the best way to handle it. You
can leave comments on closed gerrit changes; it would still generate the same
notifications for people who have that enabled. It would also be picked up in
the CI results table at the top, which I think might be convenient.

Long term I'm thinking we might need to make a separate dashboard view for all
of these jobs so we can track the results over time. I don't think instantaneous
reporting is actually important for post or periodic jobs, because if it were
they'd be running in check or gate. Back in the days when there was a
single Jenkins, the Jenkins dashboard could be used for this to a certain extent,
which was useful. 

 
 My feeling is an easy solution is to email somewhere when a post job fails.
 However I'm not sure where might be an appropriate location for that. Would
 this mailing list, for example, be a good place to start and then see how we
 go?

I really don't think this is the right approach. The issue is that most of these
things are project-specific failures, and you'd either be spamming everyone 
that it failed or a small set of people who aren't interested. I also feel that we
run the post jobs far too frequently to have the results sent to any ML.
 
 I've set up the change here: https://review.openstack.org/#/c/125298/
 
 Cheers,
 Josh
 
 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046481.html
 

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS site to site connection down.

2014-10-01 Thread Paul Michali (pcm)
See inline at #PCM


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Oct 1, 2014, at 8:35 AM, masoom alam masoom.a...@gmail.com wrote:

 In line. Thanks for the response.
 
 
 
  @PCM So is the public IP for the router (172.24.4.226) an IP on the 
  Internet?  In the example, IIRC, the quantum router has an IP on the public 
  network, and the GW IP is also on the same network (172.24.4.225). I think 
  the latter is assigned to the external bridge on the host (br-ex). Is that 
  what you have?
 
 
  @MA: both these IPs are provider network IPs. Think of them as data 
  center internal IPs. Thus the host system is also acting as a router in some way. 
  br-ex should be connected to our eth0 (which has a public IP). When we run the 
  following command, our system gets corrupted.
 

#PCM Afraid I don’t understand what you mean by “system gets corrupted”.

If I understand, it sounds like you have:

Neutron router (public IP) 172.24.4.225
   |
   |
br-ex 172.24.4.226
   |
   |
eth0 x.x.x.x (internet IP)

Is that correct?

I’ve never done that (always played with lab environment), but I don’t see how 
the OVS bridge can “route” packets between two different networks. I could see 
this working with a physical router or with the Neutron router having an IP on 
the Internet.

In any case, you have a basic connectivity issue, so you need to get that squared 
away first.
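
One quick check from each side is whether the router namespace can reach the far 
end's public IP, e.g. something along these lines (the router UUID is of course 
specific to your setup):

# on DevStack East
sudo ip netns list                              # find the qrouter-<uuid> namespace
sudo ip netns exec qrouter-<uuid> ping -c 3 172.24.4.233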

Sorry I’m not much help… maybe someone can chime in…


Regards,

PCM

  sudo ovs-vsctl add-port br-ex eth0
  Even when I try to set eth0 as the default gateway the system is corrupted. We 
  noticed that IP forwarding was not enabled, so we enabled it, but no gain.
 
 
 
 
 
 
 
(10.1.0.0/24 - DevStack East)
|
|  10.1.0.1
   [Quantum Router]
|  172.24.4.226
|
|  172.24.4.225
   [Internet GW]
|  
|
   [Internet GW]
| 172.24.4.232
|
| 172.24.4.233
   [Quantum Router]
|  10.2.0.1
|
   (10.2.0.0/24 DevStack West)
 
 
 
  First thing would be to ensure that you can ping from one host to another 
  over the public IPs involved. You can then go to the namespace of the 
  router and see if you can ping the public I/F of the other end’s router.
 
 
   We can ping anything from the host running the devstack setup, for example 
   google.com, but not the GW of the other host.
 
 
  @PCM Are you saying that the host for devstack East can ping on the 
  Internet, but cannot ping the GW IP of the other Devstack setup (also on 
  the internet)?
 
  I guess I need to understand what the “GW” actually is, in your setup. For 
  the example given, it is the host’s br-ex interface and is on the same 
  subnet as the router’s public interface.
 
 
   However, we cannot ping from within the CirrOS instance. I have run the 
   traceroute command and we are reaching up to 172.24.4.225 but not beyond 
   that point.
 
 
  @PCM By 172.24.4.225 do you mean the Internet IP for the br-ex interface on 
  the local host? The cirrus VM, irrespective of VPN, should be able to ping 
  the router’s public IP, the gateway IP and the far end public IPs. I’m 
  struggling to understand what you have setup. Is the internet GW just the 
  br-ex or some external router?
 
 
 
   Sounds like you have some connectivity issues outside of VPN.  From the 
   Cirros VM you should be able to ping everything, except the Cirros VMs 
   on the other side.
 
 
  BTW we did some other experiments as well. For example, when we tried to 
  explicitly link our br-ex (172.24.4.225) with eth0 (Internet GW), machine 
  got corrupted. Same is the issue if we do a hard reboot, Neutron gets 
  corrupted :)
 
 
  @PCM This seems to be the point of confusion. On the example, br-ex would 
  have an IP on the public network. Sounds like that is not the case here. 
  The br-ex would have, a port that is the interface that is actually 
  connected to the public network. For example, I may have eth1 on my system 
  added to br-ex, and eth1 would be connected to a switch that connects this 
  to the other node (in a simple lab environment).
 
  Not sure I understand what you mean by “machine got corrupted” and “Neutron 
  gets corrupted”. Can you elaborate?
 
  When I set up this in a lab, I add the interface to br-ex and then I stack. 
  In the localrc, the interface is specified, along with br-ex.
 
 
   
 
 
   You can look at the screen-q-vpn.log (assuming devstack is used) to see if 
   there were any errors during setup.
 
  Note: When I stack, I turn off neutron security groups and then set nova 
  security groups to allow SSH and ICMP. I imagine the alternative would be 
  to setup neutron security groups to allow these two protocols.
 
 
  @PCM What are you doing for security 

[openstack-dev] [QA] Meeting Thursday October 2nd at 22:00 UTC

2014-10-01 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, October 2nd at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-01 Thread Steven Dake

On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:


On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake sd...@redhat.com 
mailto:sd...@redhat.com wrote:



I've done a first round of prioritization.  I think key things we
need people to step up for are nova and rabbitmq containers.

For the developers, please take a moment to pick a specific
blueprint to work on.  If your already working on something, this
hsould help to prevent duplicate work :)



As I understand it, in the current implementations[1] the containers are 
configured with a mix of shell scripts using crudini and other shell 
commands. Is that the intended way to configure the containers? And is a 
deployment tool like Ansible (or others) something that is planned 
to be used in the future?


Chmouel


Chmouel,

I am not really sure what the best solution for configuring the 
containers is.  It is clear to me the current shell scripts are fragile in 
nature and do not handle container restart properly.  The idea of using 
Puppet or Ansible as a CM tool has been discussed with no resolution.  
At the moment, I'm satisfied with a somewhat hacky solution if we can 
get the containers operational.
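
For context, the current scripts are mostly doing things along these lines 
(paths and values here are just illustrative):

# set config values inside the container at start time
crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_host "$RABBIT_HOST"
crudini --set /etc/glance/glance-api.conf database connection "$DB_CONNECTION"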


Regards,
-steve





[1] from https://github.com/jlabocki/superhappyfunshow/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-10-01 Thread Pasquale Porreca

Thank you for the answers.

I understood the concerns about having the UUID completely user defined, 
and I also understand Nova has no interest in supporting a customized 
algorithm to generate UUIDs. Anyway, I may have found a solution that will 
cover my use case and respect the standard for UUIDs (RFC 4122, 
http://www.ietf.org/rfc/rfc4122.txt).


The generation of the UUID in Nova makes use of the function uuid4() 
from the module uuid.py to produce a (pseudo)random UUID, according to 
version 4 described in RFC 4122. However, this is not the only algorithm 
supported by the standard (and already implemented in uuid.py).

In particular I focused my attention on UUID version 1 and the method 
uuid1(node=None, clock_seq=None), which allows passing part of the UUID 
as a parameter (node is the field containing the last 12 hexadecimal 
digits of the UUID).

So my idea was to give the user the chance to set the UUID version (1 or 
4, with the latter as default) when creating a new instance, and in the 
case of version 1 to optionally pass a value for the node parameter.
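
A quick illustration with the standard library (the node value is made up):

import uuid

node = 0x52540000ab01          # 48-bit value we want as the last 12 hex digits

u = uuid.uuid1(node=node)
print(u)                       # e.g. 9e0c5a2e-49a4-11e4-....-52540000ab01
assert u.node == node
assert str(u).endswith('52540000ab01')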


Any thoughts?

On 09/30/14 14:07, Andrew Laski wrote:


On 09/30/2014 06:53 AM, Pasquale Porreca wrote:

Going back to my original question, I would like to know:

1) Is it acceptable to have the UUID passed from client side?


In my opinion, no.  This opens a door to issues we currently don't 
need to deal with, and use cases I don't think Nova should support. 
Another possibility, which I don't like either, would be to pass in 
some data which could influence the generation of the UUID to satisfy 
requirements.


But there was a suggestion to look into addressing your use case on 
the QEMU mailing list, which I think would be a better approach.




2) What is the correct way to do it? I started to implement this 
feature, simply passing it as metadata with key uuid, but I feel that 
this feature should have a reserved option rather then use metadata.



On 09/25/14 17:26, Daniel P. Berrange wrote:

On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:

This is correct Daniel, except that that it is done by the virtual
firmware/BIOS of the virtual machine and not by the OS (not yet 
installed at

that time).

This is the reason we thought about UUID: it is yet used by the 
iPXE client
to be included in Bootstrap Protocol messages, it is taken from the 
uuid
field in libvirt template and the uuid in libvirt is set by 
OpenStack; the
only missing passage is the chance to set the UUID in OpenStack 
instead to

have it randomly generated.

Having another user defined tag in libvirt won't help for our 
issue, since
it won't be included in Bootstrap Protocol messages, not without 
changes in
the virtual BIOS/firmware (as you stated too) and honestly my team 
doesn't

have interest in this (neither the competence).

I don't think the configdrive or metadata service would help 
either: the OS
on the instance is not yet installed at that time (the target if 
the network
boot is exactly to install the OS on the instance!), so it won't be 
able to

mount it.

Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
blob, then I don't see any currently viable options besides UUID.
There's no mechanism for passing any other data into iPXE that I
am aware of, though if there is a desire todo that it could be
raised on the QEMU mailing list for discussion.


Regards,
Daniel





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [All?] Status vs State

2014-10-01 Thread Akihiro Motoki
Hi,

# The first half is related to Horizon and the latter half is about
the wording in Nova and Neutron API.

During Horizon translation for Juno, I noticed the words State and
Status in multiple contexts. Sometimes they are in very similar
contexts and sometimes they have different contexts.

I would like to know what the difference between Status and
State is and, if the current usage is not right, whether we can
reword them. Input from native speakers would be really appreciated.

I see three usages.

(1) Status to show operational status (e.g. Up/Down/Active/Error/Build/...)
(2) Status to show administrative status (e.g. Enabled/Disabled/...)
(3) State to show operational state (e.g., Up/Down/)

Note that (2) and (3) are shown in a same table (for example Compute
Host table in Hypervisor summary). Also (1) and (3) (e.g., task state
in nova) are used in a same table (for example, the instance table).

Status in (1) and (2) have different meaning to me, so at least
we need to add some contextual note (contextual marker in I18N term)
so that translators can distinguish (1) and (2).
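
For example, something like this in the Horizon code would let translators see
the two meanings separately (the context strings here are just examples):

from django.utils.translation import pgettext_lazy

# (1) operational status: Up/Down/Active/Error/Build/...
operational_status_label = pgettext_lazy("operational status of a resource", "Status")

# (2) administrative status: Enabled/Disabled
admin_status_label = pgettext_lazy("administrative status (enabled/disabled)", "Status")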


Related to this, I check Nova and Neutron API, and
I don't see a clear usage of these words.

In the Nova API, Status and Task State/Power State in the instance list
are both used to show current operational information (state is a bit
more detailed information compared to Status). On the other hand, in the
service list Status is used to show a current administrative status
(Enabled/Disabled) and State is used to show current operational
information like Up/Down.

In Neutron API, both State (admin_state_up)  and Status are
usually used in Neutron resources (networks, ports, routers, and so
on), but it seems the meaning of State and Status are reversed
from the meaning of Nova service list above.

I am really confused about what the right usage of these words is.

Thanks,
Akihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] [I18N] compiling translation message catalogs

2014-10-01 Thread Akihiro Motoki
Hi,

To display localized strings, we need to compile translated message
catalogs (PO files) into compiled ones (MO files).
I would like to discuss and reach a consensus on who generates
compiled message catalogs, and when.
Input from packagers is really appreciated.

[The current status]
* Horizon contains compiled message catalogs in the git repo. (This is
just historical and there seems to be no strong reason to keep compiled ones
in the repo. There is a bug report on it.)
* All other projects do not contain compiled message catalogs and have
only PO files.

[Possible choices]
I think there are several options. (there may be other options)
(a) OpenStack does not distribute compiled message catalogs, and only
provides a command (setup.py integration) to compile message catalogs.
Deployers or distributors need to compile message catalogs.
(b) Similar to (a), but compile message catalogs as a part of pip install.
(c) OpenStack distributes compiled message catalogs as a part of the release.
(c1) the git repo maintains compiled message catalogs.
(c2) only tarball contains compiled message catalogs

Note that the current Horizon is (c1) and others are (a).

[Pros/Cons]
(a) It is the simplest way for OpenStack upstream.
 Packagers need to compile message catalogs and customize their scripts.
 Deployers who install OpenStack from source need to compile them too.
(b) It might be a simple approach from the user's perspective. However, to compile
 message catalogs during installation, we need to install the required modules
 (like babel or django) before running the installation (or require them as
 first-class citizens in setup.py's requirements). I don't think it is much
 simpler compared to option (a).
(c1) Compiled message catalogs are a kind of binary file and they can be
 generated from PO files. There is no need to manage the same data twice.
(c2) OpenStack is downloaded in several ways (release tarballs are not the
 only option), so I don't see much merit in having only the tarball contain
 compiled message catalogs. A merit is that compiled message catalogs would be
 available and there would be no additional step users need to do.
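
For reference, the compile step itself for option (a) is tiny; a packager or
deployer could do something like the following (or use a setup.py hook once it
is wired up):

# compile every PO file into the corresponding MO file next to it
find . -name '*.po' | while read -r po; do
    msgfmt --check -o "${po%.po}.mo" "$po"
done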

My preference is (a), but I would like to hear broader opinions,
especially from packagers.

Thanks,
Akihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-10-01 Thread Joe Gordon
On Wed, Oct 1, 2014 at 8:29 AM, Solly Ross sr...@redhat.com wrote:

 (response inline)

 - Original Message -
  From: Pasquale Porreca pasquale.porr...@dektech.com.au
  To: openstack-dev@lists.openstack.org
  Sent: Wednesday, October 1, 2014 11:08:50 AM
  Subject: Re: [openstack-dev] [nova] Create an instance with a custom uuid
 
  Thank you for the answers.
 
  I understood the concerns about having the UUID completely user defined
 and I
  also understand Nova has no interest in supporting a customized
 algorithm to
  generate UUID. Anyway I may have found a solution that will cover my use
  case and respect the standard for UUID (RFC 4122
  http://www.ietf.org/rfc/rfc4122.txt ) .
 
  The generation of the UUID in Nova make use of the function uuid4() from
 the
  module uuid.py to have an UUID (pseudo)random, according to version 4
  described in RFC 4122. Anyway this is not the only algorithm supported in
  the standard (and implemented yet in uuid.py ).
 
  In particular I focused my attention on UUID version 1 and the method
  uuid1(node=None, clock_seq=None) that allows to pass as parameter a part
 of
  the UUID ( node is the field containing the last 12 hexadecimal digits of
  the UUID).
 
  So my idea was to give the chance to the user to set the UUID version (1 or
 4,
  with the latter as default) when creating a new instance and in case of
  version 1 to pass optionally a value for parameter node .

 I would think that we could just have a node parameter here, and
 automatically
 use version 1 if that parameter is passed (if we decided to go the route
 of changing the current UUID behavior).


From what I gather this requested change in the API is based on your
blueprint https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance.
Since your blueprint is not approved yet, discussing further work to improve
it is a bit premature.



 
  Any thoughts?
 
  On 09/30/14 14:07, Andrew Laski wrote:
 
 
 
  On 09/30/2014 06:53 AM, Pasquale Porreca wrote:
 
 
  Going back to my original question, I would like to know:
 
  1) Is it acceptable to have the UUID passed from client side?
 
  In my opinion, no. This opens a door to issues we currently don't need to
  deal with, and use cases I don't think Nova should support. Another
  possibility, which I don't like either, would be to pass in some data
 which
  could influence the generation of the UUID to satisfy requirements.
 
  But there was a suggestion to look into addressing your use case on the
 QEMU
  mailing list, which I think would be a better approach.
 
 
 
 
  2) What is the correct way to do it? I started to implement this feature,
  simply passing it as metadata with key uuid, but I feel that this feature
  should have a reserved option rather then use metadata.
 
 
  On 09/25/14 17:26, Daniel P. Berrange wrote:
 
 
  On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:
 
 
  This is correct Daniel, except that that it is done by the virtual
  firmware/BIOS of the virtual machine and not by the OS (not yet
 installed at
  that time).
 
  This is the reason we thought about UUID: it is yet used by the iPXE
 client
  to be included in Bootstrap Protocol messages, it is taken from the
 uuid
  field in libvirt template and the uuid in libvirt is set by OpenStack;
 the
  only missing passage is the chance to set the UUID in OpenStack instead
 to
  have it randomly generated.
 
  Having another user defined tag in libvirt won't help for our issue,
 since
  it won't be included in Bootstrap Protocol messages, not without changes
 in
  the virtual BIOS/firmware (as you stated too) and honestly my team
 doesn't
  have interest in this (neither the competence).
 
  I don't think the configdrive or metadata service would help either: the
 OS
  on the instance is not yet installed at that time (the target if the
 network
  boot is exactly to install the OS on the instance!), so it won't be able
 to
  mount it.
  Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
  blob, then I don't see any currently viable options besides UUID.
  There's no mechanism for passing any other data into iPXE that I
  am aware of, though if there is a desire todo that it could be
  raised on the QEMU mailing list for discussion.
 
 
  Regards,
  Daniel
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Pasquale Porreca
 
  DEK Technologies
  Via dei Castelli Romani, 22
  00040 Pomezia (Roma)
 
  Mobile +39 3394823805
  Skype paskporr
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 Best Regards,
 Solly Ross

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Murano] Juno RC1 is Available

2014-10-01 Thread Serg Melikyan
We have successfully released the RC1 of Murano!

The RC1 tarballs are available for download at:
https://launchpad.net/murano/juno/juno-rc1

Unless critical issues are found, this RC1 will be formally released
as the 2014.2 final version of Murano. Please try to do as much testing
of this version as possible, and do file all bugs you find.
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Taskflow for Juno RC1 effectively require Kombu 3.x

2014-10-01 Thread Jeremy Stanley
On 2014-10-01 16:04:37 -0007 (-0007), Joshua Harlow wrote:
 Thanks for finding this one (it'd be nice for some gate job to run
 in 'strict' requirements mode which tests the lower bounds of the
 requirements repo somehow, since with things like kombu>=2.5.0
 this will always pull in the newest and everything will look fine,
 it'd be neat if somehow we could turn all '>=' to '==' in one gate
 job somehow)...

This has been suggested before, and can be implemented in the
magical land of fairies and elves where pip has an actual dependency
solver... ;)

Snarkiness aside, pip just installs what you ask it to install, in
sequence. Transitive dependencies which conflict with your
dependencies don't cause an installation failure, they just override
you. So you can force things from >= to == all you like, but in
many, many cases it won't prevent you from winding up with newer
versions of libraries than you asked for.

One alternative would be to hack an --always-lowest option into a
new version of pip, which would cause it to always choose the lowest
match for any declared range rather than the highest. Though I
expect this would break horribly as we've no doubt got unversioned
transitive dependencies (so not under our control, unlike direct
dependencies) where the earliest releases were unusable.
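
(For completeness, the quoted idea boils down to something like the following,
and the transitive dependency behaviour above is exactly why it would not be
reliable:)

# naive "lower bounds" job: pin every declared range to its minimum
sed -e 's/>=/==/g' requirements.txt > lower-requirements.txt
pip install -r lower-requirements.txt
# transitive dependencies can still drag in newer versions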
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-01 Thread Sean Dague
As stable branches got discussed recently, I'm kind of curious who is
actually stepping up to make icehouse able to pass tests in any real
way. Because right now I've been trying to fix devstack icehouse so that
icehouse requirements can be unblocked (and to land code that will
reduce grenade failures).

I'm on retry #7 of modifying the tox.ini file in devstack.

During the last summit people said they wanted to support icehouse for
15 months. Right now we're at 6 months and the tree is basically unable
to merge code.

So who is actually standing up to fix these things, or are we going to
just leave it broken and shoot icehouse in the head early?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread Tiwari, Arvind
Hi Chaoyi,

Thanks for sharing these information.

Some time back I started a project called “Alliance” which is trying to address 
the same concerns (see the link below). The Alliance service is designed to provide 
Inter-Cloud Resource Federation, which will enable resource sharing across 
clouds in distributed multi-site OpenStack deployments. This service will 
run on top of an OpenStack cloud and federate different cloud (or data center) 
instances in a distributed cloud setup. The service will work closely with 
OpenStack components (Keystone, Nova, Cinder) to manage and provision 
different resources (tokens, VMs, images, networks, ...). The Alliance service will 
provide an abstraction that hides interoperability and integration complexities from 
the underpinning cloud instances and enables the following business use cases.

- Multi Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and VPC-like 
use cases; the conceptual design can be found at 
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are working 
on a PoC using this concept, which is a work in progress.

I will be happy to coordinate with you on this and try to come up with a common 
solution; it seems we are both trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello,  Alex,

Thank you very much for your mail about remote cluster hypervisor.

One of the inspirations for OpenStack cascading came from remote clustered 
hypervisors like vCenter. The difference between a remote clustered hypervisor 
and OpenStack cascading is that not only Nova is involved in the cascading, but 
also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration,
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for 
more detail information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor driver' 
which started at Juno design summit [1]. Talking to another OpenStack should be 
similar to talking to vCenter. The idea was that the Cells support could be 
refactored around this notion as well.
Not sure whether there have been any active progress with this in Juno, though.

Regards,
Alex


[1] 
http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From:joehuang joehu...@huawei.commailto:joehu...@huawei.com
To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date:30/09/2014 04:08 PM
Subject:[openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances(as 
different zones), rather than a single monolithic OpenStack instance because of 
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up in a modular fashion, from the 00's up to a million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud. Instead of proprietary orchestration layer, they want to use 
standard OpenStack framework for Northbound API compatibility with HEAT/Horizon 
or other 3rd ecosystem apps.

We call this pattern OpenStack Cascading, with the proposal described in 
[1][2]. A PoC live demo video can be found at [3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
OpenStack cascading.

We kindly ask for a cross-program design summit session to discuss OpenStack 
cascading and the contribution to Kilo.

Kindly invite those who are interested in the OpenStack cascading to work 
together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to have a 
discussion as a formal cross program session, because many core programs are 
involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 

Re: [openstack-dev] [openstack-qa] Post job failures

2014-10-01 Thread Jeremy Stanley
On 2014-10-01 10:39:40 -0400 (-0400), Matthew Treinish wrote:
[...]
 So I actually think as a first pass this would be the best way to
 handle it. You can leave comments on a closed gerrit changes,
[...]

Not so easy as it sounds. Jobs in post are running on an arbitrary
Git commit (more often than not, a merge commit), and mapping that
back to a change in Gerrit is nontrivial.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-01 Thread Morgan Fainberg
On Wednesday, October 1, 2014, Sean Dague s...@dague.net wrote:

 As stable branches got discussed recently, I'm kind of curious who is
 actually stepping up to make icehouse able to pass tests in any real
 way. Because right now I've been trying to fix devstack icehouse so that
 icehouse requirements can be unblocked (and to land code that will
 reduce grenade failures)

 I'm on retry #7 of modifying the tox.ini file in devstack.

 During the last summit people said they wanted to support icehouse for
 15 months. Right now we're at 6 months and the tree is basically unable
 to merge code.

 So who is actually standing up to fix these things, or are we going to
 just leave it broken and shoot icehouse in the head early?

 -Sean

 --
 Sean Dague
 http://dague.net


We should stick with the longer support for Icehouse in my opinion. I'll
happily volunteer time to help get it back into shape.

The other question is will Juno *also* have extended stable support? Or is
it more of an LTS style thing (I'm not a huge fan of the LTS model, but it
is easier in some regards). If every release is getting extended support,
we may need to look at our tool chains so we can better support the
releases.

Cheers,
Morgan

Sent via mobile
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-01 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-10-01 08:04:38 -0700:
 On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:
 
  On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake sd...@redhat.com 
  mailto:sd...@redhat.com wrote:
 
 
  I've done a first round of prioritization.  I think key things we
  need people to step up for are nova and rabbitmq containers.
 
  For the developers, please take a moment to pick a specific
  blueprint to work on.  If your already working on something, this
  hsould help to prevent duplicate work :)
 
 
 
   As I understand it, in the current implementations[1] the containers are 
   configured with a mix of shell scripts using crudini and other shell 
   commands. Is that the intended way to configure the containers? And is a 
   deployment tool like Ansible (or others) something that is planned 
   to be used in the future?
 
  Chmouel
 
 Chmouel,
 
  I am not really sure what the best solution for configuring the 
  containers is.  It is clear to me the current shell scripts are fragile in 
  nature and do not handle container restart properly.  The idea of using 
  Puppet or Ansible as a CM tool has been discussed with no resolution.  
  At the moment, I'm satisfied with a somewhat hacky solution if we can 
  get the containers operational.

What if we modified diskimage-builder to create docker containers out
of the existing tripleo-image-elements?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Ravi Penta
+1

- Original Message -
From: Adrian Otto adrian.o...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, September 30, 2014 10:03:16 AM
Subject: [openstack-dev] [Solum] Core Reviewer Change

Solum Core Reviewer Team,

I propose the following change to our core reviewer group:

-lifeless (Robert Collins) [inactive]
+murali-allada (Murali Allada)
+james-li (James Li)

Please let me know your votes (+1, 0, or -1).

Thanks,

Adrian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] django-pyscss failing with Django 1.7

2014-10-01 Thread Douglas Fish
Hi Thomas,

I have a few suggestions inline that I hope will be helpful!

 From: Matt Riedemann mrie...@linux.vnet.ibm.com
 To: Douglas Fish/Rochester/IBM@IBMUS, Justin Pomeroy/Rochester/IBM@IBMUS
 Date: 09/30/2014 10:26 PM
 Subject: Fwd: Re: [openstack-dev] django-pyscss failing with Django 1.7

 FYI in case one of you can provide some guidance. Thomas is a Debian
 packager.


  Original Message 
 Subject: Re: [openstack-dev] django-pyscss failing with Django 1.7
 Date: Tue, 30 Sep 2014 14:03:50 +0800
 From: Thomas Goirand z...@debian.org
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Organization: Debian
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 On 09/30/2014 10:10 AM, Thomas Goirand wrote:
  Since the latest commit before the release of version 1.0.3,
  django-pyscss fails in Sid:
 
  https://github.com/fusionbox/django-pyscss/commit/
 187a7a72bf72370c739f3675bef84532e524eaf1
 
  The issue is that storage.prefix doesn't seem to exist anymore in
Django
  1.7.

I'm not sure that's really your root cause.  It seems like it _should_
exist.
I think it comes from
https://github.com/django/django/blob/master/django/contrib/staticfiles/finders.py#L71

What error message do you see?

 
  Does anyone have an idea how to fix this? Would it be ok to just revert
  that commit in my Debian package?

Have you engaged with the django_pyscss guys at
https://github.com/fusionbox/django-pyscss ?
They were helpful and responsive to me when I interacted with them.

 
  Cheers,
 
  Thomas Goirand (zigo)

 I produced this patch:
 http://anonscm.debian.org/cgit/openstack/python-django-pyscss.git/
 tree/debian/patches/fix-storage.prefix-not-found.patch

It looks like an accurate revert to me.  At first glance I don't see why
it's functionally different.  Do you know why the revert works?


 Does this look correct? I really would appreciate peer review, as I'm
 really not sure of what I'm doing. The only thing I know, it that it's
 looking like it's passing all unit tests, including the prefix one.

 Cheers,

 Thomas Goirand (zigo)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Doug Fish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.db 1.0.2 released

2014-10-01 Thread Doug Hellmann
The Oslo team has released version 1.0.2 of oslo.db. This patch release on the 
Juno series includes a fix for bug 1374497 (“change in oslo.db ‘ping’ handling 
is causing issues in projects that are not using transactions”).

https://bugs.launchpad.net/oslo.db/+bug/1374497

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help with EC2 Driver functionality using boto ...

2014-10-01 Thread Aparna S Parikh
Hi,

We are currently working on writing a driver for Amazon's EC2 using the
boto libraries, and are hung up on creating a snapshot of an instance. The
instance remains in 'Queued' status on Openstack instead of becoming
 'Active'. The actual EC2 snapshot that gets created is in 'available'
status.

We are essentially calling create_image() from boto/ec2/instance.py
when a snapshot of an instance is requested.
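
Roughly, the snapshot path in our driver looks like this (simplified sketch;
names are illustrative):

import time


def snapshot_to_ami(ec2_conn, instance, name):
    """Create an EC2 AMI from a running boto Instance and wait for it."""
    image_id = instance.create_image(name, no_reboot=True)

    # wait for the EC2 AMI to leave the 'pending' state
    image = ec2_conn.get_all_images(image_ids=[image_id])[0]
    while image.state == 'pending':
        time.sleep(5)
        image.update()

    # presumably the driver still needs to update the glance image record
    # (mark it active) at this point, which may be why it never leaves
    # 'Queued' on the OpenStack side
    return image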

Any help in figuring this out would be greatly appreciated.

Thanks,

Aparna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-01 Thread Jay Pipes

Hi Akihiro!

IMO, this is precisely where having an API standards working group can 
help to make the user experience of our public APIs less frustrating. 
Such a working group should have the ability to vet terms like state 
vs status and ensure consistency across the public APIs.


More thoughts inline :)

On 10/01/2014 11:24 AM, Akihiro Motoki wrote:

Hi,

# The first half is related to Horizon and the latter half is about
the wording in Nova and Neutron API.

During Horizon translation for Juno, I noticed the words State and
Status in multiple contexts. Sometimes they are in very similar
contexts and sometimes they have different contexts.

I would like to know what are the difference between  Status and
State, and if the current usage is right or not, whether we can
reword them. Input from native speakers would be really appreciated.

I see three usages.

(1) Status to show operational status (e.g. Up/Down/Active/Error/Build/...)
(2) Status to show administrative status (e.g. Enabled/Disabled/...)
(3) State to show operational state (e.g., Up/Down/)

Note that (2) and (3) are shown in a same table (for example Compute
Host table in Hypervisor summary). Also (1) and (3) (e.g., task state
in nova) are used in a same table (for example, the instance table).

Status in (1) and (2) have different meaning to me, so at least
we need to add some contextual note (contextual marker in I18N term)
so that translators can distinguish (1) and (2).

Related to this, I check Nova and Neutron API, and
I don't see a clear usage of these words.

In Nova API, Status and Task State/Power State in instance list
  are both used to show current operational information (state is a
bit more detail
information compared to Status). On the other hand, in service lits
Status is used to show a current administrative status
(Enabled/Disabled) and State is used to show current operational
information like Up/Down.

In Neutron API, both State (admin_state_up)  and Status are
usually used in Neutron resources (networks, ports, routers, and so
on), but it seems the meaning of State and Status are reversed
from the meaning of Nova service list above.

I am really confused what is the right usage of these words


OK, so here are the definitions of these terms in English (at least, the 
relevant definition as used in the APIs...):


state: the particular condition that someone or something is in at a 
specific time.


example: the state of the company's finances

status: the position of affairs at a particular time, especially in 
political or commercial contexts.


example: an update on the status of the bill

Note that state is listed as a synonym for status, but status is 
*not* listed as a synonym for state, which is why there is so much 
frustrating vagueness and confusing duplicity around the terms.


IMO, the term state should be the only one used in the OpenStack APIs 
to refer to the condition of some thing at a point in time. The term 
state can and should be prefaced with a refining descriptor such 
task or power to denote the *thing* that the state represents a 
condition for.


One direct change I would make would be that Neutron's admin_state_up 
field would be instead admin_state with values of UP, DOWN (and maybe 
UNKNOWN?) instead of having the *same* GET /networks/{network_id} call 
return *both* a boolean admin_state_up field *and* a status field 
with a string value like ACTIVE. :(
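
Abridged example of what I mean, from the current Neutron response:

GET /v2.0/networks/{network_id}

{
    "network": {
        "id": "...",
        "name": "private",
        "admin_state_up": true,
        "status": "ACTIVE",
        ...
    }
}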


Another thing that drives me crazy is the various things that represent 
enabled or disabled.


Throughout the APIs, we use, variably:

 * A field called disabled or enabled (Nova flavor-disabled API 
extension with the OS-FLV-DISABLED:disabled attribute, Ceilometer 
alarms, Keystone domains, users and projects but not groups or credentials)
 * enable_XXX or disable_XXX (for example, in Neutron's GET 
/subnets/{subnet_id} response, there is an enable_dhcp field. In Heat's 
GET /stacks/{stack_id} response, there is a disable_rollback field. We 
should be consistent in using either the word enable or the word disable 
(not both terms) and the tense of the verb should at the very least be 
consistent (disabled vs. disable))
 * status:disabled (Nova os-services API extension. The service records 
have a status field with disabled or enabled string in it. Gotta 
love it.)


Yet another thing to tack on the list of stuff that really should be 
cleaned up with an API working group.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Documentation process

2014-10-01 Thread Anne Gentle
Sorry I'm so late to the party. Yes, Andreas is right, we simply cannot
take on any more DocImpact in openstack-manuals, but it would be great to
change the infrastructure to log against a more exact match for the bug
tracker for the project with the impacted docs. Looks like you're doing so
with https://review.openstack.org/125434 so I'm reviewing there.

Some additional protips for devs:

There's an indicator called docimpact-group: for each project. In that
setting, enter the Launchpad bug project you want bugs automatically logged
into. (wow that's strange grammar).

Don't use docimpact-group: openstack-manuals unless your project is
integrated. We can't blow out our doc bug numbers artificially.

Don't use docimpact-group: openstack-manuals for common projects like Oslo,
QA, or Infra repos. I'm working on a cleanup patch.

No need to use docimpact-group for specs repos or docs repos.

We'll cover all this and more at a docs bootstrapping session coming soon!

Thanks,

Anne


On Thu, Sep 25, 2014 at 1:39 AM, Andreas Jaeger a...@suse.com wrote:

 On 09/24/2014 09:55 PM, Sergii Golovatiuk wrote:
  Hi,
 
  I would like to discuss the documentation process and align it to
  OpenStack flow.
 
  At the moment we add special tags to bugs in Launchpad, which is not
  optimal, as everyone can add/remove tags but cannot really participate
  in or enforce the documentation process.
 
  I suggest we switch to the standard workflow that is used by the OpenStack
  community. All we need is to move the process of tracking documentation from
  Launchpad to Gerrit.
 
  This process gives more control to individual developers and the community
  for tracking the changes and reflecting them in documentation.
 
  Every reviewer checks the commit. If they think that the commit requires a
  documentation update, they will set -1 with the comment message Docs impact
  required.
 
  This will force the author of the patchset to update the commit with a
  DocImpact commit message.
 
  Our documentation team will get all messages with DocImpact from 'git
  log'. The documentation team will then write the documentation, with the
  author of the patch playing a key role. All other reviewers from the original
  patch must give their own +1 for the documentation update.
 
  Patches in fuel-docs may have the same Change-ID as the original patch. That
  will allow us to match documentation and patches in Gerrit.
 
  More details about the DocImpact flow can be obtained at
 
  https://wiki.openstack.org/wiki/Documentation/DocImpact

 Currently all bugs filed due to DocImpact land in the openstack-manuals
 launchpad bug area unless the repository is setup in the infrastructure
 to push them elsewhere.

 If you want to move forward with this, please first set up the
 infrastructure to properly file the bugs,

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Roshan Agrawal
Murali, James, congratulations on core reviewer status !

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: Wednesday, October 01, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Core Reviewer Change

Thanks everyone for your feedback on this. The adjustments have been made.

Regards,

Adrian

On Sep 30, 2014, at 10:03 AM, Adrian Otto adrian.o...@rackspace.com wrote:

 Solum Core Reviewer Team,
 
 I propose the following change to our core reviewer group:
 
 -lifeless (Robert Collins) [inactive]
 +murali-allada (Murali Allada)
 +james-li (James Li)
 
 Please let me know your votes (+1, 0, or -1).
 
 Thanks,
 
 Adrian
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] formally distinguish server desired state from actual state?

2014-10-01 Thread Chris Friesen


Currently in nova we have the vm_state, which according to the code 
comments is supposed to represent a VM's current stable (not 
transition) state, or what the customer expect the VM to be.


However, we then added in an ERROR state.  How does this possibly make 
sense given the above definition?  Which customer would ever expect the 
VM to be in an error state?


Given this, I wonder whether it might make sense to formally distinguish 
between the expected/desired state (i.e. the state that the customer 
wants the VM to be in), and the actual state (i.e. the state that nova 
thinks the VM is in).


This would more easily allow for recovery actions, since if the actual 
state changes to ERROR (or similar) we would still have the 
expected/desired state available for reference when trying to take 
recovery actions.
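
A minimal sketch of the kind of split I mean (names purely illustrative, not
actual nova code):

    class ServerState(object):
        def __init__(self, desired, actual):
            self.desired = desired   # what the user asked for, e.g. ACTIVE
            self.actual = actual     # what nova currently believes, e.g. ERROR

        def needs_recovery(self):
            return self.actual != self.desired

    # Example: an operation fails partway through.
    state = ServerState(desired="ACTIVE", actual="ERROR")
    if state.needs_recovery():
        # Recovery logic still knows what it is aiming for (state.desired),
        # e.g. retry the operation or reschedule the instance.
        pass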


Thoughts?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara][Doc] Overview and Architecture images in docs

2014-10-01 Thread Sharan Kumar M
Hi all,

There was a discussion previously stating that the Sahara overview image
http://docs.openstack.org/developer/sahara/overview.html and the
architecture image
http://docs.openstack.org/developer/sahara/architecture.html pretty much
convey the same message. So would it be better to strip off the overview image
and have only the architecture image?

In that case, maybe we could improve the architecture diagram.

Thanks,
Sharan Kumar M
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-10-01 Thread Sławek Kapłoński
Hello,

Thanks for your help, but it does not help. I checked, and each swift node 
definitely has a lot of free space. What confirms that is the fact that when I 
try to create an image of about 1.7GB with swift_store_large_object_size set to 
1GB, there is an error (always after the first chunk (200MB) is sent to swift). 
When I only change swift_store_large_object_size to 2GB and restart glance-api, 
the same image is created correctly (it is then stored in one big object).
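
For reference, the relevant options in my glance-api.conf look roughly like
this (both values are in MB):

    [DEFAULT]
    swift_store_large_object_size = 1024
    swift_store_large_object_chunk_size = 200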

---
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On Tuesday, 30 September 2014 at 22:28:11, Timur Nurlygayanov wrote:
 Hi Slawek,
 
 we faced the same error and this is issue with Swift.
 We can see 100% disk usage on the Swift node during the file upload and
 looks like Swift can't send info about status of the file loading in time.
 
 On our environments we found the workaround for this issue:
 1. Set  swift_store_large_object_size = 200 in glance.conf.
 2. Add to Swift proxy-server.conf:
 
 [DEFAULT]
 ...
 node_timeout = 90
 
 Probably we can set this value as default value for this parameter instead
 of '30'?
 
 
 Regards,
 Timur
 
 
 On Tue, Sep 30, 2014 at 7:41 PM, Sławek Kapłoński sla...@kaplonski.pl
 
 wrote:
  Hello,
  
  I can't find that upload in the previous logs, but I am now trying to upload
  the same image once again. In glance there was exactly the same error. In the
  swift logs I have:
  
  Sep 30 17:35:10 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
  30/Sep/2014/15/35/10 HEAD /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance
  HTTP/1.0 204
  Sep 30 17:35:16 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
  30/Sep/2014/15/35/16 PUT /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dcee
  bd/glance/fa5dfe09-74f5-4287-9852-d2f1991eebc0-1 HTTP/1.0 201 - -
  
  Best regards
  Slawek Kaplonski
  
  W dniu 2014-09-30 17:03, Kuo Hugo napisał(a):
  Hi ,
  
  Could you please post the log of related requests in Swift's log ???
  
  Thanks // Hugo
  
  2014-09-30 22:20 GMT+08:00 Sławek Kapłoński sla...@kaplonski.pl:
   Hello,
   
  I'm using openstack havana release and glance with swift backend.
  Today I found that I have a problem when I create an image with a url in
  --copy-from and the image is bigger than my swift_store_large_object_size:
  glance then tries to split the image into chunks of the size given in
  swift_store_large_object_chunk_size, and when it tries to upload the first
  chunk to swift I get this error:
  
  2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error
  during chunked upload to backend, deleting stale chunks
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback
  (most recent call last):
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
  /usr/lib/python2.7/dist-packages/glance/store/swift.py, line 384,
  in add
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
   content_length=content_length)
  
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
  /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1234,
  in put_object
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
   response_dict=response_dict)
  
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
  /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1143,
  in _retry
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
   reset_func(func, *args, **kwargs)
  
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
  /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1215,
  in _default_reset
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift %
  (container, obj))
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  ClientException: put_object('glance',
  '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
  ability to reset contents for reupload.
  2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed
  to add object to Swift.
  Got error from Swift: put_object('glance',
  '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
  ability to reset contents for reupload.
  2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-]
  Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  Traceback (most recent call last):
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
   File
  
  /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py,
  line 101, in upload_data_to_store
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
   store)
  
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
   File /usr/lib/python2.7/dist-packages/glance/store/__init__.py,
  
  line 333, in store_add_to_backend
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
   (location, size, checksum, metadata) = store.add(image_id, data,
  
  size)
  2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
   File 

[openstack-dev] [nova] Help with EC2 Driver functionality using boto ...

2014-10-01 Thread Aparna S Parikh

 Hi,

 We are trying to leverage EC2 from Openstack, and are currently working on
 writing an EC2 driver using the boto libraries, and are hung up on the
 instance snapshot functionality. The instance remains in 'Queued' status on
 Openstack instead of becoming 'Active'. The actual EC2 snapshot that gets
 created is in 'available' status though.

 We are essentially calling create_image() from boto/ec2/instance.py
 when a snapshot of an instance is requested.
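
 Roughly, the path we are exercising looks like this (simplified sketch with
 placeholder values, not our exact driver code):

    import time
    import boto.ec2

    ec2_instance_id = 'i-0123abcd'          # placeholder
    snapshot_name = 'instance-snapshot'     # placeholder

    conn = boto.ec2.connect_to_region('us-east-1')
    reservation = conn.get_all_instances(instance_ids=[ec2_instance_id])[0]
    instance = reservation.instances[0]

    # boto.ec2.instance.Instance.create_image() returns the new AMI id.
    image_id = instance.create_image(snapshot_name, no_reboot=True)

    # The EC2 image eventually reaches 'available'...
    image = conn.get_image(image_id)
    while image.state == 'pending':
        time.sleep(5)
        image.update()

    # ...but the glance image that nova created for the snapshot stays
    # 'queued', which is where we are stuck.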

 Any help in figuring this out would be greatly appreciated.

 Thanks,

 Aparna

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-10-01 Thread Ed Cranford
Happy Corethday to the both of you!

On 10/1/14, 1:10 PM, Adrian Otto adrian.o...@rackspace.com wrote:

Thanks everyone for your feedback on this. The adjustments have been made.

Regards,

Adrian

On Sep 30, 2014, at 10:03 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Solum Core Reviewer Team,
 
 I propose the following change to our core reviewer group:
 
 -lifeless (Robert Collins) [inactive]
 +murali-allada (Murali Allada)
 +james-li (James Li)
 
 Please let me know your votes (+1, 0, or -1).
 
 Thanks,
 
 Adrian
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [announce] OpenStack Bootstrapping Hour Episode 1: Diving into OpenStack Docs - Friday Oct 3 - 19:00 UTC (15:00 Americas/New_York)

2014-10-01 Thread Sean Dague
Anne Gentle will be our guest this week as we dive into OpenStack Docs.

That dirty word everyone loves but hardly anyone does, documentation!
For this bootstrapping hour Anne will talk about the wild world of
developer docs, both for contributors and application developers. We can
do a review and a patch example plus describe current investigations
into WADL replacement options.

Host(s): Sean Dague, Jay Pipes, Dan Smith
Experts(s): Anne Gentle
Youtube Stream: http://www.youtube.com/watch?v=n2I3PFuoNj4

Full info at -
https://wiki.openstack.org/wiki/BootstrappingHour/Diving_Into_Docs

Hope to see you there!

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Sideways grenade job to test Nova Network to Neutron migration

2014-10-01 Thread Clark Boylan
Hello,

One of the requirements placed on Ironic was that they must have a path
from Nova Baremetal to Ironic and that path should be tested. This
resulted in a sideways grenade job which instead of going from one
release of OpenStack to another, swaps out components within a release.
In this case the current Juno release.

When throwing this together for Ironic I went ahead and put a skeleton
job, check-grenade-dsvm-neutron-sideways, in place for testing a Nova
Network to Neutron sideways upgrade. This job is in the experimental
queues for Neutron, grenade, and devstack-gate at the moment and does
not pass. While it may be too late to focus on this for Juno it would be
great if Neutron and Nova could make this test pass early in the Kilo
cycle as a clear Nova Network to Neutron process is often asked for.

Random choice of current job result can be found at
http://logs.openstack.org/29/123629/1/experimental/check-grenade-dsvm-neutron-sideways/fb45df6/

The way this job works is it sets up an old and new pair of master
based cloud configs. The old side is configured to use Nova Network
and the new side is configured to use Neutron. Grenade then fires up
the old cloud, adds some things to it, runs some tests, shuts it down,
upgrades, then checks that things still work in the new cloud. My
best guess is that most of the work here will need to be done in the
upgrade section where we teach Grenade (and consequently everyone
else) how to make this transition.

Thanks,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] formally distinguish server desired state from actual state?

2014-10-01 Thread Chris Friesen

On 10/01/2014 01:23 PM, Jay Pipes wrote:

On 10/01/2014 03:07 PM, Chris Friesen wrote:

Currently in nova we have the vm_state, which according to the code
comments is supposed to represent a VM's current stable (not
transition) state, or what the customer expect the VM to be.

However, we then added in an ERROR state.  How does this possibly make
sense given the above definition?


Where do you see that vm_state is intended to be what the customer
expects the VM to be?


From nova/compute/vm_states.py:
'vm_state describes a VM's current stable (not transition) state. That 
is, if there is no ongoing compute API calls (running tasks), vm_state 
should reflect what the customer expect the VM to be.'


Also, from http://wiki.openstack.org/VMState:
'vm_state reflects the stable state based on API calls, matching user 
expectation, revised “top-down” within API implementation.'



Now granted, the wiki also says 'If the task fails and is not possible 
to rollback, the vm_state is set to ERROR.'  I don't particularly like 
that behaviour, which is why I'd like to see a separate actual state.



I don't think this is all that useful. I think what would be more useful
is changing the Nova API to perform actions against an instance using a
POST /servers/{server_id}/tasks call, allow a user to see a history of
what actions were taken against an instance with a call to GET
/servers/{server_id}/tasks and allow a user to see the progress of a
particular task (say, a rebuild) by calling GET /tasks/{task_id}/items.


Yep, I like that idea.  But I think it's orthogonal to the issue of 
desired vs actual state.  When you start a task it could change the 
desired state, and when the task completes the actual state should 
match the expected state.



I proposed as much here:

http://docs.oscomputevnext.apiary.io/#servertask


Just curious, where is the equivalent of evacuate?


http://docs.oscomputevnext.apiary.io/#servertaskitem


This would more easily allow for recovery actions, since if the actual
state changes to ERROR (or similar) we would still have the
expected/desired state available for reference when trying to take
recovery actions.


Where would the expected/desired state be stored?  Or is it implicit in 
the most recent task attempted for the instance in question?



I think a task-based API and internal system that uses taskflow to
organize related tasks with state machine changes is the best design to
work towards.


I think something like this would certainly be an improvement over what 
we have now. That said, I don't see that as mutually exclusive with an 
explicit distinction between desired and actual state.  I think having 
nova list or the dashboard equivalent show both states would be useful.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-01 Thread Ian Cordasco
On 10/1/14, 11:53 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:



On Wednesday, October 1, 2014, Sean Dague s...@dague.net wrote:

As stable branches got discussed recently, I'm kind of curious who is
actually stepping up to make icehouse able to pass tests in any real
way. Because right now I've been trying to fix devstack icehouse so that
icehouse requirements can be unblocked (and to land code that will
reduce grenade failures)

I'm on retry #7 of modifying the tox.ini file in devstack.

During the last summit people said they wanted to support icehouse for
15 months. Right now we're at 6 months and the tree is basically unable
to merge code.

So who is actually standing up to fix these things, or are we going to
just leave it broken and shoot icehouse in the head early?

-Sean

--
Sean Dague
http://dague.net



We should stick with the longer support for Icehouse in my opinion. I'll
happily volunteer time to help get it back into shape.


The other question is will Juno *also* have extended stable support? Or
is it more of an LTS style thing (I'm not a huge fan of the LTS model,
but it is easier in some regards). If every release is getting extended
support, we may need to look at our tool
 chains so we can better support the releases.


Cheers,
Morgan 


Sent via mobile  

Would every release need to be LTS or would every other release (or every
4th) be LTS? We could consider a policy like Ubuntu’s (e.g., 10.04,
12.04, 14.04 are all LTS and the next will be 16.04).

—
Ian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-01 Thread Fox, Kevin M
Has anyone figured out a way of having a floating ip like feature with docker 
so that you can have rabbitmq, mysql, or ceph mon's at fixed ip's and be able 
to migrate them around from physical host to physical host and still have them 
at fixed locations that you can easily put in static config files?

Maybe iptables rules? Maybe adding another bridge? Maybe just disabling the 
docker network stack all together and binding the service to a fixed, static 
address on the host?

Also, I ran across: 
http://jperrin.github.io/centos/2014/09/25/centos-docker-and-systemd/ and it 
does seem to work. I was able to get openssh-server and keystone to work in the 
same container without needing to write custom start/stop scripts. This kind of 
setup would make a nova compute container much, much easier.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Wednesday, October 01, 2014 8:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Kolla Blueprints

On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:

On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake 
sd...@redhat.com wrote:

I've done a first round of prioritization.  I think key things we need people 
to step up for are nova and rabbitmq containers.

For the developers, please take a moment to pick a specific blueprint to work 
on.  If you're already working on something, this should help to prevent 
duplicate work :)


As I understand it, in the current implementation [1] the containers are 
configured with a mix of shell scripts using crudini and other shell commands. 
Is that the way to configure the containers? And is a deployment tool like 
Ansible (or others) something that is planned to be used in the future?

Chmouel

Chmouel,

I am not really sure what the best solution to configure the containers is.  It 
is clear to me the current shell scripts are fragile in nature and do not handle 
container restart properly.  The idea of using Puppet or Ansible as a CM tool 
has been discussed with no resolution.  At the moment, I'm satisfied with a 
somewhat hacky solution if we can get the containers operational.

Regards,
-steve




[1] from https://github.com/jlabocki/superhappyfunshow/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] tripleo-puppet-elements is alive!

2014-10-01 Thread Emilien Macchi
{Puppet-OpenStack,TripleO} developpers,

For your information
https://github.com/openstack/tripleo-puppet-elements is now alive.
Puppet Group has been assigned core on this project since most of
contribution will be Puppet code. Good news! Dan Prince is both TripleO
 Puppet core member, so he will participate to the review process with
both hats. By the way, if this is an issue for someone, we can create a
new group and change the permissions.

I think the next step is to prepare blueprints/specs in
https://github.com/openstack/tripleo-specs and decide how it goes.

For now, some thoughts have already been written here:
https://etherpad.openstack.org/p/ci7jWq2lRb
This etherpad was built for brainstorming only.

The idea is to bring a new topic for the next summit where we could
discuss together about design:
https://etherpad.openstack.org/p/kilo-tripleo-summit-topics

Maybe we could arrange a first meeting with people interested by
contributing to this, and then decide where we should start.
What do you think?

Thanks for reading so far.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-01 Thread Sean Dague
On 10/01/2014 04:46 PM, Ian Cordasco wrote:
 On 10/1/14, 11:53 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 


 On Wednesday, October 1, 2014, Sean Dague s...@dague.net wrote:

 As stable branches got discussed recently, I'm kind of curious who is
 actually stepping up to make icehouse able to pass tests in any real
 way. Because right now I've been trying to fix devstack icehouse so that
 icehouse requirements can be unblocked (and to land code that will
 reduce grenade failures)

 I'm on retry #7 of modifying the tox.ini file in devstack.

 During the last summit people said they wanted to support icehouse for
 15 months. Right now we're at 6 months and the tree is basically unable
 to merge code.

 So who is actually standing up to fix these things, or are we going to
 just leave it broken and shoot icehouse in the head early?

-Sean

 --
 Sean Dague
 http://dague.net



 We should stick with the longer support for Icehouse in my opinion. I'll
 happily volunteer time to help get it back into shape.


 The other question is will Juno *also* have extended stable support? Or
 is it more of an LTS style thing (I'm not a huge fan of the LTS model,
 but it is easier in some regards). If every release is getting extended
 support, we may need to look at our tool
 chains so we can better support the releases.


 Cheers,
 Morgan 


 Sent via mobile  
 
 Would every release need to be LTS or would every other release (or every
 4th) be LTS? We could consider a policy like Ubuntu’s (e.g., 10.04,
 12.04, 14.04 are all LTS and the next will be 16.04).

Before thinking about LTS policy we should actually think about having a
tree that you can land code in... because today, you can't with icehouse.

https://review.openstack.org/#/c/125075/ is on recheck #7 - still failing.

Note, this is *after* we turned off the 2 highest failing tests on
icehouse as well to alleviate the issue.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] formally distinguish server desired state from actual state?

2014-10-01 Thread Johannes Erdfelt
On Wed, Oct 01, 2014, Chris Friesen chris.frie...@windriver.com wrote:
 Currently in nova we have the vm_state, which according to the
 code comments is supposed to represent a VM's current stable (not
 transition) state, or what the customer expect the VM to be.
 
 However, we then added in an ERROR state.  How does this possibly
 make sense given the above definition?  Which customer would ever
 expect the VM to be in an error state?
 
 Given this, I wonder whether it might make sense to formally
 distinguish between the expected/desired state (i.e. the state that
 the customer wants the VM to be in), and the actual state (i.e. the
 state that nova thinks the VM is in).
 
 This would more easily allow for recovery actions, since if the
 actual state changes to ERROR (or similar) we would still have the
 expected/desired state available for reference when trying to take
 recovery actions.
 
 Thoughts?

I'm happy you brought this up because I've had a similar proposal
bouncing around in the back of my head lately.

ERROR is a pet peeve of mine because it doesn't tell you the operational
state of the instance. It may be running or it may not be running. It
also ends up complicating logic quite a bit (we have a very ugly patch
to allow users to revert resizes in ERROR).

Also, in a few places we have to store vm_state off into instance
metadata (key 'old_vm_state') so it can be restored to the correct state
(for things like RESCUED). This is fairly ugly.

I've wanted to sit down and work through all of the different vm_state
transitions and figure out to make it all less confusing. I just haven't
had the time to do it yet :(

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cross-project-work] What about adding cross-project-spec repo?

2014-10-01 Thread Joe Gordon
On Mon, Sep 29, 2014 at 11:58 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Sep 29, 2014, at 5:51 AM, Thierry Carrez thie...@openstack.org wrote:

  Boris Pavlovic wrote:
  it goes without saying that working on cross-project stuff in OpenStack
  is quite hard task.
 
  Because it's always hard to align something between a lot of people from
  different projects. And when a topic starts getting too HOT, the discussion
  goes in the wrong direction and the attempt to make a cross-project change
  fails; as a result, a maybe not *ideal* but *good enough* change in OpenStack
  will be abandoned.
 
  Another issue that we have is specs. Projects ask you to make a spec for a
  change in their project, and in the case of cross-project stuff you need to
  make N similar specs (one for every project). That is really hard to manage,
  and as a result you have N different specs describing similar stuff.
 
  To make this process more formal, clear and simple, let's reuse the specs
  process but do it in one repo, /openstack/cross-project-specs.

  It means that every cross-project topic (unification of python clients,
  unification of logging, profiling, debugging API, and a bunch of others) will
  be discussed in one single place.
 
  I think it's a good idea, as long as we truly limit it to cross-project
  specs, that is, to concepts that may apply to every project. The
  examples you mention are good ones. As a counterexample, if we have to
  sketch a plan to solve communication between Nova and Neutron, I don't
  think it would belong to that repository (it should live in whatever
  project would have the most work to do).
 
  Process description of cross-project-specs:
 
   * PTL - person that manages the core team members list and puts workflow +1
 on accepted specs
   * Every project has 1 core position (stackforge projects are included)
   * Cores are chosen by the project team; their task is to advocate the
 project team's opinion
   * No more vetoes or -2 votes
   * If 75% of cores +1 a spec, it's accepted. It means that all projects have
 to accept this change.
   * Accepted specs get high priority blueprints in all projects
 
  So I'm not sure you can force all projects to accept the change.
  Ideally, projects should see the benefits of alignment and adopt the
  common spec. In our recent discussions we are also going towards more
  freedom to projects, rather than less : imposing common specs to
  stackforge projects sounds like a step backwards there.
 
  Finally, I see some overlap with Oslo, which generally ends up
  implementing most of the common policy into libraries it encourages
  usage of. Therefore I'm not sure having a cross-project PTL makes
  sense, as he would be stuck between the Oslo PTL and the Technical
  Committee.

 There is some overlap with Oslo, and we would want to be involved in the
 discussions — especially if the plan includes any code to land in an Oslo
 library. I have so far been resisting the idea that oslo-specs is the best
 home for this, mostly because I didn’t want us to assume everything related
 to cross-project work is also related to Oslo work.

 That said, our approval process looks for consensus among all of the
 participants on the review, in addition to Oslo cores, so we can use
 oslo-specs and continue incorporating the +1/-1 votes from everyone. One of
 the key challenges we’ve had is signaling buy-in for cross-project work so
 having some sort of broader review process would be good, especially to
 help ensure that all interested parties have a chance to participate in the
 review.

 OTOH, a special repo with different voting permission settings also makes
 sense. I don’t have any good suggestions for who would decide when the
 voting on a proposal had reached consensus, or what to do if no consensus
 emerges. Having the TC manage that seems logical, but impractical. Maybe a
 person designated by the TC would oversee it?


Here is a governance patch to propose a openstack-specs repo:
https://review.openstack.org/125509



 
  With such simple rules we will simplify cross project work:
 
  1) Fair rules for all projects, as every project has 1 core that has 1
  vote.
 
  A project is hardly a metric for fairness. Some projects are 50 times
  bigger than others. What is a project in your mind ? A code repository
  ? Or more like a program (a collection of code repositories being worked
  on by the same team ?)
 
  So in summary, yes we need a place to discuss truly cross-project specs,
  but I think it can't force decisions to all projects (especially
  stackforge ones), and it can live within a larger-scope Oslo effort
  and/or the Technical Committee.
 
  --
  Thierry Carrez (ttx)
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-01 Thread joehuang
Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is: will the Alliance 
provide/retain the current north-bound OpenStack API? It's very important that 
the cloud still exposes the OpenStack API so that the OpenStack API ecosystem 
will not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have a f2f workshop before the formal Paris design summit, so that we 
can exchange ideas fully? A 40-minute design summit session is not enough for a 
deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Chaoyi,

Thanks for sharing these information.

Some time back I started a project called “Alliance” which is trying to address 
the same concerns (see the link below). The Alliance service is designed to 
provide Inter-Cloud Resource Federation, which will enable resource sharing 
across clouds in distributed multi-site OpenStack deployments. This service will 
run on top of an OpenStack cloud and fabricate different cloud (or data center) 
instances in a distributed cloud setup. The service will work closely with 
OpenStack components (Keystone, Nova, Cinder) to manage and provision 
different resources (tokens, VMs, images, networks, etc.). The Alliance service 
will provide an abstraction to hide interoperability and integration 
complexities from the underpinning cloud instances and enable the following 
business use cases.

- Multi Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and 
VPC-like use cases; the conceptual design can be found at 
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are working 
on a POC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with a common 
solution; it seems we both are trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello,  Alex,

Thank you very much for your mail about the remote clustered hypervisor.

One of the inspirations for OpenStack cascading comes from remote clustered 
hypervisors like vCenter. The difference between a remote clustered hypervisor 
and OpenStack cascading is that not only Nova is involved in the cascading, but 
also Cinder, Neutron, Ceilometer, and even Glance (optionally).

Please refer to 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration,
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for 
more detail information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor driver' 
which started at Juno design summit [1]. Talking to another OpenStack should be 
similar to talking to vCenter. The idea was that the Cells support could be 
refactored around this notion as well.
Not sure whether there has been any active progress on this in Juno, though.

Regards,
Alex


[1] 
http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c
[2] https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From: joehuang joehu...@huawei.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: 30/09/2014 04:08 PM
Subject: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading




Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances (as 
different zones), rather than a single monolithic OpenStack instance, for these 
reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularized from 00's up to million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud. 

Re: [openstack-dev] [nova] formally distinguish server desired state from actual state?

2014-10-01 Thread Jay Pipes

On 10/01/2014 04:31 PM, Chris Friesen wrote:

On 10/01/2014 01:23 PM, Jay Pipes wrote:

On 10/01/2014 03:07 PM, Chris Friesen wrote:

Currently in nova we have the vm_state, which according to the code
comments is supposed to represent a VM's current stable (not
transition) state, or what the customer expect the VM to be.

However, we then added in an ERROR state.  How does this possibly make
sense given the above definition?


Where do you see that vm_state is intended to be what the customer
expects the VM to be?


 From nova/compute/vm_states.py:
'vm_state describes a VM's current stable (not transition) state. That
is, if there is no ongoing compute API calls (running tasks), vm_state
should reflect what the customer expect the VM to be.'


Hmm, interesting wording. I wasn't aware of that wiki page and I'm not 
sure about the freshness of it, but I think what the language is saying 
is that if a user isn't actively running an action against the server, 
it should be in the last state a user put it in -- i.e. active, 
terminated, stopped, paused, etc.



Also, from http://wiki.openstack.org/VMState:
'vm_state reflects the stable state based on API calls, matching user
expectation, revised “top-down” within API implementation.'


Yeah, also awkward wording...


Now granted, the wiki also says 'If the task fails and is not possible
to rollback, the vm_state is set to ERROR.'  I don't particularly like
that behaviour, which is why I'd like to see a separate actual state.


If we had a task-based system, which is what I am advocating for, you 
would have a *task* (action) set to an ERROR state, not the VM itself. 
Which is what I was getting at... the task's history could tell the user 
what failed about the action, but the state of a VM could continue to 
be, for example, ACTIVE (or STOPPED or whatever). In the oscomputevnext 
proposal, I have an ERROR state for virt_state, but you are correct that 
it doesn't make sense to have one there if you have the history of the 
failure of an action in the task item history and ERROR isn't really a 
state of the virtual machine at all, just an operation against one.



I don't think this is all that useful. I think what would be more useful
is changing the Nova API to perform actions against an instance using a
POST /servers/{server_id}/tasks call, allow a user to see a history of
what actions were taken against an instance with a call to GET
/servers/{server_id}/tasks and allow a user to see the progress of a
particular task (say, a rebuild) by calling GET /tasks/{task_id}/items.


Yep, I like that idea.  But I think it's orthogonal to the issue of
desired vs actual state.  When you start a task it could change the
desired state, and when the task completes the actual state should
match the expected state.


Not sure it's necessary to have a desired state on the instance (since 
that could be derived from the task history), but I see your point about 
it being orthogonal.
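
Something along these lines is what I have in mind (purely illustrative, not a
real nova structure):

    # The state the user was asking for can be derived from the action of the
    # most recent task, whether or not that task succeeded.
    DESIRED_STATE_FOR_ACTION = {
        'create': 'ACTIVE',
        'start': 'ACTIVE',
        'reboot': 'ACTIVE',
        'rebuild': 'ACTIVE',
        'stop': 'STOPPED',
        'pause': 'PAUSED',
        'delete': 'DELETED',
    }

    def desired_state(task_history):
        if not task_history:
            return None
        return DESIRED_STATE_FOR_ACTION.get(task_history[-1]['action'])

    # desired_state([{'action': 'stop', 'result': 'error'}]) -> 'STOPPED'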



I proposed as much here:

http://docs.oscomputevnext.apiary.io/#servertask


Just curious, where is the equivalent of evacuate?


Evacuate is an operator API and IMO is not appropriate to live in the 
same API as the one used by regular users of a compute service.


Put another way: you don't see an evacuate host API in the EC2 API, do 
you? I guarantee there *is* such an API, but it's not in the public 
REST-ish EC2 API that you and I use and may not even be an HTTP API at all.


I talk a little bit more about my opinion on having operator API calls 
mixed into the same compute control API in my notes on GitHub here, if 
you're interested:


https://github.com/jaypipes/openstack-compute-api#operator-api-calls


http://docs.oscomputevnext.apiary.io/#servertaskitem


This would more easily allow for recovery actions, since if the actual
state changes to ERROR (or similar) we would still have the
expected/desired state available for reference when trying to take
recovery actions.


Where would the expected/desired state be stored?  Or is it implicit in
the most recent task attempted for the instance in question?


Right, exactly, it would be implicit in the most recent task attempted.


I think a task-based API and internal system that uses taskflow to
organize related tasks with state machine changes is the best design to
work towards.


I think something like this would certainly be an improvement over what
we have now. That said, I don't see that as mutually exclusive with an
explicit distinction between desired and actual state.  I think having
nova list or the dashboard equivalent show both states would be useful.


Sure, fair point.
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] minimum python support version for juno

2014-10-01 Thread Osanai, Hisashi

Thank you for the quick responses.

 On 10/1/2014 4:24 AM, Ihar Hrachyshka wrote:
  All stable Juno releases will support Python 2.6. All Kilo releases
  are expected to drop Python 2.6 support.

On Wednesday, October 01, 2014 11:28 PM, Matt Riedemann wrote:
 Right, and backports could be interesting...but we have to move on at
 some point.

Yeah, I have concerns about what happens if we run into trouble caused by a 
bug in Python 2.6.

But I understand our direction.

Thanks again!
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-01 Thread Michael Still
I agree with Sean here.

The original idea was that these stable branches would be maintained by the
distros, and that is clearly not happening if you look at the code review
latency there. We need to sort that out before we even consider supporting
a release for more than the one year we currently do.

Michael

On Thu, Oct 2, 2014 at 7:14 AM, Sean Dague s...@dague.net wrote:

 On 10/01/2014 04:46 PM, Ian Cordasco wrote:
  On 10/1/14, 11:53 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:
 
 
 
  On Wednesday, October 1, 2014, Sean Dague s...@dague.net wrote:
 
  As stable branches got discussed recently, I'm kind of curious who is
  actually stepping up to make icehouse able to pass tests in any real
  way. Because right now I've been trying to fix devstack icehouse so that
  icehouse requirements can be unblocked (and to land code that will
  reduce grenade failures)
 
  I'm on retry #7 of modifying the tox.ini file in devstack.
 
  During the last summit people said they wanted to support icehouse for
  15 months. Right now we're at 6 months and the tree is basically unable
  to merge code.
 
  So who is actually standing up to fix these things, or are we going to
  just leave it broken and shoot icehouse in the head early?
 
 -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
 
  We should stick with the longer support for Icehouse in my opinion. I'll
  happily volunteer time to help get it back into shape.
 
 
  The other question is will Juno *also* have extended stable support? Or
  is it more of an LTS style thing (I'm not a huge fan of the LTS model,
  but it is easier in some regards). If every release is getting extended
  support, we may need to look at our tool
  chains so we can better support the releases.
 
 
  Cheers,
  Morgan
 
 
  Sent via mobile
 
  Would ever release need to be LTS or would every other release (or every
  4th) release be LTS? We could consider a policy like Ubuntu’s (e.g.,
 10.04
  12.04, 14.04 are all LTS and the next will be 16.04).

 Before thinking about LTS policy we should actually think about having a
 tree that you can land code in... because today, you can't with icehouse.

 https://review.openstack.org/#/c/125075/ is on recheck #7 - still failing.

 Note, this is *after* we turned off the 2 highest failing tests on
 icehouse as well to alleviate the issue.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Sideways grenade job to test Nova Network to Neutron migration

2014-10-01 Thread Michael Still
Thanks for doing this. My recollection is that we still need some features
landed in Neutron before this work can complete, but it's possible I am
confused.

A public status update on that from the Neutron team would be good.

Michael

On Thu, Oct 2, 2014 at 6:18 AM, Clark Boylan cboy...@sapwetik.org wrote:

 Hello,

 One of the requirements placed on Ironic was that they must have a path
 from Nova Baremetal to Ironic and that path should be tested. This
 resulted in a sideways grenade job which instead of going from one
 release of OpenStack to another, swaps out components within a release.
 In this case the current Juno release.

 When throwing this together for Ironic I went ahead and put a skeleton
 job, check-grenade-dsvm-neutron-sideways, in place for testing a Nova
 Network to Neutron sideways upgrade. This job is in the experimental
 queues for Neutron, grenade, and devstack-gate at the moment and does
 not pass. While it may be too late to focus on this for Juno it would be
 great if Neutron and Nova could make this test pass early in the Kilo
 cycle as a clear Nova Network to Neutron process is often asked for.

 Random choice of current job result can be found at

 http://logs.openstack.org/29/123629/1/experimental/check-grenade-dsvm-neutron-sideways/fb45df6/

 The way this job works is it sets up an old and new pair of master
 based cloud configs. The old side is configured to use Nova Network
 and the new side is configured to use Neutron. Grenade then fires up
 the old cloud, adds some things to it, runs some tests, shuts it down,
 upgrades, then checks that things still work in the new cloud. My
 best guess is that most of the work here will need to be done in the
 upgrade section where we teach Grenade (and consequently everyone
 else) how to make this transition.

 Thanks,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help with EC2 Driver functionality using boto ...

2014-10-01 Thread Vishvananda Ishaya
It is hard to tell if this is a bug or a misconfiguration from your 
description. The failure likely generated some kind of error message in nova or 
glance. If you can track down an error message and a traceback it would be worth 
submitting as a bug report to the appropriate project.

Vish

On Oct 1, 2014, at 11:13 AM, Aparna S Parikh apa...@thoughtworks.com wrote:

 Hi,
 We are currently working on writing a driver for Amazon's EC2 using the boto 
 libraries, and are hung up on creating a snapshot of an instance. The 
 instance remains in 'Queued' status on Openstack instead of becoming  
 'Active'. The actual EC2 snapshot that gets created is in 'available' status. 
 
 We are essentially calling create_image() from the boto/ec2/instance.py when 
 snapshot of an instance is being called.
 
 Any help in figuring this out would be greatly appreciated.
 
 Thanks,
 
 Aparna
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-01 Thread Vishvananda Ishaya

On Oct 1, 2014, at 2:05 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Has anyone figured out a way of having a floating ip like feature with docker 
 so that you can have rabbitmq, mysql, or ceph mon's at fixed ip's and be able 
 to migrate them around from physical host to physical host and still have 
 them at fixed locations that you can easily put in static config files?

There are[1] many[2] ways[3] to do this, but in general I don’t think they pass 
the “too much magic” sniff test. I think the standard docker approach of 
passing in the necessary ips via environment variables is probably the most 
user friendly option. Containers are light-weight enough to restart if the data 
changes.

[1] https://github.com/coreos/flannel
[2] https://github.com/vishvananda/wormhole
[3] https://github.com/openshift/geard/blob/master/docs/linking.md

Vish

 Maybe iptables rules? Maybe adding another bridge? Maybe just disabling the 
 docker network stack all together and binding the service to a fixed, static 
 address on the host?
 
 Also, I ran across: 
 http://jperrin.github.io/centos/2014/09/25/centos-docker-and-systemd/ and it 
 does seem to work. I was able to get openssh-server and keystone to work in 
 the same container without needing to write custom start/stop scripts. This 
 kind of setup would make a nova compute container much, much easier.
 
 Thanks,
 Kevin
 From: Steven Dake [sd...@redhat.com]
 Sent: Wednesday, October 01, 2014 8:04 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] Kolla Blueprints
 
 On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:
 
 On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake sd...@redhat.com wrote:
 
 I've done a first round of prioritization.  I think key things we need 
 people to step up for are nova and rabbitmq containers.
 
 For the developers, please take a moment to pick a specific blueprint to 
 work on.  If you're already working on something, this should help to prevent 
 duplicate work :)
 
 
 As I understand it, in the current implementation [1] the containers are 
 configured with a mix of shell scripts using crudini and other shell commands. 
 Is that the way to configure the containers? And is a deployment tool like 
 Ansible (or others) something that is planned to be used in the future?
 
 Chmouel
 
 Chmouel,
 
 I am not really sure what the best solution to configure the containers is.
 It is clear to me the current shell scripts are fragile in nature and do not 
 handle container restart properly.  The idea of using Puppet or Ansible as a 
 CM tool has been discussed with no resolution.  At the moment, I'm satisfied 
 with a somewhat hacky solution if we can get the containers operational.
 
 Regards,
 -steve
 
 
 
 
 [1] from https://github.com/jlabocki/superhappyfunshow/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-01 Thread Clint Byrum
Recently we've been testing image based updates using TripleO, and we've
run into an interesting conundrum.

Currently, our image build scripts create a user per service for the
image. We don't, at this time, assert a UID, so it could get any UID in
the /etc/passwd database of the image.

However, if we add a service that happens to have its users created
before a previously existing service, the UID's shift by one. When
this new image is deployed, the username might be 'ceilometer', but
/mnt/state/var/lib/ceilometer is now owned by 'cinder'.

Here are 3 approaches, which are not mutually exclusive to one another.
There are likely others, and I'd be interested in hearing your ideas.

* Static UID's for all state-preserving services. Basically we'd just
  allocate these UID's from a static pool and those are always the UIDs
  no matter what. This is the simplest solution, but does not help
  anybody who is already looking to update a TripleO cloud. Also, this
  would cause problems if TripleO wants to merge with any existing
  system that might also want to use similar UID's. This also provides
  no guard against non-static UID's storing things on the state
  partition.

* Fix the UID's on image update. We can backup /etc/passwd and
  /etc/group to /mnt/state, and on bootup we can diff the two, and any
  UIDs that changed can be migrated. This could be very costly if the
  swift storage UID changed, with millions of files present on the
  system. This merge process is also not atomic and may not be
  reversible, so it is a bit scary to automate this.

* Assert ownership when registering state path. We could have any
  state-preserving elements register their desire for any important
  globs for the state drive to be owned by a particular symbolic
  username. This is just a different, more manual way to fix the UID's
  and carries the same cons.

So, what do people think?
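
For the second option, the migration step I have in mind would be roughly
along these lines (untested sketch; group migration, error handling and the
atomicity problem are all left out, and the backup path is just a placeholder):

    import os

    def load_passwd(path):
        # Return {username: uid} from a passwd-format file.
        users = {}
        with open(path) as f:
            for line in f:
                fields = line.strip().split(':')
                if len(fields) >= 3:
                    users[fields[0]] = int(fields[2])
        return users

    old = load_passwd('/mnt/state/etc/passwd.bak')   # saved from the old image
    new = load_passwd('/etc/passwd')                 # from the new image
    uid_map = {old[u]: new[u] for u in old if u in new and old[u] != new[u]}

    # Walk the state partition and remap ownership for any UID that moved.
    for root, dirs, files in os.walk('/mnt/state'):
        for name in dirs + files:
            path = os.path.join(root, name)
            st = os.lstat(path)
            if st.st_uid in uid_map:
                os.lchown(path, uid_map[st.st_uid], st.st_gid)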

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-10-01 Thread Angus Lees
On Wed, 1 Oct 2014 09:05:23 PM Fox, Kevin M wrote:
 Has anyone figured out a way of having a floating ip like feature with
 docker so that you can have rabbitmq, mysql, or ceph mon's at fixed ip's
 and be able to migrate them around from physical host to physical host and
 still have them at fixed locations that you can easily put in static config
 files?

This is part of the additional functionality kubernetes adds on top of docker.

kubernetes uses a proxy on every host which knows about all the published 
services.  The services share a port space (ie: every service has to have a 
unique port assigned), and the proxies know where to forward requests to find 
one of the backends for that service.

docker communicates parameters via environment variables and has a few 
standard environment variables that are used for links to other containers.  
Kubernetes also uses these link env variables but points them at the proxies 
instead of directly to the other containers.  Since oslo.config can't look up 
environment variables directly (that's something I'd like to add), I have a 
simple shell one-liner that expands environment variables in the relevant 
config files before starting the openstack server.

As a concrete example: I configure a keystone service in my kubernetes config 
and in my static config files I use values like:

   identity_uri = http://$ENV[KEYSTONE_PORT_5000_TCP_ADDR]:
$ENV[KEYSTONE_PORT_5000_TCP_PORT]/v2.0

docker/kubernetes sets those env variables to refer to the proxy on the local 
host and the port number from my service config - this information is static 
for the lifetime of that docker instance.  The proxy will reroute the requests 
dynamically to wherever the actual instances are running right now.
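
For what it's worth, the expansion step I mentioned is trivial; here it is
sketched in Python rather than shell (the $ENV[...] placeholder syntax
matches the example above, the script itself is only illustrative, not the
actual one-liner):

    # Sketch: expand $ENV[VAR_NAME] placeholders in a config file from the
    # process environment before starting the service.
    import os
    import re
    import sys

    def expand_env(text):
        return re.sub(r'\$ENV\[([A-Za-z0-9_]+)\]',
                      lambda m: os.environ.get(m.group(1), ''),
                      text)

    path = sys.argv[1]  # e.g. /etc/keystone/keystone.conf
    with open(path) as f:
        contents = f.read()
    with open(path, 'w') as f:
        f.write(expand_env(contents))

This runs once at container start, before exec'ing the service.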

I hope that's enough detail - I encourage you to read the kubernetes docs 
since they have diagrams, etc that will make it much clearer than the above.

 - Gus

 Maybe iptables rules? Maybe adding another bridge? Maybe just disabling the
 docker network stack all together and binding the service to a fixed,
 static address on the host?
 
 Also, I ran across:
 http://jperrin.github.io/centos/2014/09/25/centos-docker-and-systemd/ and
 it does seem to work. I was able to get openssh-server and keystone to work
 in the same container without needing to write custom start/stop scripts.
 This kind of setup would make a nova compute container much, much easier.
 
 Thanks,
 Kevin
 
 From: Steven Dake [sd...@redhat.com]
 Sent: Wednesday, October 01, 2014 8:04 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [kolla] Kolla Blueprints
 
 On 09/30/2014 09:55 AM, Chmouel Boudjnah wrote:
 
 On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake
 sd...@redhat.com wrote:
 
 I've done a first round of prioritization.  I think key things we need
 people to step up for are nova and rabbitmq containers.
 
 For the developers, please take a moment to pick a specific blueprint to
 work on.  If you're already working on something, this should help to prevent
 duplicate work :)
 
 
 As I understand it, in the current implementation[1] the containers are
 configured with a mix of shell scripts using crudini and other shell
 commands. Is that the intended way to configure the containers? And is a
 deployment tool like Ansible (or others) planned to be used in the future?
 
 Chmouel
 
 Chmouel,
 
 I am not really sure what the best solution is for configuring the containers.
 It is clear to me the current shell scripts are fragile in nature and do not
 handle container restarts properly.  The idea of using Puppet or Ansible as
 a CM tool has been discussed with no resolution.  At the moment, I'm
 satisfied with a somewhat hacky solution if we can get the containers
 operational.
 
 Regards,
 -steve
 
 
 
 
 [1] from https://github.com/jlabocki/superhappyfunshow/
 
 
 

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-01 Thread Gregory Haynes
Excerpts from Clint Byrum's message of 2014-10-02 01:50:33 +:
 Recently we've been testing image based updates using TripleO, and we've
 run into an interesting conundrum.
 
 Currently, our image build scripts create a user per service for the
 image. We don't, at this time, assert a UID, so it could get any UID in
 the /etc/passwd database of the image.
 
 However, if we add a service that happens to have its users created
 before a previously existing service, the UID's shift by one. When
 this new image is deployed, the username might be 'ceilometer', but
 /mnt/state/var/lib/ceilometer is now owned by 'cinder'.

Wow, nice find!

 
 Here are 3 approaches, which are not mutually exclusive to one another.
 There are likely others, and I'd be interested in hearing your ideas.
 
 * Static UID's for all state-preserving services. Basically we'd just
   allocate these UID's from a static pool and those are always the UIDs
   no matter what. This is the simplest solution, but does not help
   anybody who is already looking to update a TripleO cloud. Also, this
   would cause problems if TripleO wants to merge with any existing
   system that might also want to use similar UID's. This also provides
   no guard against non-static UID's storing things on the state
   partition.

+1 for this approach for the reasons mentioned.

 
 * Fix the UID's on image update. We can backup /etc/passwd and
   /etc/group to /mnt/state, and on bootup we can diff the two, and any
   UIDs that changed can be migrated. This could be very costly if the
   swift storage UID changed, with millions of files present on the
   system. This merge process is also not atomic and may not be
   reversible, so it is a bit scary to automate this.

If we really want to go with this type of approach we could also just
copy the existing /etc/passwd into the image that's being built. Then
when users are added they should be added in after the existing users.

I still prefer the first solution, though.

 
 * Assert ownership when registering state path. We could have any
   state-preserving elements register their desire for any important
   globs for the state drive to be owned by a particular symbolic
   username. This is just a different, more manual way to fix the UID's
   and carries the same cons.
 
 So, what do people think?
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][TripleO] Last days to elect your PTL!

2014-10-01 Thread Tristan Cacqueray
Hello Cinder and TripleO contributors,

Just a quick reminder that elections are closing soon. If you haven't
already, use your right to vote and pick your favourite candidate!

Thanks for your time!
Tristan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-01 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 10/01/2014 09:50:33 PM:

 Recently we've been testing image based updates using TripleO, and we've
 run into an interesting conundrum.
 
 Currently, our image build scripts create a user per service for the
 image. We don't, at this time, assert a UID, so it could get any UID in
 the /etc/passwd database of the image.
 
 However, if we add a service that happens to have its users created
 before a previously existing service, the UID's shift by one. When
 this new image is deployed, the username might be 'ceilometer', but
 /mnt/state/var/lib/ceilometer is now owned by 'cinder'.

I do not understand the problem statement. Unfortunately, I am not 
familiar with image-based updates using TripleO.  What is updating what? 
If the UIDs are not asserted, what UIDs shift by one?  Is this because 
some files keep their owner UID while some UID-to-name binding in /etc/passwd 
changes? Or the other way around?  Why would there be a change in either?

If there is no short answer, don't worry about it.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-01 Thread Steve Kowalik
On 02/10/14 12:26, Mike Spreitzer wrote:
 I do not understand the problem statement. Unfortunately, I am not
 familiar with image based updates using TripleO.  What is updating what?
  If the UIDs are not asserted, what UIDs shift by one?  Is this because
 some files keep owner UID while the some UID=name binding in /etc/passwd
 changes? Or the other way around?  Why would there be a change in either?
 
 If there is no short answer, don't worry about it.

You build one image, which creates a 'ceilometer' user with UID 1001,
and deploy that to your cloud.

You then build a new image to deploy to your cloud, and due to element
ordering (or something else), the 'cinder' user now has UID 1001. When
you switch to this image, /mnt/state/var/lib/ceilometer is now owned by
cinder.

Cheers,
-- 
Steve
If it (dieting) was like a real time strategy game, I'd have loaded a
save game from ten years ago.
 - Greg, Columbia Internet

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] [ironic] New API format for extra_dhcp_opts

2014-10-01 Thread Carlino, Chuck
As a 'heads up', adding ironic to the thread since they are a 'key' consumer of 
this api.


On Oct 1, 2014, at 3:15 AM, Xu Han Peng 
pengxu...@gmail.com wrote:

ip_version sounds great.

Currently the opt-names are written into the configuration file of dnsmasq 
directly. So I would say yes, they are coming from dnsmasq definitions.

It makes sense that when ip_version is missing or null, the option applies to 
both, since the port could have only an IPv6 or only an IPv4 address. However, 
the validation of opt_value should rule out values which are not suitable for 
the address family in question. For example, an IPv6 DNS server should not be 
specified for an IPv4-only port, etc...
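
For illustration, a port-create request body with the proposed key might then 
look like the following sketch (the exact key name and defaults are still what 
we are settling in this thread):

    # Sketch of a port-create body using the proposed per-option key.
    # An entry without "ip_version" (or with it set to null) would apply
    # to both address families.
    body = {
        "port": {
            "extra_dhcp_opts": [
                {"opt_name": "bootfile-name", "opt_value": "testfile.1"},
                {"opt_name": "tftp-server", "opt_value": "123.123.123.123",
                 "ip_version": 4},
                {"opt_name": "dns-server",
                 "opt_value": "[2001:0200:feed:7ac0::1]", "ip_version": 6},
            ]
        }
    }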

Xu Han

On 09/30/2014 08:41 PM, Robert Li (baoli) wrote:
Xu Han,

That looks good to me. To keep it consistent with existing CLI, we should use 
ip-version instead of ‘version’. It seems to be identical to prefixing the 
option_name with v4 or v6, though.

Just to clarify, are the available opt-names coming from dnsmasq definitions?

With regard to the default, your suggestion "version is optional (no version 
means version=4)" seems to be different from Mark's:
I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

Thanks,
Robert

On 9/30/14, 1:46 AM, Xu Han Peng 
pengxu...@gmail.com wrote:

Robert,

I think the CLI will look something like based on Mark's suggestion:

neutron port-create extra_dhcp_opts 
opt_name=dhcp_option_name,opt_value=value,version=4(or 6) network

This extra_dhcp_opts can be repeated and version is optional (no version means 
version=4).

Xu Han

On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:
Hi Xu Han,

My question is how the CLI user interface would look like to distinguish 
between v4 and v6 dhcp options?

Thanks,
Robert

On 9/28/14, 10:29 PM, Xu Han Peng 
pengxu...@gmail.com wrote:

Mark's suggestion works for me as well. If no one objects, I am going to start 
the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:

On Sep 26, 2014, at 2:39 AM, Xu Han Peng 
pengxu...@gmail.com wrote:

Currently the extra_dhcp_opts has the following API interface on a port:

{
    "port": {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of the DHCPv6 function for IPv6 subnets, we found this 
format doesn't work anymore because a port can have both IPv4 and IPv6 
addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4 
and DHCPv6, respectively. ( https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
compatibility, no prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
    {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
    {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
    {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
compatibility, both old format and new format are acceptable, but old format 
means IPv4 dhcp opts.

"extra_dhcp_opts": {
    "ipv4": [
        {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
        {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
    ],
    "ipv6": [
        {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
    ]
}

The pro of Option1 is that there is no need to change the API structure; we 
only need to add validation and parsing of opt_name. The con of Option1 is that 
the user needs to input a prefix for every opt_name, which can be error-prone. 
The pro of Option2 is that it's clearer than Option1. The con is that we need 
to check two formats for backward compatibility.

We discussed this in the IPv6 sub-team meeting and we think Option2 is 
preferred. Can I also get the community's feedback on which one is preferred, 
or any other comments?


I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-01 Thread James Polley
All three of the options presented here seem to assume that UIDs will always be 
allocated at image-build time. I think that's because most of these UIDs will 
be used to write files into the chroot at image-create time - if I could think 
of some way around that, I think we could avoid this problem more neatly by not 
assigning the UIDs until first boot.

But since we can't do that, would it be possible to compromise by having the 
UIDs read in from heat metadata, and using the current allocation process if 
none is provided?

This should allow people who prefer to have static UIDs to have simple drop-in 
config, but also allow people who want to dynamically read from existing images 
to scrape the details and then drop them in.

To aid people who have existing images, perhaps we could provide a small tool 
(if one doesn't already exist) that simply reads /etc/passwd and returns a JSON 
username:uid map, to be added into the heat local environment when building the 
next image?
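
Something along these lines would probably be enough (a rough sketch; the 
output format and how it ends up in the heat local environment are 
assumptions):

    # Sketch: print a username -> uid JSON map from an image's /etc/passwd,
    # for dropping into the heat local environment before the next build.
    import json
    import sys

    def passwd_to_map(path):
        uids = {}
        with open(path) as f:
            for line in f:
                fields = line.strip().split(':')
                if len(fields) >= 3:
                    uids[fields[0]] = int(fields[2])
        return uids

    if __name__ == '__main__':
        path = sys.argv[1] if len(sys.argv) > 1 else '/etc/passwd'
        print(json.dumps(passwd_to_map(path), indent=2, sort_keys=True))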

 On 2 Oct 2014, at 11:50, Clint Byrum cl...@fewbar.com wrote:
 
 Recently we've been testing image based updates using TripleO, and we've
 run into an interesting conundrum.
 
 Currently, our image build scripts create a user per service for the
 image. We don't, at this time, assert a UID, so it could get any UID in
 the /etc/passwd database of the image.
 
 However, if we add a service that happens to have its users created
 before a previously existing service, the UID's shift by one. When
 this new image is deployed, the username might be 'ceilometer', but
 /mnt/state/var/lib/ceilometer is now owned by 'cinder'.
 
 Here are 3 approaches, which are not mutually exclusive to one another.
 There are likely others, and I'd be interested in hearing your ideas.
 
 * Static UID's for all state-preserving services. Basically we'd just
  allocate these UID's from a static pool and those are always the UIDs
  no matter what. This is the simplest solution, but does not help
  anybody who is already looking to update a TripleO cloud. Also, this
  would cause problems if TripleO wants to merge with any existing
  system that might also want to use similar UID's. This also provides
  no guard against non-static UID's storing things on the state
  partition.
 
 * Fix the UID's on image update. We can backup /etc/passwd and
  /etc/group to /mnt/state, and on bootup we can diff the two, and any
  UIDs that changed can be migrated. This could be very costly if the
  swift storage UID changed, with millions of files present on the
  system. This merge process is also not atomic and may not be
  reversible, so it is a bit scary to automate this.
 
 * Assert ownership when registering state path. We could have any
  state-preserving elements register their desire for any important
  globs for the state drive to be owned by a particular symbolic
  username. This is just a different, more manual way to fix the UID's
  and carries the same cons.
 
 So, what do people think?
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev