Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Chmouel Boudjnah
On Wed, Oct 1, 2014 at 3:47 AM, Adam Young  wrote:

> 1.  Identify the roles for the APIs that Cinder is going to be calling on
> Swift, based on Swift's policy.json


FYI: there is no Swift policy.json in the mainline code; there is an external
middleware that provides one, available here:
https://github.com/cloudwatt/swiftpolicy.git.

Chmouel


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Noorul Islam K M
Adrian Otto  writes:

> Solum Core Reviewer Team,
>
> I propose the following change to our core reviewer group:
>
> -lifeless (Robert Collins) [inactive]
> +murali-allada (Murali Allada)
> +james-li (James Li)
>
> Please let me know your votes (+1, 0, or -1).
>

+1

Regards,
Noorul



Re: [openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Julien Vey
+1

2014-10-01 0:20 GMT+02:00 Angus Salkeld :

> +1
> On 01/10/2014 3:08 AM, "Adrian Otto"  wrote:
>
>> Solum Core Reviewer Team,
>>
>> I propose the following change to our core reviewer group:
>>
>> -lifeless (Robert Collins) [inactive]
>> +murali-allada (Murali Allada)
>> +james-li (James Li)
>>
>> Please let me know your votes (+1, 0, or -1).
>>
>> Thanks,
>>
>> Adrian
>>


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Alex Glikson
This sounds related to the discussion on the 'Nova clustered hypervisor
driver' which started at the Juno design summit [1][2]. Talking to another
OpenStack instance should be similar to talking to vCenter. The idea was
that the Cells support could be refactored around this notion as well.
Not sure whether there has been any active progress on this in Juno,
though.

Regards,
Alex


[1] 
http://junodesignsummit.sched.org/event/a0d38e1278182eb09f06e22457d94c0c#
[2] 
https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support




From: joehuang
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 30/09/2014 04:08 PM
Subject: [openstack-dev] [all] [tc] Multi-clouds integration by
OpenStack cascading



Hello, Dear TC and all, 

Large cloud operators prefer to deploy multiple OpenStack instances (as
different zones) rather than a single monolithic OpenStack instance, for
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server node counts scale modularly from the hundreds up to a million;
4) Fault and maintenance isolation between zones (only a REST interface
between them);

At the same time, they also want to integrate these OpenStack instances
into one cloud. Instead of a proprietary orchestration layer, they want to
use the standard OpenStack framework, keeping the northbound API compatible
with Heat/Horizon and other third-party ecosystem apps.

We call this pattern "OpenStack cascading"; the proposal is described in
[1][2], and PoC live demo videos can be found at [3][4].
 
Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
OpenStack cascading.

We kindly ask for a cross-program design summit session to discuss
OpenStack cascading and its contribution to Kilo.

We kindly invite those who are interested in OpenStack cascading to work
together and contribute it to OpenStack.

(I applied for the “other projects” track [5], but it would be better to
have the discussion as a formal cross-program session, because many core
programs are involved.)
 
 
[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: 
https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access
YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] 
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 
Best Regards
Chaoyi Huang ( Joe Huang )




Re: [openstack-dev] VPNaaS site to site connection down.

2014-09-30 Thread masoom alam
Hi Paul,

Apologies for the late response; I was down with a throat infection.




> Can you show the ipsec-site-connection-create command used on each end?
>


neutron ipsec-site-connection-create --name vpnconnection1 \
    --vpnservice-id myvpn --ikepolicy-id ikepolicy1 \
    --ipsecpolicy-id ipsecpolicy1 --peer-address <peer-public-ip> \
    --peer-id <peer-router-ip> --peer-cidr 10.2.0.0/24 --psk secret


   - In the above command, --peer-address is the public IP of the node
     running the other devstack setup -- you can call it devstack West.
   - For --peer-id we are giving the IP of the q-router.

Make sense?



> Can you show the topology with IP addresses used (and indicate how the two
> clouds are connected)?
> Are you using devstack? Two physical nodes? How are they interconnected?
>

We are using exactly the topology shown below; even the floating IP
addresses are the same as the ones mentioned. However, our Internet gateway
is a public IP, and similarly the other Internet GW is also a public IP.

  (10.1.0.0/24 - DevStack *East*)
  |
  |  10.1.0.1
 [Quantum Router]
  |  172.24.4.226
  |
  |  172.24.4.225
 [Internet GW]
  |
  |
 [Internet GW]
  | 172.24.4.232
  |
  | 172.24.4.233
 [Quantum Router]
  |  10.2.0.1
  |
 (10.2.0.0/24 DevStack *West*)



> First thing would be to ensure that you can ping from one host to another
> over the public IPs involved. You can then go to the namespace of the
> router and see if you can ping the public I/F of the other end’s router.
>

From the host running the devstack setup we can ping anything, for example
google.com, except the gateway of the other host. However, we cannot ping
from within the CirrOS instance. I have run traceroute and we reach
172.24.4.225 but not beyond that point. BTW, we did some other experiments
as well. For example, when we tried to explicitly link our br-ex
(172.24.4.225) with eth0 (the Internet GW), the machine got corrupted. The
same happens if we do a hard reboot; Neutron gets corrupted :)
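
For reference, here is roughly the router-namespace check Paul describes
(a sketch not verified on this setup; the router UUID is a placeholder
taken from "neutron router-list" or "ip netns list", and 172.24.4.233 is
the far end's router public interface in the topology above):

# on devstack East, find the router namespace
sudo ip netns list

# ping the far end's public interface from inside the router namespace
sudo ip netns exec qrouter-<router-uuid> ping -c 3 172.24.4.233

# watch for IKE/ESP traffic while the connection negotiates
sudo ip netns exec qrouter-<router-uuid> tcpdump -ni any "udp port 500 or esp"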


>
> You can look at the screen-q-vpn.log (assuming devstack used) to see if
> any errors during setup.
>
> Note: When I stack, I turn off neutron security groups and then set nova
> security groups to allow SSH and ICMP. I imagine the alternative would be
> to setup neutron security groups to allow these two protocols.
>
> I didn’t quite follow what you meant by "Please note that my two devstack
> nodes are on different public addresses, so scenario is a little different
> than the one described here:
> https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall”. Can you
> elaborate (showing the commands and topology will help)?
>
> Germy,
>
> I have created this BP during Juno (unfortunately no progress on it
> however), regarding being able to see more status information for
> troubleshooting:
> https://blueprints.launchpad.net/neutron/+spec/l3-svcs-vendor-status-report
>
> It was targeted for vendor implementations, but would include reference
> implementation status too. Right now, if a VPN connection negotiation
> fails, there’s no indication of what went wrong.
>
> Regards,
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pcm_ (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
> On Sep 29, 2014, at 1:38 AM, masoom alam  wrote:
>
> Hi Germy
>
> We cannot ping the public interface of the 2nd devstack setup (devstack
> West). From our CirrOS instance (first devstack -- devstack East), we can
> ping our own public IP, but cannot ping the other public IP. I think the
> problem lies here: if we are not reaching devstack West, how can we make a
> VPN connection?
>
> Our topology looks like:
>
> *CirrOS ---> Qrouter ---> Public IP <-----> Public IP ---> Qrouter ---> CirrOS*
>            (devstack EAST)                      (devstack WEST)
>
>
> Also, it is important to note that we are not able to ssh to the
> instance's private IP without *sudo ip netns exec qrouter-<id> ...*, so
> this means we cannot even ssh with the floating IP.
>
>
> It seems there is a problem in the firewall or iptables.
>
> Please guide
>
>
>
> On Sunday, September 28, 2014, Germy Lure  wrote:
>
>> Hi,
>>
>> masoom:
>> I think you should first check whether you can ping from left to
>> right without the VPN connection installed.
>> If that works, then check the system logs to confirm the
>> configuration is OK.
>> You can use ping and tcpdump to diagnose where packets are blocked.
>>
>> stackers:
>> I think we should provide a mechanism to show the cause when a VPN
>> connection is down. At the least, we could add an attribute to explain
>> it. Maybe the VPN incubator project is a chance to do this?
>>
>> BR,
>> Germy
>>
>>
>> On Sat, Sep 27, 2014 at 7:04 PM, masoom alam 
>> wrote:
>>
>>> Hi Every one,
>>>
>>> I am trying to establish the VPN conne

Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Adam Young


> This is comparable to the Heat use case that Keystone Trusts were 
originally designed to solve.

>
> If the glance client knows the roles required to perform those 
operations, it could create the trust up front, with the Glance 
Service user as the trustee; the trustee executes the trust when it 
needs the token.

>
> Are there other cases besides the glance one that require long lived 
tokens?


Cinder backups. These do many Swift operations over a long period, 
often hours. They should probably be converted to trusts, but I'd need 
some guidance to do so.




1.  Identify the roles for the APIs that Cinder is going to be calling 
on Swift, based on Swift's policy.json.
2.  The user creates the trust; this could be done by the cinder client.
The create-trust call is documented here:


The trustee user is the Cinder service user.
The trustor user is the actual human being (or reasonable facsimile 
thereof) that needs this work done.



https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-trust-ext.md#create-trust-post-os-trusttrusts

Impersonation should only be set if the API requires the token's user_id 
to reflect the trustor.  I know that Swift was one of the use cases for 
impersonation, but I don't think it is required.


The expiration datetime is optional, but probably appropriate; choose a 
value that is guaranteed to be long enough for the workflow, but that 
still limits the window for attack.  If this backup routinely takes 12 
hours, perhaps 24 hours is appropriate.
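
For illustration, a minimal sketch of the create-trust request against the 
v3 API, hand-written from the create-trust spec linked above (not tested; 
the endpoint, IDs, expiry and role name are all placeholders):

curl -s -X POST http://keystone:5000/v3/OS-TRUST/trusts \
  -H "X-Auth-Token: $TRUSTOR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"trust": {
        "trustor_user_id": "<user-id-of-the-human>",
        "trustee_user_id": "<cinder-service-user-id>",
        "project_id": "<project-id>",
        "impersonation": false,
        "expires_at": "2014-10-02T00:00:00.000000Z",
        "roles": [{"name": "<role-cinder-needs-on-swift>"}]}}'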


When the user requests the Cinder backup, they will have to include the 
trust ID in the request.  The Cinder endpoint, prior to making a call to 
Swift, will get a token from Keystone, passing in the trust ID, as per:


https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-trust-ext.md#consuming-a-trust-with-post-authtokens
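
Again for illustration, a matching sketch of the trustee consuming the 
trust (per the consuming-a-trust spec linked above; same caveats, same 
placeholders):

curl -s -X POST http://keystone:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {
        "identity": {
          "methods": ["password"],
          "password": {"user": {
            "id": "<cinder-service-user-id>",
            "password": "<service-password>"}}},
        "scope": {"OS-TRUST:trust": {"id": "<trust-id>"}}}}'

The trust-scoped token comes back in the X-Subject-Token response header.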






Re: [openstack-dev] [infra] Welcome three new members to project-config-core

2014-09-30 Thread Kurt Taylor
Congratulations everyone, well deserved!

Kurt Taylor (krtaylor)

On Tue, Sep 30, 2014 at 9:54 AM, Jeremy Stanley  wrote:
> With unanimous consent[1][2][3] of the OpenStack Project
> Infrastructure core team (infra-core), I'm pleased to welcome
> Andreas Jaeger, Anita Kuno and Sean Dague as members of the
> newly-formed project-config-core team. Their assistance has been
> invaluable in reviewing changes to our project-specific
> configuration data, and I predict their addition to the core team
> for the newly split-out openstack-infra/project-config repository
> represents an immense benefit to everyone in OpenStack.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Adam,



Nice post. With Keystone federation and multiple signers, plus OpenStack 
cascading, it would help deliver a hybrid cloud in which both the private 
cloud and the public cloud are built upon OpenStack instances.



It would be a great picture.



Best Regards



Chaoyi Huang ( joehuang )




From: Adam Young [ayo...@redhat.com]
Sent: 01 October 2014 4:25
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 09/30/2014 12:10 PM, John Griffith wrote:


On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt <j...@johngarbutt.com> wrote:
On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal described by 
> [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo.
>
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack.
>
> (I applied for “other projects” track [5], but it would be better to have a 
> discussion as a formal cross program session, because many core programs are 
> involved )
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested in comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.
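
As a concrete illustration of the API/child-cell split described above,
here is a rough, untested sketch of a cells setup from that era (names,
hosts and credentials are placeholders):

# nova.conf on the API cell (the region-level endpoint):
[cells]
enable = true
name = api
cell_type = api

# nova.conf in each child cell:
[cells]
enable = true
name = cell1
cell_type = compute

# register the child cell with the API cell:
nova-manage cell create --name cell1 --cell_type child \
    --username guest --password guest --hostname child-rabbit-host \
    --port 5672 --virtual_host /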


I'm starting on work to support a comparable mechanism to share data between 
Keystone servers.

http://adam.younglogic.com/2014/09/multiple-signers/


It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try and drive this work forward in Kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John


Interesting idea. To be honest, when TripleO was first announced, what you 
have here is more along the lines of what I envisioned.  It seems that this 
would have some interesting wins in terms of upgrades, migrations and 
scaling in general.  Anyway, you should propose it to the etherpad as John 
G (the other John G :) ) recommended; I'd love to dig deeper into this.






Thanks,
John








Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Andrew and Tim,

I understand that CERN has a Cells installation, and that there is a
subteam working to address the Cells challenges.

I copy my reply to John Garbutt to clarify the difference:

The major difference between Cells and OpenStack cascading is the problem
domain:
OpenStack cascading integrates multi-site / multi-vendor OpenStack instances
into one cloud with the OpenStack API exposed, while Cells is a scale-up
methodology for a single OpenStack instance.
Therefore, there is no conflict between Cells and OpenStack cascading; they
can be used for different scenarios, and a Cells deployment can also be used
as the cascaded OpenStack (the child OpenStack).

OpenStack cascading also provides cross-data-center L2/L3 networking for a
tenant.

"Flavor", "Server Group" (Host Aggregate?), "Security Group"(not clear the 
concrete problem) issue could be solved in OpenStack cascading solution from 
architecure point of view.

Best Regards

Chaoyi Huang ( joehuang )

From: Andrew Laski [andrew.la...@rackspace.com]
Sent: 01 October 2014 3:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 09/30/2014 03:07 PM, Tim Bell wrote:
>> -Original Message-
>> From: John Garbutt [mailto:j...@johngarbutt.com]
>> Sent: 30 September 2014 15:35
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
>> cascading
>>
>> On 30 September 2014 14:04, joehuang  wrote:
>>> Hello, Dear TC and all,
>>>
>>> Large cloud operators prefer to deploy multiple OpenStack instances(as
>> different zones), rather than a single monolithic OpenStack instance because 
>> of
>> these reasons:
>>> 1) Multiple data centers distributed geographically;
>>> 2) Multi-vendor business policy;
>>> 3) Server nodes scale up modularized from 00's up to million;
>>> 4) Fault and maintenance isolation between zones (only REST
>>> interface);
>>>
>>> At the same time, they also want to integrate these OpenStack instances into
>> one cloud. Instead of proprietary orchestration layer, they want to use 
>> standard
>> OpenStack framework for Northbound API compatibility with HEAT/Horizon or
>> other 3rd ecosystem apps.
>>> We call this pattern as "OpenStack Cascading", with proposal described by
>> [1][2]. PoC live demo video can be found[3][4].
>>> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
>> OpenStack cascading.
>>> Kindly ask for cross program design summit session to discuss OpenStack
>> cascading and the contribution to Kilo.
>>> Kindly invite those who are interested in the OpenStack cascading to work
>> together and contribute it to OpenStack.
>>> (I applied for “other projects” track [5], but it would be better to
>>> have a discussion as a formal cross program session, because many core
>>> programs are involved )
>>>
>>>
>>> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
>>> [2] PoC source code: https://github.com/stackforge/tricircle
>>> [3] Live demo video at YouTube:
>>> https://www.youtube.com/watch?v=OSU6PYRz5qY
>>> [4] Live demo video at Youku (low quality, for those who can't access
>>> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
>>> [5]
>>> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
>>> .html
>> There are etherpads for suggesting cross project sessions here:
>> https://wiki.openstack.org/wiki/Summit/Planning
>> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
>>
>> I am interested at comparing this to Nova's cells concept:
>> http://docs.openstack.org/trunk/config-reference/content/section_compute-
>> cells.html
>>
>> Cells basically scales out a single datacenter region by aggregating 
>> multiple child
>> Nova installations with an API cell.
>>
>> Each child cell can be tested in isolation, via its own API, before joining 
>> it up to
>> an API cell, that adds it into the region. Each cell logically has its own 
>> database
>> and message queue, which helps get more independent failure domains. You can
>> use cell level scheduling to restrict people or types of instances to 
>> particular
>> subsets of the cloud, if required.
>>
>> It doesn't attempt to aggregate between regions, they are kept independent.
>> Except, the usual assumption that you have a common identity between all
>> regions.
>>
>> It also keeps a single Cinder, Glance, Neutron deployment per region.
>>
>> It would be great to get some help hardening, testing, and building out more 
>> of
>> the cells vision. I suspect we may form a new Nova subteam to trying and 
>> drive
>> this work forward in kilo, if we can build up enough people wanting to work 
>> on
>> improving cells.
>>
> At CERN, we've deployed cells at scale but are finding a number of 
> architectural issues that need resolution in the short term to attain feature 
> parity. A vision of "we all run c

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Joshua,

Thank you very much for your deep thinking.

1. It is quite different from Cells. Let me copy the content from my mail to
John Garbutt:

The major difference between Cells and OpenStack cascading is the problem
domain:
OpenStack cascading integrates multi-site / multi-vendor OpenStack instances
into one cloud with the OpenStack API exposed, while Cells is a scale-up
methodology for a single OpenStack instance.

2. Quota is controlled by the cascading OpenStack (the parent OpenStack),
because the cascading OpenStack holds all the logical objects.

3. Race conditions: what is the concrete race condition issue?

4. Inconsistency: because there is an object UUID mapping between the
cascading OpenStack and the cascaded OpenStacks, tracking consistency is
possible and easy to solve, although we did not implement it in the PoC
source code.

5. "I'd rather stick with the less scalable distributed system we have", no 
conflict, no matter OpenStack cascading introduced or not, we need a solid, 
stable and scalable OpenStack.

6. "How I imagine this working out (in my view)", all these things are good, I 
also like it.

Best Regards

Chaoyi Huang ( joehuang )


From: Joshua Harlow [harlo...@outlook.com]
Sent: 01 October 2014 3:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

So this does seem a lot like cells, but makes cells appear in the other projects.

IMHO the same problems that occur in cells appear here, in that we are 
sacrificing consistency of the already problematic systems that exist today 
to gain scale (and gain more inconsistency). Every time I see 'the parent 
OpenStack manages many child OpenStacks by using the standard OpenStack 
API' in that wiki, I wonder how the parent will resolve inconsistencies 
that exist in the children (likely it can't). How do quotas work across 
parent/children? How do race conditions get resolved?

IMHO I'd rather stick with the less scalable distributed system we have, 
iron out its quirks, fix the quota handling (via whatever that project is 
named now), split out the nova/... drivers so they can be maintained in 
various projects, fix the various already inconsistent state machines that 
exist, and split out the scheduler into its own project so it can be 
shared... All of these things improve scale and improve tolerance to 
individual failures, rather than creating a whole new level of 'pain' via a 
tightly bound set of proxies and cascading hierarchies. Managing these 
cascading clusters also seems like an operational management nightmare that 
I'm not sure is justified at the current time (when operators already have 
enough trouble with the current code bases).
How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA 
services that everyone can use.
* Have cinder-api, nova-api, neutron-api integrate with the split out services 
to obtain consistent views of the world when performing API operations.
* Have cinder, nova, neutron provide 'workers' (nova-compute is a basic worker) 
that can be scaled out across all your clusters and interconnected to a type of 
conductor node in some manner (mq?), and have the outcome of cinder-api, 
nova-api, neutron-api be a workflow that some service (conductor/s?) ensures 
occurs reliably (or aborts). This makes it so that cinder-api, nova-api... can 
scale at will, conductors can scale at will and so can worker nodes...
* Profit!

TL;DR: it would seem like this adds more complexity, not less, and I'm not 
sure complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang  wrote:

> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal described by 
> [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo.
>
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack.
>
> (I applied for “other

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Joe,



Thank you for the encouragement and the good suggestion. That means this
thread is a good start.



So if anyone has any doubts about OpenStack cascading, please follow this
thread, so that we can collect everything that cannot be resolved in the
mail and then discuss it in the design summit session.



Best Regards



Chaoyi Huang ( joehuang )




From: Joe Gordon [joe.gord...@gmail.com]
Sent: 01 October 2014 2:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading



On Tue, Sep 30, 2014 at 6:04 AM, joehuang <joehu...@huawei.com> wrote:
Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances(as 
different zones), rather than a single monolithic OpenStack instance because of 
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularized from 00's up to million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into 
one cloud. Instead of proprietary orchestration layer, they want to use 
standard OpenStack framework for Northbound API compatibility with HEAT/Horizon 
or other 3rd ecosystem apps.

We call this pattern as "OpenStack Cascading", with proposal described by 
[1][2]. PoC live demo video can be found[3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
OpenStack cascading.

Kindly ask for cross program design summit session to discuss OpenStack 
cascading and the contribution to Kilo.

Cross-program design summit sessions should be used for things that we are 
unable to make progress on via this mailing list, and not as a way to begin 
new conversations. With that in mind, I think this thread is a good place 
to get initial feedback on the idea and possibly make a plan for how to 
tackle this.


Kindly invite those who are interested in the OpenStack cascading to work 
together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to have a 
discussion as a formal cross program session, because many core programs are 
involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

Best Regards
Chaoyi Huang ( Joe Huang )



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, John Griffith,



Thank you very much for your funny mail. Now I see 2 "John G" ;)



I would like to say that TripleO is the pioneer in handling the
relationship among OpenStack instances. Cheers.



The problem domain for OpenStack cascading is the integration of multi-site
/ multi-vendor OpenStack instances. Based on this, a large-scale cloud can
be distributed across many data centers, and fault isolation /
troubleshooting / configuration changes / upgrades / patches / ... can be
handled separately per OpenStack instance.



For example, suppose a cloud includes two data centers: vendor A sold their
OpenStack solution in data center A, and vendor B sold theirs in data
center B. If a critical bug is found in data center B, then vendor B is
responsible for the bug fix and patch update. Independent OpenStack
instances give a clear division of responsibility, even for the integration
of software and hardware.



Best Regards

Chaoyi Huang ( joehuang)





From: John Griffith [john.griff...@solidfire.com]
Sent: 01 October 2014 0:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading



On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt <j...@johngarbutt.com> wrote:
On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal described by 
> [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo.
>
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack.
>
> (I applied for “other projects” track [5], but it would be better to have a 
> discussion as a formal cross program session, because many core programs are 
> involved )
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested at comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to trying and drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John


Interesting idea, to be honest when TripleO was first announced what you 
have here is more along the lines of what I envisioned.  It seems that this 
would have some interesting wins in terms of upgrades, migrations and 
scaling in general.  Anyway, you should propose it to the etherpad as John 
G (the other John G :) ) recommended; I'd love to dig deeper into this.

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, John Garbutt

Thank you for your message; I will register the cross-project topic
following the link.

The major difference between Cells and OpenStack cascading is the problem
domain:

OpenStack cascading integrates multi-site / multi-vendor OpenStack instances
into one cloud with the OpenStack API exposed, while Cells is a scale-up
methodology for a single OpenStack instance.

Therefore, there is no conflict between Cells and OpenStack cascading; they
can be used for different scenarios, and a Cells deployment can also be used
as the cascaded OpenStack (the child OpenStack).

Best Regards
 
Chaoyi Huang ( joehuang)


From: John Garbutt [j...@johngarbutt.com]
Sent: 30 September 2014 21:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

On 30 September 2014 14:04, joehuang  wrote:
> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal described by 
> [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo.
>
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack.
>
> (I applied for “other projects” track [5], but it would be better to have a 
> discussion as a formal cross program session, because many core programs are 
> involved )
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested at comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to trying and drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John



Re: [openstack-dev] [TripleO] PTL leave of absence

2014-09-30 Thread Clint Byrum
I will try my best to hold things down until then, and I look forward
to your return. Take care.

Excerpts from Robert Collins's message of 2014-09-30 12:22:44 -0700:
> I have had an unexpected family matter turn up, and may be absent at
> fairly arbitrary points for a couple weeks while we deal with the
> fallout.
> 
> I've asked Clint to wear my PTL hat between now and the 3rd when
> voting closes and we find out whether he or James are the new PTL.
> 
> He has my real world contact details should something urgent which
> only I can do [there should be no such things] turn up.
> 
> -Rob
> 



Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Duncan Thomas
On Oct 1, 2014 12:37 AM, "Adam Young"  wrote:
>
> On 09/30/2014 12:21 PM, Sean Dague wrote:
>>
>> On 09/30/2014 11:58 AM, Jay Pipes wrote:
>>>
>>> On 09/30/2014 11:37 AM, Adam Young wrote:

 On 09/30/2014 11:06 AM, Louis Taylor wrote:
>
> On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:
>>
>> What are the uses that require long lived tokens?
>
> Glance has operations which can take a long time, such as uploading
and
> downloading large images.

 Yes, but the token is only authenticated at the start of the operation.
 Does anything need to happen afterwards?
>>>
>>> Funny you mention it... :) We were just having this conversation on IRC
>>> about Nikesh's issues with some Tempest volume tests and a token
>>> expiration problem.
>>>
>>> So, yes, a Glance upload operation makes a series of HTTP calls in the
>>> course of the upload:
>>>
>>>   POST $registry/images <-- Creates the queued image record
>>>   ...  upload of chunked body of HTTP request to backend like Swift ..
>>>   PUT $registry/images/ <-- update image status and checksum
>>>
>>> So, what seems to be happening here is that the PUT call at the end of
>>> uploading the snapshot is using the same token that was created in the
>>> keystone client of the tempest test case during the test classes'
>>> setUpClass() method, and the test class ends up running for >1 hour, and
>>> by the time the PUT call is reached, the token has expired.
>>
>> Yes... and there is this whole unresolved dev thread on this -
>>
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html
>>
>> -Sean
>>
>
> This is comparable to the HEAT use case that Keystone Trusts were
originally designed to solve.
>
> If the glance client knows the roles required to perform those
operations, it could create the trust up front, with the  Glance Service
user as the trustee; the trustee execute the trust when it needs the token.
>
> Are there other cases besides the glance one that require long lived
tokens?

Cinder backups. These do many Swift operations over a long period, often
hours. They should probably be converted to trusts, but I'd need some
guidance to do so.


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Angus Salkeld
+1
On 01/10/2014 3:08 AM, "Adrian Otto"  wrote:

> Solum Core Reviewer Team,
>
> I propose the following change to our core reviewer group:
>
> -lifeless (Robert Collins) [inactive]
> +murali-allada (Murali Allada)
> +james-li (James Li)
>
> Please let me know your votes (+1, 0, or -1).
>
> Thanks,
>
> Adrian
>


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Andrew Laski


On 09/30/2014 05:33 PM, Adam Young wrote:

On 09/30/2014 12:21 PM, Sean Dague wrote:

On 09/30/2014 11:58 AM, Jay Pipes wrote:

On 09/30/2014 11:37 AM, Adam Young wrote:

On 09/30/2014 11:06 AM, Louis Taylor wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What are the uses that require long lived tokens?
Glance has operations which can take a long time, such as uploading and
downloading large images.

Yes, but the token is only authenticated at the start of the operation.
Does anything need to happen afterwards?

Funny you mention it... :) We were just having this conversation on IRC
about Nikesh's issues with some Tempest volume tests and a token
expiration problem.

So, yes, a Glance upload operation makes a series of HTTP calls in the
course of the upload:

  POST $registry/images <-- Creates the queued image record
  ...  upload of chunked body of HTTP request to backend like Swift ..
  PUT $registry/images/ <-- update image status and checksum

So, what seems to be happening here is that the PUT call at the end of
uploading the snapshot is using the same token that was created in the
keystone client of the tempest test case during the test classes'
setUpClass() method, and the test class ends up running for >1 hour, and
by the time the PUT call is reached, the token has expired.

Yes... and there is this whole unresolved dev thread on this -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html 



-Sean



This is comparable to the HEAT use case that Keystone Trusts were 
originally designed to solve.


If the glance client knows the roles required to perform those 
operations, it could create the trust up front, with the  Glance 
Service user as the trustee; the trustee execute the trust when it 
needs the token.


Are there other cases besides the glance one that require long lived 
tokens?


Another potential case would be Nova's interactions with Cinder when Nova 
is asked to create a volume on a user's behalf in order to boot an 
instance from it.  The creation of the volume can take a long time, and 
token expiration could be an issue in that process.










Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Adam Young

On 09/30/2014 12:21 PM, Sean Dague wrote:

On 09/30/2014 11:58 AM, Jay Pipes wrote:

On 09/30/2014 11:37 AM, Adam Young wrote:

On 09/30/2014 11:06 AM, Louis Taylor wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What are the uses that require long lived tokens?

Glance has operations which can take a long time, such as uploading and
downloading large images.

Yes, but the token is only authenticated at the start of the operation.
Does anything need to happen afterwards?

Funny you mention it... :) We were just having this conversation on IRC
about Nikesh's issues with some Tempest volume tests and a token
expiration problem.

So, yes, a Glance upload operation makes a series of HTTP calls in the
course of the upload:

  POST $registry/images <-- Creates the queued image record
  ...  upload of chunked body of HTTP request to backend like Swift ..
  PUT $registry/images/ <-- update image status and checksum

So, what seems to be happening here is that the PUT call at the end of
uploading the snapshot is using the same token that was created in the
keystone client of the tempest test case during the test classes'
setUpClass() method, and the test class ends up running for >1 hour, and
by the time the PUT call is reached, the token has expired.

Yes... and there is this whole unresolved dev thread on this -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html

-Sean



This is comparable to the Heat use case that Keystone Trusts were 
originally designed to solve.


If the glance client knows the roles required to perform those 
operations, it could create the trust up front, with the Glance Service 
user as the trustee; the trustee executes the trust when it needs the token.


Are there other cases besides the glance one that require long lived tokens?



Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Matthew Treinish
On Tue, Sep 30, 2014 at 04:23:37PM -0400, Adam Young wrote:
> On 09/30/2014 12:21 PM, Sean Dague wrote:
> >On 09/30/2014 11:58 AM, Jay Pipes wrote:
> >>On 09/30/2014 11:37 AM, Adam Young wrote:
> >>>On 09/30/2014 11:06 AM, Louis Taylor wrote:
> On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:
> >What are the uses that require long lived tokens?
> Glance has operations which can take a long time, such as uploading and
> downloading large images.
> >>>Yes, but the token is only authenticated at the start of the operation.
> >>>Does anything need to happen afterwards?
> >>Funny you mention it... :) We were just having this conversation on IRC
> >>about Nikesh's issues with some Tempest volume tests and a token
> >>expiration problem.
> >>
> >>So, yes, a Glance upload operation makes a series of HTTP calls in the
> >>course of the upload:
> >>
> >>  POST $registry/images <-- Creates the queued image record
> >>  ...  upload of chunked body of HTTP request to backend like Swift ..
> >>  PUT $registry/images/ <-- update image status and checksum
> >>
> >>So, what seems to be happening here is that the PUT call at the end of
> >>uploading the snapshot is using the same token that was created in the
> >>keystone client of the tempest test case during the test classes'
> >>setUpClass() method, and the test class ends up running for >1 hour, and
> >>by the time the PUT call is reached, the token has expired.
> >Yes... and there is this whole unresolved dev thread on this -
> >http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html
> >
> > -Sean
> >
> 
> This is a test case, so the tempest test has enough information to request a
> new token, it just does not request it?
> 

No, I don't think that's the case. The tempest auth code handles token
expiration and preempts it. See:

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/auth.py#n340

and 

http://git.openstack.org/cgit/openstack/tempest/tree/tempest/auth.py#n464

depending on which Keystone API version is being used. These get called
before each outgoing HTTP call from tempest, and if the token is expired it
will get a new one. There could be a bug in that code, but I think it is
probably something else.

I think the issue here is probably that the token glance is using expires
because the upload takes too long, and glance doesn't know how to handle
that (unless I'm misreading Jay's comment). So things fail when glance
tries to use the same token for another operation that is part of the same
API request after the upload is finished. This is the topic of the thread
Sean pointed out above.

-Matt Treinish




Re: [openstack-dev] [TripleO] PTL leave of absence

2014-09-30 Thread Ed Leafe

On 09/30/2014 02:22 PM, Robert Collins wrote:
> I have had an unexpected family matter turn up, and may be absent
> at fairly arbitrary points for a couple weeks while we deal with
> the fallout.

Hope everything resolves OK. I've had similar interruptions, and
taking care of family always comes first.


-- Ed Leafe



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Adam Young

On 09/30/2014 12:10 PM, John Griffith wrote:



On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt <j...@johngarbutt.com> wrote:


On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack
instances(as different zones), rather than a single monolithic
OpenStack instance because of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST
interface);
>
> At the same time, they also want to integrate these OpenStack
instances into one cloud. Instead of proprietary orchestration
layer, they want to use standard OpenStack framework for
Northbound API compatibility with HEAT/Horizon or other 3rd
ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal
described by [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are
involved in the OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss
OpenStack cascading and the contribution to Kilo.
>
> Kindly invite those who are interested in the OpenStack
cascading to work together and contribute it to OpenStack.
>
> (I applied for “other projects” track [5], but it would be
better to have a discussion as a formal cross program session,
because many core programs are involved )
>
>
> [1] wiki:
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube:
https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't
access YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested at comparing this to Nova's cells concept:

http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.



I'm starting on work to support a comparable mechanism to share data 
between Keystone servers.


http://adam.younglogic.com/2014/09/multiple-signers/



It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to trying and drive this work forward in kilo, if we can build up
enough people wanting to work on improving cells.

Thanks,
John



Interesting idea, to be honest when TripleO was first announced what 
you have here is more along the lines of what I envisioned.  It seems 
that this would have some interesting wins in terms of upgrades, 
migrations and scaling in general.  Anyway, you should propose it to 
the etherpad as John G (the other John G :) ) recommended; I'd love 
to dig deeper into this.








Thanks,
John







Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Adam Young

On 09/30/2014 12:21 PM, Sean Dague wrote:

On 09/30/2014 11:58 AM, Jay Pipes wrote:

On 09/30/2014 11:37 AM, Adam Young wrote:

On 09/30/2014 11:06 AM, Louis Taylor wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What are the uses that require long lived tokens?

Glance has operations which can take a long time, such as uploading and
downloading large images.

Yes, but the token is only authenticated at the start of the operation.
Does anything need to happen afterwards?

Funny you mention it... :) We were just having this conversation on IRC
about Nikesh's issues with some Tempest volume tests and a token
expiration problem.

So, yes, a Glance upload operation makes a series of HTTP calls in the
course of the upload:

  POST $registry/images <-- Creates the queued image record
  ...  upload of chunked body of HTTP request to backend like Swift ..
  PUT $registry/images/ <-- update image status and checksum

So, what seems to be happening here is that the PUT call at the end of
uploading the snapshot is using the same token that was created in the
keystone client of the tempest test case during the test classes'
setUpClass() method, and the test class ends up running for >1 hour, and
by the time the PUT call is reached, the token has expired.

Yes... and there is this whole unresolved dev thread on this -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html

-Sean



This is a test case, so the tempest test has enough information to 
request a new token; it just does not request it?
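
If so, grabbing a fresh token is only a few lines with python-keystoneclient
(a sketch; the credentials and URL are placeholders, not from the thread):

# Re-authenticate before the long-running upload's final PUT, instead of
# reusing the token minted back in setUpClass().
from keystoneclient.v2_0 import client as ksclient

ks = ksclient.Client(username='demo', password='secret',
                     tenant_name='demo',
                     auth_url='http://keystone:5000/v2.0')
fresh_token = ks.auth_token  # send this as X-Auth-Token on the PUT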






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday September 30th at 19:00 UTC

2014-09-30 Thread Elizabeth K. Joseph
On Mon, Sep 29, 2014 at 9:50 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday September 30th, at 19:00 UTC in #openstack-meeting

Meeting minutes and log are now up:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-30-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-30-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-30-19.00.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Andrew Laski


On 09/30/2014 03:07 PM, Tim Bell wrote:

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: 30 September 2014 15:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
cascading

On 30 September 2014 14:04, joehuang  wrote:

Hello, Dear TC and all,

Large cloud operators prefer to deploy multiple OpenStack instances(as

different zones), rather than a single monolithic OpenStack instance because of
these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server nodes scale up modularized from 00's up to million;
4) Fault and maintenance isolation between zones (only REST
interface);

At the same time, they also want to integrate these OpenStack instances into

one cloud. Instead of proprietary orchestration layer, they want to use standard
OpenStack framework for Northbound API compatibility with HEAT/Horizon or
other 3rd ecosystem apps.

We call this pattern as "OpenStack Cascading", with proposal described by

[1][2]. PoC live demo video can be found[3][4].

Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the

OpenStack cascading.

Kindly ask for cross program design summit session to discuss OpenStack

cascading and the contribution to Kilo.

Kindly invite those who are interested in the OpenStack cascading to work

together and contribute it to OpenStack.

(I applied for “other projects” track [5], but it would be better to
have a discussion as a formal cross program session, because many core
programs are involved )


[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube:
https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested at comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-
cells.html

Cells basically scales out a single datacenter region by aggregating multiple 
child
Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before joining it 
up to
an API cell, that adds it into the region. Each cell logically has its own 
database
and message queue, which helps get more independent failure domains. You can
use cell level scheduling to restrict people or types of instances to particular
subsets of the cloud, if required.

It doesn't attempt to aggregate between regions, they are kept independent.
Except, the usual assumption that you have a common identity between all
regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building out more of
the cells vision. I suspect we may form a new Nova subteam to trying and drive
this work forward in kilo, if we can build up enough people wanting to work on
improving cells.


At CERN, we've deployed cells at scale but are finding a number of architectural issues 
that need resolution in the short term to attain feature parity. A vision of "we all 
run cells but some of us have only one" is not there yet. Typical examples are 
flavors, security groups and server groups, all of which are not yet implemented to the 
necessary levels for cell parent/child.

We would be very keen on agreeing the strategy in Paris so that we can ensure 
the gap is closed, test it in the gate and that future features cannot 
'wishlist' cell support.

Resources can be made available if we can agree the direction but current 
reviews are not progressing (such as 
https://bugs.launchpad.net/nova/+bug/1211011)


I am working on putting together this strategy so we can discuss it in 
Paris.  I, and perhaps a few others, will be spending time on this in 
Kilo so that these things do progress.


There are some good ideas in this thread and scaling out is a concern we 
need to continually work on.  But we do have a solution that addresses 
this to an extent, so I think the conversation should be about how we 
scale past cells, not replicate it.





Tim


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-30 Thread Morgan Fainberg
Comments in-line.

-Original Message-
From: Joshua Harlow 
Reply: OpenStack Development Mailing List (not for usage questions) 
>
Date: September 29, 2014 at 21:52:20
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject:  Re: [openstack-dev] [keystone][swift] Has anybody considered storing 
tokens in Swift?

> +1 Lets not continue to expand the usage of persisted tokens :-/

We can look at moving away from persisted tokens, but as of the Kilo cycle 
there is no plan to eliminate UUID tokens (or PKI/PKIZ tokens for that matter). 
This means that we will need to continue to support them. If there is a 
legitimate desire to utilize swift or something similar and make this 
experience better, it behooves us to consider accepting these improvements. I 
believe that non-persistent tokens are a great place to strive for, but it 
won’t meet every deployment need. Moreover, currently PKI tokens (the 
“persistent-less” capable version) add significant overhead to each request. 
While 5k-10k of data doesn’t seem like a lot, when there is a large volume of 
actions taken it adds up fast. This is putting the onus on the end user to 
acquire the “authorization” and transmit that same information over and over 
again to each of the APIs (especially in the swift use-case of many, many, many 
small object requests). 5-10k for a request consisting of a few hundred bytes 
is a significant overhead. Asking a service local to your datacenter might be 
in some cases more efficient and produce less edge traffic.

There is a reason why websites often use separate domains (top-level) for their 
CDNs rather than just sourcing from the primary domain. This is to (especially 
in the case of large cookies such as are generated by Social Media sites) avoid 
having to transit the primary user cookie when gathering an asset, especially 
if that asset is publicly viewable. 

> We should be trying to move away from such types of persistence and its
> associated complexity IMHO. Most major websites don't need to have tokens
> that are saved around regions in their internal backend databases, just so
> people can use a REST webservice (or a website...), so I would hope that we
> don't need to (if it works for major websites why doesn't it work for us?).

As of today, PKI tokens (and even UUID token bodies) convey a lot of 
information. In fact, the tokens convey all the information (except if the 
token is revoked) needed to interact with the API endpoint. A lot of the “big” 
web apps will not encode all of that information into the signed cookie, some 
of that information can be sourced directly (often things like 
ACL-like-constructs or RBAC information won’t be in the cookie itself). PKI 
tokens are also significantly more difficult to deploy (needing certificates, 
CAs, and ensuring everything is in sync is definitely more work than saying 
“talk to URL xxx and ask if the token is valid”; you’re trading one set of 
operational concerns and scaling points for another).

Now if we start looking at OAuth and some of the other SSO technology we start 
getting a little bit closer to what we need to be able to eliminate the backing 
store with the exception of two critical items:

1) OAuth-type workflow would be a reauthorization for each endpoint (provided 
they are on separate domains/hosts/etc, which is the case for each of our 
services). You don’t (for example) authenticate with UbuntuOne for 
https://review.openstack.org and without re-authenticating go to the wiki pages 
and edit the wiki. The wiki, while it uses the same authentication source, same 
identity source, and is even on an associated host/domain, requires a re-auth 
to use.

If you were to get an SSO login to Keystone and then each endpoint requires 
another bounce back to Keystone for AuthZ it isn’t really making the REST API 
any more usable nor making the user experience any better.

2) All of the OpenStack specific permissions are not necessarily conveyed by 
the Identity Provider (Keystone is not typically the Identity provider in this 
case), and most Identity Providers do not have the concept of Projects, 
Domains, or even Roles that are always going to make sense within OpenStack. 
We’re still back to needing to ask for information from Keystone.


> My 2 cents is that we should really think about why this is needed and why
> we can't operate using signed-cookie like mechanisms (after all it works for
> everyone else). If cross-region tokens are a problem, then maybe we should
> solve the root of the issue (having a token that works across regions) so
> that no replication is needed at all…

With PKI tokens (if you share the signing certificate) it is possible to handle 
cross region support. There are some limitations here (such as possibly needing 
to replicate the Identity/Assignment information between regions so an Auth in 
region X would work in region Y as well as vice-versa).

There are other opt

Re: [openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

2014-09-30 Thread Dugger, Donald D
Uri-

Sorry you missed this week.  It's at 1500UTC, here's a link with the details:


https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting


--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Elzur, Uri [mailto:uri.el...@intel.com] 
Sent: Tuesday, September 30, 2014 12:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

Don

When is the call?

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com] 
Sent: Monday, September 29, 2014 10:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

1) Forklift status
2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PTL leave of absence

2014-09-30 Thread Robert Collins
I have had an unexpected family matter turn up, and may be absent at
fairly arbitrary points for a couple weeks while we deal with the
fallout.

I've asked Clint to wear my PTL hat between now and the 3rd when
voting closes and we find out whether he or James are the new PTL.

He has my real world contact details should something urgent which
only I can do [there should be no such things] turn up.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Joshua Harlow
So this does seem a lot like cells, but it makes cells appear in the other projects.

IMHO the same problems that occur in cells appear here, in that we are 
sacrificing consistency in systems that are already problematic in order to 
gain scale (and gaining more inconsistency along the way). Every time I see 
'the parent OpenStack manages many child OpenStacks by using the standard 
OpenStack API' in that wiki, I wonder how the parent will resolve 
inconsistencies that exist in the children (likely it can't). How do quotas 
work across parent/children? How do race conditions get resolved?

IMHO I'd rather stick with the less scalable distributed system we have, iron 
out its quirks, fix the quota handling (via whatever that project is named 
now), split out the nova/... drivers so they can be maintained in various 
projects, fix the various already-inconsistent state machines that exist, and 
split out the scheduler into its own project so that it can be shared. All of 
the mentioned things improve scale and improve tolerance to individual 
failures, rather than creating a whole new level of 'pain' via a tightly 
bound set of proxies and cascading hierarchies. Managing these cascading 
clusters would also seem to be an operational nightmare that I'm not sure is 
justified at the current time (when operators already have enough trouble 
with the current code bases).

How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA 
services that everyone can use.
* Have cinder-api, nova-api, neutron-api integrate with the split out services 
to obtain consistent views of the world when performing API operations.
* Have cinder, nova, neutron provide 'workers' (nova-compute is a basic worker) 
that can be scaled out across all your clusters and interconnected to a type of 
conductor node in some manner (mq?), and have the outcome of cinder-api, 
nova-api, neutron-api be a workflow that some service (conductor/s?) ensures 
occurs reliably (or aborts). This makes it so that cinder-api, nova-api... can 
scale at will, conductors can scale at will and so can worker nodes...
* Profit!

TL;DR: it would seem like this adds more complexity, not less, and I'm not 
sure complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang  wrote:

> Hello, Dear TC and all, 
> 
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
> 
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
> 
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
> 
> We call this pattern as "OpenStack Cascading", with proposal described by 
> [1][2]. PoC live demo video can be found[3][4].
> 
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading. 
> 
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo. 
> 
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack. 
> 
> (I applied for “other projects” track [5], but it would be better to have a 
> discussion as a formal cross program session, because many core programs are 
> involved )
> 
> 
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
> 
> Best Regards
> Chaoyi Huang ( Joe Huang )
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-30 Thread Jay Pipes

On 09/30/2014 06:53 AM, Pasquale Porreca wrote:

Going back to my original question, I would like to know:

1) Is it acceptable to have the UUID passed from client side?


FWIW, Glance has supported supplying the newly-created image's ID in its 
API for a long time, and it's never been an issue. On the database side, 
you still need to do a primary key lookup to ensure you aren't violating 
any constraints, regardless of whether you are doing:


 obj.id = uuid.uuid4()

on the controller side or whether you are doing:

 req_payload = {
   "id": uuid.uuid4(),
   "name": "blah"
 }
 client.do_post(jsonutils.dumps(req_payload), ...)

on the client side.

I don't really see much of an issue with allowing a user to pass an 
opaque identifier for objects on creation.
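
For example, with python-glanceclient against the v1 API this would look
roughly like the following (a sketch; the endpoint and token are
placeholders):

import uuid
from glanceclient import Client

glance = Client('1', endpoint='http://glance:9292', token='...')
# The client picks the ID; the server still does the primary key lookup
# and rejects the request on a constraint violation.
image = glance.images.create(id=str(uuid.uuid4()), name='blah',
                             disk_format='qcow2',
                             container_format='bare')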


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Tim Bell
> -Original Message-
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: 30 September 2014 15:35
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
> 
> On 30 September 2014 14:04, joehuang  wrote:
> > Hello, Dear TC and all,
> >
> > Large cloud operators prefer to deploy multiple OpenStack instances(as
> different zones), rather than a single monolithic OpenStack instance because 
> of
> these reasons:
> >
> > 1) Multiple data centers distributed geographically;
> > 2) Multi-vendor business policy;
> > 3) Server nodes scale up modularized from 00's up to million;
> > 4) Fault and maintenance isolation between zones (only REST
> > interface);
> >
> > At the same time, they also want to integrate these OpenStack instances into
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard
> OpenStack framework for Northbound API compatibility with HEAT/Horizon or
> other 3rd ecosystem apps.
> >
> > We call this pattern as "OpenStack Cascading", with proposal described by
> [1][2]. PoC live demo video can be found[3][4].
> >
> > Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
> OpenStack cascading.
> >
> > Kindly ask for cross program design summit session to discuss OpenStack
> cascading and the contribution to Kilo.
> >
> > Kindly invite those who are interested in the OpenStack cascading to work
> together and contribute it to OpenStack.
> >
> > (I applied for “other projects” track [5], but it would be better to
> > have a discussion as a formal cross program session, because many core
> > programs are involved )
> >
> >
> > [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> > [2] PoC source code: https://github.com/stackforge/tricircle
> > [3] Live demo video at YouTube:
> > https://www.youtube.com/watch?v=OSU6PYRz5qY
> > [4] Live demo video at Youku (low quality, for those who can't access
> > YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> > [5]
> > http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
> > .html
> 
> There are etherpads for suggesting cross project sessions here:
> https://wiki.openstack.org/wiki/Summit/Planning
> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
> 
> I am interested at comparing this to Nova's cells concept:
> http://docs.openstack.org/trunk/config-reference/content/section_compute-
> cells.html
> 
> Cells basically scales out a single datacenter region by aggregating multiple 
> child
> Nova installations with an API cell.
> 
> Each child cell can be tested in isolation, via its own API, before joining 
> it up to
> an API cell, that adds it into the region. Each cell logically has its own 
> database
> and message queue, which helps get more independent failure domains. You can
> use cell level scheduling to restrict people or types of instances to 
> particular
> subsets of the cloud, if required.
> 
> It doesn't attempt to aggregate between regions, they are kept independent.
> Except, the usual assumption that you have a common identity between all
> regions.
> 
> It also keeps a single Cinder, Glance, Neutron deployment per region.
> 
> It would be great to get some help hardening, testing, and building out more 
> of
> the cells vision. I suspect we may form a new Nova subteam to trying and drive
> this work forward in kilo, if we can build up enough people wanting to work on
> improving cells.
> 

At CERN, we've deployed cells at scale but are finding a number of 
architectural issues that need resolution in the short term to attain feature 
parity. A vision of "we all run cells but some of us have only one" is not 
there yet. Typical examples are flavors, security groups and server groups, all 
of which are not yet implemented to the necessary levels for cell parent/child.

We would be very keen on agreeing the strategy in Paris so that we can ensure 
the gap is closed, test it in the gate and that future features cannot 
'wishlist' cell support.

Resources can be made available if we can agree the direction but current 
reviews are not progressing (such as 
https://bugs.launchpad.net/nova/+bug/1211011)

Tim

> Thanks,
> John
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceph] Why performance of benchmarks with small blocks is extremely small?

2014-09-30 Thread Gregory Farnum
On Sat, Sep 27, 2014 at 8:14 AM, Timur Nurlygayanov
 wrote:
> Hello all,
>
> I installed OpenStack with Glance + Ceph OSD with replication factor 2 and
> now I can see the write operations are extremly slow.
> For example, I can see only 0.04 MB/s write speed when I run rados bench
> with 512b blocks:
>
> rados bench -p test 60 write --no-cleanup -t 1 -b 512
>
>  Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 0
> objects
>  Object prefix: benchmark_data_node-17.domain.tld_15862
>   sec Cur ops   started  finished  avg MB/s   cur MB/s   last lat    avg lat
>     0       0         0         0         0          0          -          0
>     1       1        83        82 0.0400341  0.0400391   0.008465  0.0120985
>     2       1       169       168 0.0410111  0.0419922   0.080433  0.0118995
>     3       1       240       239 0.0388959   0.034668   0.008052  0.0125385
>     4       1       356       355 0.0433309  0.0566406    0.00837  0.0112662
>     5       1       472       471 0.0459919  0.0566406   0.008343  0.0106034
>     6       1       550       549 0.0446735  0.0380859   0.036639  0.0108791
>     7       1       581       580 0.0404538  0.0151367   0.008614  0.0120654
>
>
> My test environment configuration:
> Hardware servers with 1Gb network interfaces, 64Gb RAM and 16 CPU cores per
> node, HDDs WDC WD5003ABYX-01WERA0.
> OpenStack with 1 controller, 1 compute and 2 ceph nodes (ceph on separate
> nodes).
> CentOS 6.5, kernel 2.6.32-431.el6.x86_64.
>
> I tested several config options for optimizations, like in
> /etc/ceph/ceph.conf:
>
> [default]
> ...
> osd_pool_default_pg_num = 1024
> osd_pool_default_pgp_num = 1024
> osd_pool_default_flag_hashpspool = true
> ...
> [osd]
> osd recovery max active = 1
> osd max backfills = 1
> filestore max sync interval = 30
> filestore min sync interval = 29
> filestore flusher = false
> filestore queue max ops = 1
> filestore op threads = 16
> osd op threads = 16
> ...
> [client]
> rbd_cache = true
> rbd_cache_writethrough_until_flush = true
>
> and in /etc/cinder/cinder.conf:
>
> [DEFAULT]
> volume_tmp_dir=/tmp
>
> but in the result performance was increased only on ~30 % and it not looks
> like huge success.
>
> Non-default mount options and TCP optimization increase the speed in about
> 1%:
>
> [root@node-17 ~]# mount | grep ceph
> /dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs
> (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)
>
> [root@node-17 ~]# cat /etc/sysctl.conf
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_timestamps = 1
> net.ipv4.tcp_sack = 1
>
>
> Do we have other ways to significantly improve CEPH storage performance?
> Any feedback and comments are welcome!

This is entirely latency dominated and OpenStack configuration changes
aren't going to be able to do much — you're getting 80 sequential ops
a second out of a system that has to do two round trips over a network
and hit two hard drives on every operation. You might want to spend
some time looking at how latency, bandwidth, and concurrency are
(often not) related. :)
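
The back-of-the-envelope math, as a quick sanity check on the numbers above:

# 0.04 MB/s of 512-byte writes with one outstanding op (-t 1) is latency
# bound: each write waits for the previous one to complete.
block_size = 512                       # bytes per write
throughput = 0.04 * 1024 * 1024        # bytes/s reported by rados bench
ops_per_sec = throughput / block_size  # ~82 sequential ops/s
print(1.0 / ops_per_sec)               # ~0.012 s per op, matching the
                                       # avg lat column in the output above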
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

2014-09-30 Thread Elzur, Uri
Don

When is the call?

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com] 
Sent: Monday, September 29, 2014 10:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [gantt] Scheduler group meeting - Agenda 9/30

1) Forklift status
2) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-09-30 Thread Timur Nurlygayanov
Hi Slawek,

we faced the same error, and it is an issue with Swift.
We can see 100% disk usage on the Swift node during the file upload, and it
looks like Swift can't report the status of the upload in time.

In our environments we found a workaround for this issue:
1. Set swift_store_large_object_size = 200 in glance.conf.
2. Add to Swift proxy-server.conf:

[DEFAULT]
...
node_timeout = 90

Perhaps we should make this the default value for this parameter instead
of '30'?
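
Pulled together, the workaround looks like this (values as above; note that
swift_store_large_object_size is in MB):

# glance-api.conf
[DEFAULT]
swift_store_large_object_size = 200

# proxy-server.conf on the Swift proxy node
[DEFAULT]
node_timeout = 90    # up from the default of 30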


Regards,
Timur


On Tue, Sep 30, 2014 at 7:41 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> I can't find that upload from was previous logs but I now try to upload
> same image once again. In glance there was exactly same error. In swift
> logs I have:
>
> Sep 30 17:35:10 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
> 30/Sep/2014/15/35/10 HEAD /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance
> HTTP/1.0 204
> Sep 30 17:35:16 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
> 30/Sep/2014/15/35/16 PUT /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dcee
> bd/glance/fa5dfe09-74f5-4287-9852-d2f1991eebc0-1 HTTP/1.0 201 - -
>
> Best regards
> Slawek Kaplonski
>
> W dniu 2014-09-30 17:03, Kuo Hugo napisał(a):
>
>> Hi ,
>>
>> Could you please post the log of related requests in Swift's log ???
>>
>> Thanks // Hugo
>>
>> 2014-09-30 22:20 GMT+08:00 Sławek Kapłoński :
>>
>>  Hello,
>>>
>>> I'm using openstack havana release and glance with swift backend.
>>> Today I found that I have problem when I create image with url in
>>> "--copy-from" when image is bigger than my
>>> "swift_store_large_object_size" because then glance is trying to
>>> split image to chunks with size given in
>>> "swift_store_large_object_chunk_size" and when try to upload first
>>> chunk to swift I have error:
>>>
>>> 2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error
>>> during chunked upload to backend, deleting stale chunks
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback
>>> (most recent call last):
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
>>> "/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 384,
>>> in add
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>>>  content_length=content_length)
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
>>> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1234,
>>> in put_object
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>>>  response_dict=response_dict)
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
>>> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1143,
>>> in _retry
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>>>  reset_func(func, *args, **kwargs)
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
>>> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1215,
>>> in _default_reset
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift %
>>> (container, obj))
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>>> ClientException: put_object('glance',
>>> '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
>>> ability to reset contents for reupload.
>>> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>>> 2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed
>>> to add object to Swift.
>>> Got error from Swift: put_object('glance',
>>> '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
>>> ability to reset contents for reupload.
>>> 2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-]
>>> Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>> Traceback (most recent call last):
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  File
>>> "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py",
>>> line 101, in upload_data_to_store
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  store)
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  File "/usr/lib/python2.7/dist-packages/glance/store/__init__.py",
>>> line 333, in store_add_to_backend
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  (location, size, checksum, metadata) = store.add(image_id, data,
>>> size)
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  File "/usr/lib/python2.7/dist-packages/glance/store/swift.py",
>>> line 447, in add
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>>  raise glance.store.BackendException(msg)
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>>> BackendException: Failed to add object to Swift.
>>> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Got
>>> error from Swift: put_object('glance',
>>> '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
>>> ability to reset contents for reupload.
>>>
>>> Does someone of You got same error and know what is solution of it? I
>>> was searching about that in google but I not found anything what could
>>> solve my problem.

Re: [openstack-dev] [Openstack] Glance on swift problem

2014-09-30 Thread Kuo Hugo
Hi ,

Could you please post the log of related requests in Swift's log ???


Thanks // Hugo

2014-09-30 22:20 GMT+08:00 Sławek Kapłoński :

> Hello,
>
> I'm using openstack havana release and glance with swift backend. Today I
> found that I have problem when I create image with url in "--copy-from"
> when image is bigger than my "swift_store_large_object_size" because then
> glance is trying to split image to chunks with size given in
> "swift_store_large_object_chunk_size" and when try to upload first chunk
> to swift I have error:
>
> 2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error during
> chunked upload to backend, deleting stale chunks
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback (most
> recent call last):
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
> "/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 384, in add
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>  content_length=content_length)
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1234, in
> put_object
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>  response_dict=response_dict)
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1143, in
> _retry
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
>  reset_func(func, *args, **kwargs)
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
> "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1215, in
> _default_reset
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift % (container,
> obj))
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift ClientException:
> put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...)
> failure and no ability to reset contents for reupload.
> 2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
> 2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed to add
> object to Swift.
> Got error from Swift: put_object('glance', 
> '9f56ccec-deeb-4020-95ba-ca7bf1170056-1',
> ...) failure and no ability to reset contents for reupload.
> 2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-] Failed
> to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Traceback
> (most recent call last):
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
> "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line
> 101, in upload_data_to_store
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils store)
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
> "/usr/lib/python2.7/dist-packages/glance/store/__init__.py", line 333, in
> store_add_to_backend
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
>  (location, size, checksum, metadata) = store.add(image_id, data, size)
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
> "/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 447, in add
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils raise
> glance.store.BackendException(msg)
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
> BackendException: Failed to add object to Swift.
> 2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Got error
> from Swift: put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1',
> ...) failure and no ability to reset contents for reupload.
>
>
> Does someone of You got same error and know what is solution of it? I was
> searching about that in google but I not found anything what could solve my
> problem.
>
> --
> Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] Glance on swift problem

2014-09-30 Thread Sławek Kapłoński

Hello,

I'm using the OpenStack Havana release with Glance on a Swift backend.
Today I found that I have a problem when I create an image with a URL in 
"--copy-from" and the image is bigger than my 
"swift_store_large_object_size": Glance then tries to split the image into 
chunks of the size given in "swift_store_large_object_chunk_size", and when 
it tries to upload the first chunk to Swift I get this error:


2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error during 
chunked upload to backend, deleting stale chunks
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback (most 
recent call last):
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File 
"/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 384, in 
add
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift 
content_length=content_length)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File 
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1234, in 
put_object
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift 
response_dict=response_dict)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File 
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1143, in 
_retry
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift 
reset_func(func, *args, **kwargs)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File 
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1215, in 
_default_reset
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift % (container, 
obj))
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift ClientException: 
put_object('glance', '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) 
failure and no ability to reset contents for reupload.

2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed to add 
object to Swift.
Got error from Swift: put_object('glance', 
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no 
ability to reset contents for reupload.
2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-] 
Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Traceback 
(most recent call last):
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 
101, in upload_data_to_store
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
store)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/store/__init__.py", line 333, 
in store_add_to_backend
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
(location, size, checksum, metadata) = store.add(image_id, data, size)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File 
"/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 447, in 
add
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils raise 
glance.store.BackendException(msg)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
BackendException: Failed to add object to Swift.
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Got error 
from Swift: put_object('glance', 
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no 
ability to reset contents for reupload.



Has anyone seen the same error and found a solution for it? I searched 
Google but did not find anything that solves my problem.


--
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-09-30 Thread Sławek Kapłoński

Hello,

I can't find that upload in the previous logs, but I have now tried to 
upload the same image once again. In Glance there was exactly the same 
error. In the Swift logs I have:


Sep 30 17:35:10 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y 
30/Sep/2014/15/35/10 HEAD 
/v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance HTTP/1.0 204
Sep 30 17:35:16 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y 
30/Sep/2014/15/35/16 PUT 
/v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance/fa5dfe09-74f5-4287-9852-d2f1991eebc0-1 HTTP/1.0 201 - -


Best regards
Slawek Kaplonski

On 2014-09-30 17:03, Kuo Hugo wrote:

Hi ,

Could you please post the log of related requests in Swift's log ???

Thanks // Hugo

2014-09-30 22:20 GMT+08:00 Sławek Kapłoński :


Hello,

I'm using openstack havana release and glance with swift backend.
Today I found that I have problem when I create image with url in
"--copy-from" when image is bigger than my
"swift_store_large_object_size" because then glance is trying to
split image to chunks with size given in
"swift_store_large_object_chunk_size" and when try to upload first
chunk to swift I have error:

2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error
during chunked upload to backend, deleting stale chunks
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback
(most recent call last):
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
"/usr/lib/python2.7/dist-packages/glance/store/swift.py", line 384,
in add
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   
 content_length=content_length)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1234,
in put_object
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   
 response_dict=response_dict)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1143,
in _retry
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   
 reset_func(func, *args, **kwargs)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
"/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1215,
in _default_reset
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift     %
(container, obj))
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
ClientException: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed
to add object to Swift.
Got error from Swift: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.
2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-]
Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
Traceback (most recent call last):
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
 File
"/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py",
line 101, in upload_data_to_store
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   
 store)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
 File "/usr/lib/python2.7/dist-packages/glance/store/__init__.py",
line 333, in store_add_to_backend
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   
 (location, size, checksum, metadata) = store.add(image_id, data,
size)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
 File "/usr/lib/python2.7/dist-packages/glance/store/swift.py",
line 447, in add
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   
 raise glance.store.BackendException(msg)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
BackendException: Failed to add object to Swift.
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Got
error from Swift: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.

Does someone of You got same error and know what is solution of it?
I was searching about that in google but I not found anything what
could solve my problem.

--
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1]
Post to     : openst...@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1]




Links:
--
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


--
Regards
Sławek Kapłoński
sla...@kaplonski.pl

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread Joe Gordon
On Tue, Sep 30, 2014 at 6:04 AM, joehuang  wrote:

> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances(as
> different zones), rather than a single monolithic OpenStack instance
> because of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale up modularized from 00's up to million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances
> into one cloud. Instead of proprietary orchestration layer, they want to
> use standard OpenStack framework for Northbound API compatibility with
> HEAT/Horizon or other 3rd ecosystem apps.
>
> We call this pattern as "OpenStack Cascading", with proposal described by
> [1][2]. PoC live demo video can be found[3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
> the OpenStack cascading.
>
> Kindly ask for cross program design summit session to discuss OpenStack
> cascading and the contribution to Kilo.
>

Cross program design summit sessions should be used for things that we are
unable to make progress on via this mailing list, and not as a way to begin
new conversations. With that in mind, I think this thread is a good place
to get initial feedback on the idea and possibly make a plan for how to
tackle this.


>
> Kindly invite those who are interested in the OpenStack cascading to work
> together and contribute it to OpenStack.
>
> (I applied for “other projects” track [5], but it would be better to have
> a discussion as a formal cross program session, because many core programs
> are involved )
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube:
> https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5]
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Devdatta Kulkarni
+1


From: Georgy Okrokvertskhov [gokrokvertsk...@mirantis.com]
Sent: Tuesday, September 30, 2014 12:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Core Reviewer Change

+1

On Tue, Sep 30, 2014 at 10:03 AM, Adrian Otto 
mailto:adrian.o...@rackspace.com>> wrote:
Solum Core Reviewer Team,

I propose the following change to our core reviewer group:

-lifeless (Robert Collins) [inactive]
+murali-allada (Murali Allada)
+james-li (James Li)

Please let me know your votes (+1, 0, or -1).

Thanks,

Adrian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-30 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-09-30 09:41:29 -0700:
> A relational database was built for the above types of queries, and 
> that's why I said it's the best tool for the job *in this specific case*.
> 
> Now... that said...
> 
> Is it possible to go through the Nova schema and identify mini-schemas 
> that could be pulled out of the RDBMS and placed into Riak or Cassandra? 
> Absolutely yes! The service group and compute node usage records are 
> good candidates for that, in my opinion. With the nova.objects work that 
> was completed over the last few cycles, we might actually now have the 
> foundation in place to make doing this a reality. I welcome your 
> contributions in this area.
> 

I forgot to say, +1 here. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Georgy Okrokvertskhov
+1

On Tue, Sep 30, 2014 at 10:03 AM, Adrian Otto 
wrote:

> Solum Core Reviewer Team,
>
> I propose the following change to our core reviewer group:
>
> -lifeless (Robert Collins) [inactive]
> +murali-allada (Murali Allada)
> +james-li (James Li)
>
> Please let me know your votes (+1, 0, or -1).
>
> Thanks,
>
> Adrian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-30 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-09-30 09:41:29 -0700:
> On 09/30/2014 08:03 AM, Soren Hansen wrote:
> > 2014-09-12 1:05 GMT+02:00 Jay Pipes :
> >> If Nova was to take Soren's advice and implement its data-access layer
> >> on top of Cassandra or Riak, we would just end up re-inventing SQL
> >> Joins in Python-land.
> >
> > I may very well be wrong(!), but this statement makes it sound like you've
> > never used e.g. Riak. Or, if you have, not done so in the way it's
> > supposed to be used.
> >
> > If you embrace an alternative way of storing your data, you wouldn't just
> > blindly create a container for each table in your RDBMS.
> >
> > For example: In Nova's SQL-based datastore we have a table for security
> > groups and another for security group rules. Rows in the security group
> > rules table have a foreign key referencing the security group to which
> > they belong. In a datastore like Riak, you could have a security group
> > container where each value contains not just the security group
> > information, but also all the security group rules. No joins in
> > Python-land necessary.
> 
> OK, that's all fine for a simple one-to-many relation.
> 
> How would I go about getting the associated fixed IPs for a network? The 
> query to get associated fixed IPs for a network [1] in Nova looks like this:
> 
> SELECT
>   fip.address,
>   fip.instance_uuid,
>   fip.network_id,
>   fip.virtual_interface_id,
>   vif.address,
>   i.hostname,
>   i.updated_at,
>   i.created_at,
>   fip.allocated,
>   fip.leased,
>   vif2.id
> FROM fixed_ips fip
> LEFT JOIN virtual_interfaces vif
>   ON vif.id = fip.virtual_interface_id
>   AND vif.deleted = 0
> LEFT JOIN instances i
>   ON fip.instance_uuid = i.uuid
>   AND i.deleted = 0
> LEFT JOIN (
>   SELECT MIN(vi.id) AS id, vi.instance_uuid
>   FROM virtual_interfaces vi
>   GROUP BY instance_uuid
> ) as vif2
> WHERE fip.deleted = 0
> AND fip.network_id = :network_id
> AND fip.virtual_interface_id IS NOT NULL
> AND fip.instance_uuid IS NOT NULL
> AND i.host = :host
> 

You and I both know that this query is not something we want to be
running a lot. In the past when I've had systems where something like
the above needed to be run a lot, I created materialized views for it
and made access to this information asynchronous because relying on this
in real-time is _extremely expensive_.

This is where a massively scalable but somewhat dumb database shines
because you can materialize the view into it at a much faster pace and
with more scaled-out workers so that you now have fine grained resource
allocations for scaling the responsiveness of this view.

> would I have a Riak container for virtual_interfaces that would also 
> have instance information, network information, fixed_ip information? 
> How would I accomplish the query against a derived table that gets the 
> minimum virtual interface ID for each instance UUID?
> 

Yes you'd have a bunch of duplicated information everywhere.  Asynchronous
denormalizing is the new join. The only worry I'd have is what
accidental contracts we have made that require bits of information to
appear all at once. :-P

> More than likely, I would end up having to put a bunch of indexes and 
> relations into my Riak containers and structures just so I could do 
> queries like the above. Failing that, I'd need to do multiple queries to 
> multiple Riak containers and then join the resulting projection in 
> memory, in Python. And that is why I say you will just end up 
> implementing joins in Python.
> 

My experience has been that you have a map/reduce cluster somewhere
churning through updates to pre-compute things for slow queries and you
do any realtime duplication at write time. The idea is to create documents
that are themselves useful without the need of the other documents. Where
this approach suffers is when your documents all have so many copies of
data that you end up writing 10x more than you read. That is a rare thing,
and most certainly would not happen with Nova's data, which mostly would
have duplicates of lots of UUID's and IP addresses.
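
To make that concrete, a sketch of write-time duplication (store() stands
in for whatever KV client is in use; the field names are illustrative, not
Nova's actual schema):

    def assign_fixed_ip(store, fip, vif, instance):
        # canonical record, keyed by address
        store('fixed_ips/%s' % fip['address'], fip)
        # denormalized, self-contained view document: duplicates the
        # instance/vif fields on purpose so reads need no join
        store('networks/%s/ips/%s' % (fip['network_id'], fip['address']), {
            'address': fip['address'],
            'instance_uuid': instance['uuid'],
            'hostname': instance['hostname'],
            'host': instance['host'],
            'vif_address': vif['address'],
        })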

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Core Reviewer Change

2014-09-30 Thread Adrian Otto
Solum Core Reviewer Team,

I propose the following change to our core reviewer group:

-lifeless (Robert Collins) [inactive]
+murali-allada (Murali Allada)
+james-li (James Li)

Please let me know your votes (+1, 0, or -1).

Thanks,

Adrian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla Blueprints

2014-09-30 Thread Chmouel Boudjnah
On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake  wrote:

>
> I've done a first round of prioritization.  I think key things we need
> people to step up for are nova and rabbitmq containers.
>
> For the developers, please take a moment to pick a specific blueprint to
> work on.  If you're already working on something, this should help to prevent
> duplicate work :)



As I understand it, in the current implementation[1] the containers are
configured with a mix of shell scripts using crudini and other shell
commands. Is that the intended way to configure the containers? And is a
deployment tool like Ansible (or another) something that is planned to be
used in the future?
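
For illustration only (option names and values are my own, not taken from
the repo):

    # imperative shell + crudini, roughly the current style:
    crudini --set /etc/nova/nova.conf DEFAULT rabbit_host $RABBIT_HOST

    # vs. a declarative Ansible task doing the same edit:
    - ini_file: dest=/etc/nova/nova.conf section=DEFAULT
                option=rabbit_host value={{ rabbit_host }}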

Chmouel


[1] from https://github.com/jlabocki/superhappyfunshow/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-30 Thread Jay Pipes

On 09/30/2014 08:03 AM, Soren Hansen wrote:

2014-09-12 1:05 GMT+02:00 Jay Pipes :

If Nova was to take Soren's advice and implement its data-access layer
on top of Cassandra or Riak, we would just end up re-inventing SQL
Joins in Python-land.


I may very well be wrong(!), but this statement makes it sound like you've
never used e.g. Riak. Or, if you have, not done so in the way it's
supposed to be used.

If you embrace an alternative way of storing your data, you wouldn't just
blindly create a container for each table in your RDBMS.

For example: In Nova's SQL-based datastore we have a table for security
groups and another for security group rules. Rows in the security group
rules table have a foreign key referencing the security group to which
they belong. In a datastore like Riak, you could have a security group
container where each value contains not just the security group
information, but also all the security group rules. No joins in
Python-land necessary.


OK, that's all fine for a simple one-to-many relation.

How would I go about getting the associated fixed IPs for a network? The 
query to get associated fixed IPs for a network [1] in Nova looks like this:


SELECT
 fip.address,
 fip.instance_uuid,
 fip.network_id,
 fip.virtual_interface_id,
 vif.address,
 i.hostname,
 i.updated_at,
 i.created_at,
 fip.allocated,
 fip.leased,
 vif2.id
FROM fixed_ips fip
LEFT JOIN virtual_interfaces vif
 ON vif.id = fip.virtual_interface_id
 AND vif.deleted = 0
LEFT JOIN instances i
 ON fip.instance_uuid = i.uuid
 AND i.deleted = 0
LEFT JOIN (
 SELECT MIN(vi.id) AS id, vi.instance_uuid
 FROM virtual_interfaces vi
 GROUP BY instance_uuid
) as vif2
WHERE fip.deleted = 0
AND fip.network_id = :network_id
AND fip.virtual_interface_id IS NOT NULL
AND fip.instance_uuid IS NOT NULL
AND i.host = :host

would I have a Riak container for virtual_interfaces that would also 
have instance information, network information, fixed_ip information? 
How would I accomplish the query against a derived table that gets the 
minimum virtual interface ID for each instance UUID?


More than likely, I would end up having to put a bunch of indexes and 
relations into my Riak containers and structures just so I could do 
queries like the above. Failing that, I'd need to do multiple queries to 
multiple Riak containers and then join the resulting projection in 
memory, in Python. And that is why I say you will just end up 
implementing joins in Python.


A relational database was built for the above types of queries, and 
that's why I said it's the best tool for the job *in this specific case*.


Now... that said...

Is it possible to go through the Nova schema and identify mini-schemas 
that could be pulled out of the RDBMS and placed into Riak or Cassandra? 
Absolutely yes! The service group and compute node usage records are 
good candidates for that, in my opinion. With the nova.objects work that 
was completed over the last few cycles, we might actually now have the 
foundation in place to make doing this a reality. I welcome your 
contributions in this area.


[1] 
https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L2608



I've said it before, and I'll say it again. In Nova at least, the SQL
schema is complex because the problem domain is complex. That means
lots of relations, lots of JOINs, and that means the best way to query
for that data is via an RDBMS.


I was really hoping you could be more specific than "best"/"most
appropriate" so that we could have a focused discussion.

I don't think relying on a central data store is in any conceivable way
appropriate for a project like OpenStack. Least of all Nova.

I don't see how we can build a highly available, distributed service on
top of a centralized data store like MySQL.

Tens or hundreds of thousands of nodes, spread across many, many racks
and datacentre halls are going to experience connectivity problems[1].

This means that some percentage of your infrastructure (possibly many
thousands of nodes, affecting many, many thousands of customers) will
find certain functionality not working on account of your datastore not
being reachable from the part of the control plane they're attempting to
use (or possibly only being able to read from it).

I say over and over again that people should own their own uptime.
Expect things to fail all the time. Do whatever you need to do to ensure
your service keeps working even when something goes wrong. Of course
this applies to our customers too. Even if we take the greatest care to
avoid downtime, customers should spread their workloads across multiple
availability zones and/or regions and probably even multiple cloud
providers. Their service towards their users is their responsibility.

However, our service towards our users is our responsibility. We should
take the greatest care to avoid having internal problems affect our
users.  Building a massively distributed system like Nova on top of a
centralized data store is practically a guarantee of the opposite.

Re: [openstack-dev] [kolla] Kolla Blueprints

2014-09-30 Thread Steven Dake

On 09/30/2014 06:32 AM, Ryan Hallisey wrote:

Hi all,

The blueprints have been setup for Kolla: https://blueprints.launchpad.net/kolla

Currently, there are blueprints for all of the openstack services and a few 
supporting services.
They are, nova, swift, cinder, neutron, horizon, keystone, glance, ceilometer, 
heat, trove,
zaqar, sahara, mysql, and rabbitmq.

Feel free to take ownership of a service!

Thanks,
Ryan Hallisey

Ryan,

Thanks.

I've done a first round of prioritization.  I think key things we need 
people to step up for are nova and rabbitmq containers.


For the developers, please take a moment to pick a specific blueprint to 
work on.  If you're already working on something, this should help to 
prevent duplicate work :)


Thanks
-0steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Sean Dague
On 09/30/2014 11:58 AM, Jay Pipes wrote:
> On 09/30/2014 11:37 AM, Adam Young wrote:
>> On 09/30/2014 11:06 AM, Louis Taylor wrote:
>>> On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:
 What are the uses that require long lived tokens?
>>> Glance has operations which can take a long time, such as uploading and
>>> downloading large images.
>> Yes, but the token is only authenticated at the start of the operation.
>> Does anything need to happen afterwards?
> 
> Funny you mention it... :) We were just having this conversation on IRC
> about Nikesh's issues with some Tempest volume tests and a token
> expiration problem.
> 
> So, yes, a Glance upload operation makes a series of HTTP calls in the
> course of the upload:
> 
>  POST $registry/images <-- Creates the queued image record
>  ...  upload of chunked body of HTTP request to backend like Swift ..
>  PUT $registry/images/<image_id> <-- update image status and checksum
> 
> So, what seems to be happening here is that the PUT call at the end of
> uploading the snapshot is using the same token that was created in the
> keystone client of the tempest test case during the test class's
> setUpClass() method, and the test class ends up running for >1 hour, and
> by the time the PUT call is reached, the token has expired.

Yes... and there is this whole unresolved dev thread on this -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045567.html

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread John Griffith
On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt  wrote:

> On 30 September 2014 14:04, joehuang  wrote:
> > Hello, Dear TC and all,
> >
> > Large cloud operators prefer to deploy multiple OpenStack instances (as
> > different zones), rather than a single monolithic OpenStack instance,
> > because of these reasons:
> >
> > 1) Multiple data centers distributed geographically;
> > 2) Multi-vendor business policy;
> > 3) Server node counts scale up in modules from the hundreds to a million;
> > 4) Fault and maintenance isolation between zones (only REST interface);
> >
> > At the same time, they also want to integrate these OpenStack instances
> > into one cloud. Instead of a proprietary orchestration layer, they want to
> > use the standard OpenStack framework for Northbound API compatibility with
> > Heat/Horizon or other third-party ecosystem apps.
> >
> > We call this pattern "OpenStack Cascading", with the proposal described
> > by [1][2]. A PoC live demo video can be found at [3][4].
> >
> > Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
> > the OpenStack cascading.
> >
> > We kindly ask for a cross-program design summit session to discuss
> > OpenStack cascading and its contribution to Kilo.
> >
> > We kindly invite those who are interested in OpenStack cascading to
> > work together and contribute it to OpenStack.
> >
> > (I applied for the “other projects” track [5], but it would be better to
> > have a discussion as a formal cross-program session, because many core
> > programs are involved.)
> >
> >
> > [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> > [2] PoC source code: https://github.com/stackforge/tricircle
> > [3] Live demo video at YouTube:
> https://www.youtube.com/watch?v=OSU6PYRz5qY
> > [4] Live demo video at Youku (low quality, for those who can't access
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> > [5]
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
>
> There are etherpads for suggesting cross project sessions here:
> https://wiki.openstack.org/wiki/Summit/Planning
> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
>
> I am interested at comparing this to Nova's cells concept:
>
> http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html
>
> Cells basically scales out a single datacenter region by aggregating
> multiple child Nova installations with an API cell.
>
> Each child cell can be tested in isolation, via its own API, before
> joining it up to an API cell, that adds it into the region. Each cell
> logically has its own database and message queue, which helps get more
> independent failure domains. You can use cell level scheduling to
> restrict people or types of instances to particular subsets of the
> cloud, if required.
>
> It doesn't attempt to aggregate between regions, they are kept
> independent. Except, the usual assumption that you have a common
> identity between all regions.
>
> It also keeps a single Cinder, Glance, Neutron deployment per region.
>
> It would be great to get some help hardening, testing, and building
> out more of the cells vision. I suspect we may form a new Nova subteam
> to try and drive this work forward in Kilo, if we can build up
> enough people wanting to work on improving cells.
>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Interesting idea. To be honest, when TripleO was first announced, what you
have here is more along the lines of what I envisioned.  It seems that this
would have some interesting wins in terms of upgrades, migrations and
scaling in general.  Anyway, you should propose it to the etherpad as John
G (the other John G :) ) recommended; I'd love to dig deeper into this.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Jay Pipes

On 09/30/2014 11:37 AM, Adam Young wrote:

On 09/30/2014 11:06 AM, Louis Taylor wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What are the uses that require long lived tokens?

Glance has operations which can take a long time, such as uploading and
downloading large images.

Yes, but the token is only authenticated at the start of the operation.
Does anything need to happen afterwards?


Funny you mention it... :) We were just having this conversation on IRC 
about Nikesh's issues with some Tempest volume tests and a token 
expiration problem.


So, yes, a Glance upload operation makes a series of HTTP calls in the 
course of the upload:


 POST $registry/images <-- Creates the queued image record
 ...  upload of chunked body of HTTP request to backend like Swift ..
 PUT $registry/images/<image_id> <-- update image status and checksum

So, what seems to be happening here is that the PUT call at the end of 
uploading the snapshot is using the same token that was created in the 
keystone client of the tempest test case during the test class's 
setUpClass() method, and the test class ends up running for >1 hour, and 
by the time the PUT call is reached, the token has expired.
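
One way out, sketched here with python-keystoneclient (credentials and
endpoint are illustrative), is for the test to fetch a fresh token right
before the long-delayed call instead of reusing the one cached in
setUpClass():

    from keystoneclient.v2_0 import client as ks_client

    def fresh_token():
        ks = ks_client.Client(username='demo', password='secret',
                              tenant_name='demo',
                              auth_url='http://127.0.0.1:5000/v2.0')
        return ks.auth_token

    # re-authenticate just before the final registry update:
    headers = {'X-Auth-Token': fresh_token()}
    # requests.put('%s/images/%s' % (registry, image_id), headers=headers, ...)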


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting cancelled today

2014-09-30 Thread Peter Pouliot
Hi All,

Dealing with a critical bug this morning.  We will have to postpone the Hyper-V 
meeting until next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Adam Young

On 09/30/2014 11:06 AM, Louis Taylor wrote:

On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:

What are the uses that require long lived tokens?

Glance has operations which can take a long time, such as uploading and
downloading large images.
Yes, but the token is only authenticated at the start of the operation.  
Does anything need to happen afterwards?






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Jay Pipes

On 09/30/2014 10:44 AM, Adam Young wrote:

What is keeping us from dropping the (scoped) token duration to 5 minutes?

If we could keep their lifetime as short as network skew lets us, we
would be able to:

Get rid of revocation checking.
Get rid of persisted tokens.

OK, so that assumes we can move back to PKI tokens, but we're working
on that.

What are the uses that require long lived tokens?  Can they be replaced
with a better mechanism for long term delegation (OAuth or Keystone
trusts) as Heat has done?


I think you will find that most folks just don't know the intricacies of 
non-UUID tokens in Keystone. I think we'd be open to any options that 
are reliable, well-documented and don't produce 4K in each HTTP request.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Julien Danjou
On Tue, Sep 30 2014, Doug Hellmann wrote:

> Yes, I think we are still on track to drop 2.6 support for the servers in 
> Kilo.
>
> This wasn’t used in the client libraries, right?

After a quick grep of the code I've around, it doesn't look being used
by anything else than Nova itself.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Doug Hellmann
Yes, I think we are still on track to drop 2.6 support for the servers in Kilo.

This wasn’t used in the client libraries, right?

On Sep 30, 2014, at 10:25 AM, Ben Nemec  wrote:

> This was also needed for Python 2.6, right?  Do we have confirmation
> that we can drop that for Kilo?
> 
> -Ben
> 
> On 09/30/2014 08:28 AM, Doug Hellmann wrote:
>> I agree, it sounds like option 2 is safe.
>> 
>> Julien, I updated your commit message on 
>> https://review.openstack.org/#/c/125021/ to point to this thread.
>> 
>> Write-it-down-ly,
>> Doug
>> 
>> On Sep 30, 2014, at 7:17 AM, Davanum Srinivas  wrote:
>> 
>>> Julien,
>>> 
>>> I believe all the lessons learned from defusedxml (see the release
>>> dates) have been folded back into the different libraries. For example
>>> plain old etree.fromstring() even without any special options is ok
>>> with the specially crafted xml bombs that you can find as test cases
>>> in defusedxml repo. There is some more information here as well
>>> (http://lxml.de/FAQ.html#is-lxml-vulnerable-to-xml-bombs). So at this
>>> point, unless we see a new attack vector other than the ones that
>>> caused folks to whip up defusedxml, we should be good. So Option #2 is
>>> definitely the way to go
>>> 
>>> thanks,
>>> dims
>>> 
>>> On Tue, Sep 30, 2014 at 3:45 AM, Julien Danjou  wrote:
 On Mon, Sep 29 2014, Joshua Harlow wrote:
 
> Do we know that the users (keystone, neutron...) aren't vulnerable?
> 
> From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
> seems
> like we would likely still have issues if custom implementations are being
> used/created. Perhaps we should just use the defusedxml libraries until 
> proven
> otherwise (better to be safe than sorry).
 
 According to LP#1100282¹, Keystone and Neutron are supposed to not be
 vulnerable with different fixes than Nova.
 
 Since all the solutions are different, I'm not sure it covers the
 problem in its entirety in all cases.
 
 I see 2 options:
 1. Put effort to move all projects to defusedxml
 2. Since XML API are going to be deprecated (at least in Nova), move
  xmlutils to Nova and be done with it.
 
 Solution 1 requires a lot more effort, and I wonder if it's worth it.
 
 
 ¹  https://bugs.launchpad.net/bugs/1100282
 
 --
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
>>> 
>>> 
>>> 
>>> -- 
>>> Davanum Srinivas :: https://twitter.com/dims
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 2 Minute tokens

2014-09-30 Thread Louis Taylor
On Tue, Sep 30, 2014 at 10:44:51AM -0400, Adam Young wrote:
> What are the uses that require long lived tokens?

Glance has operations which can take a long time, such as uploading and
downloading large images.


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Welcome three new members to project-config-core

2014-09-30 Thread Jeremy Stanley
With unanimous consent[1][2][3] of the OpenStack Project
Infrastructure core team (infra-core), I'm pleased to welcome
Andreas Jaeger, Anita Kuno and Sean Dague as members of the
newly-formed project-config-core team. Their assistance has been
invaluable in reviewing changes to our project-specific
configuration data, and I predict their addition to the core team
for the newly split-out openstack-infra/project-config repository
represents an immense benefit to everyone in OpenStack.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047251.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047252.html
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047253.html
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 2 Minute tokens

2014-09-30 Thread Adam Young

What is keeping us from dropping the (scoped) token duration to 5 minutes?


If we could keep their lifetime as short as network skew lets us, we 
would be able to:


Get rid of revocation checking.
Get rid of persisted tokens.

OK,  so that assumes we can move back to PKI tokens, but we're working 
on that.


What are the uses that require long lived tokens?  Can they be replaced 
with a better mechanism for long term delegation (OAuth or Keystone 
trusts) as Heat has done?
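
For reference, long-term delegation with a trust looks roughly like this
via python-keystoneclient's v3 API (all IDs, roles and endpoints below are
illustrative):

    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://127.0.0.1:35357/v3')
    trust = ks.trusts.create(
        trustor_user='user-uuid',     # the delegating user
        trustee_user='service-uuid',  # e.g. a service user acting later
        project='project-uuid',
        role_names=['Member'],
        impersonation=True)
    # The trustee exchanges the trust for short-lived scoped tokens each
    # time it needs to act, instead of holding one long-lived token.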


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Ben Nemec
This was also needed for Python 2.6, right?  Do we have confirmation
that we can drop that for Kilo?

-Ben

On 09/30/2014 08:28 AM, Doug Hellmann wrote:
> I agree, it sounds like option 2 is safe.
> 
> Julien, I updated your commit message on 
> https://review.openstack.org/#/c/125021/ to point to this thread.
> 
> Write-it-down-ly,
> Doug
> 
> On Sep 30, 2014, at 7:17 AM, Davanum Srinivas  wrote:
> 
>> Julien,
>>
>> I believe all the lessons learned from defusedxml (see the release
>> dates) have been folded back into the different libraries. For example
>> plain old etree.fromstring() even without any special options is ok
>> with the specially crafted xml bombs that you can find as test cases
>> in defusedxml repo. There is some more information here as well
>> (http://lxml.de/FAQ.html#is-lxml-vulnerable-to-xml-bombs). So at this
>> point, unless we see a new attack vector other than the ones that
>> caused folks to whip up defusedxml, we should be good. So Option #2 is
>> definitely the way to go
>>
>> thanks,
>> dims
>>
>> On Tue, Sep 30, 2014 at 3:45 AM, Julien Danjou  wrote:
>>> On Mon, Sep 29 2014, Joshua Harlow wrote:
>>>
 Do we know that the users (keystone, neutron...) aren't vulnerable?

 From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
 seems
 like we would likely still have issues if custom implementations are being
 used/created. Perhaps we should just use the defusedxml libraries until 
 proven
 otherwise (better to be safe than sorry).
>>>
>>> According to LP#1100282¹, Keystone and Neutron are supposed to not be
>>> vulnerable with different fixes than Nova.
>>>
>>> Since all the solutions are different, I'm not sure it covers the
>>> problem in its entirety in all cases.
>>>
>>> I see 2 options:
>>> 1. Put effort to move all projects to defusedxml
>>> 2. Since XML API are going to be deprecated (at least in Nova), move
>>>   xmlutils to Nova and be done with it.
>>>
>>> Solution 1 requires a lot more effort, and I wonder if it's worth it.
>>>
>>>
>>> ¹  https://bugs.launchpad.net/bugs/1100282
>>>
>>> --
>>> Julien Danjou
>>> // Free Software hacker
>>> // http://julien.danjou.info
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> -- 
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Juno RC1 available

2014-09-30 Thread Thierry Carrez
Hello everyone,

Ceilometer just published its first Juno release candidate. The list of
fixed bugs and the RC1 tarball are available for download at:
https://launchpad.net/ceilometer/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.2 final
version on October 16. You are therefore strongly encouraged to test and
validate this tarball!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/ceilometer/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ceilometer/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branch of Ceilometer is now open for Kilo
development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread John Garbutt
On 30 September 2014 14:04, joehuang  wrote:
> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances (as
> different zones), rather than a single monolithic OpenStack instance,
> because of these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server node counts scale up in modules from the hundreds to a million;
> 4) Fault and maintenance isolation between zones (only REST interface);
>
> At the same time, they also want to integrate these OpenStack instances into
> one cloud. Instead of a proprietary orchestration layer, they want to use
> the standard OpenStack framework for Northbound API compatibility with
> Heat/Horizon or other third-party ecosystem apps.
>
> We call this pattern "OpenStack Cascading", with the proposal described by
> [1][2]. A PoC live demo video can be found at [3][4].
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
> OpenStack cascading.
>
> We kindly ask for a cross-program design summit session to discuss OpenStack
> cascading and its contribution to Kilo.
>
> We kindly invite those who are interested in OpenStack cascading to work
> together and contribute it to OpenStack.
>
> (I applied for the “other projects” track [5], but it would be better to
> have a discussion as a formal cross-program session, because many core
> programs are involved.)
>
>
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

There are etherpads for suggesting cross project sessions here:
https://wiki.openstack.org/wiki/Summit/Planning
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

I am interested at comparing this to Nova's cells concept:
http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html

Cells basically scales out a single datacenter region by aggregating
multiple child Nova installations with an API cell.

Each child cell can be tested in isolation, via its own API, before
joining it up to an API cell, that adds it into the region. Each cell
logically has its own database and message queue, which helps get more
independent failure domains. You can use cell level scheduling to
restrict people or types of instances to particular subsets of the
cloud, if required.

It doesn't attempt to aggregate between regions, they are kept
independent. Except, the usual assumption that you have a common
identity between all regions.

It also keeps a single Cinder, Glance, Neutron deployment per region.

It would be great to get some help hardening, testing, and building
out more of the cells vision. I suspect we may form a new Nova subteam
to try and drive this work forward in Kilo, if we can build up
enough people wanting to work on improving cells.
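
(For context, a child cell is just a normal Nova install with something
like this in its nova.conf -- values illustrative:

    [cells]
    enable = true
    cell_type = compute
    name = cell1

and the API cell runs the same way with cell_type = api.)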

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Kolla Blueprints

2014-09-30 Thread Ryan Hallisey
Hi all,

The blueprints have been setup for Kolla: https://blueprints.launchpad.net/kolla

Currently, there are blueprints for all of the openstack services and a few 
supporting services.
They are, nova, swift, cinder, neutron, horizon, keystone, glance, ceilometer, 
heat, trove, 
zaqar, sahara, mysql, and rabbitmq.

Feel free to take ownership of a service!

Thanks,
Ryan Hallisey

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Doug Hellmann
I agree, it sounds like option 2 is safe.

Julien, I updated your commit message on 
https://review.openstack.org/#/c/125021/ to point to this thread.

Write-it-down-ly,
Doug

On Sep 30, 2014, at 7:17 AM, Davanum Srinivas  wrote:

> Julien,
> 
> I believe all the lessons learned from defusedxml (see the release
> dates) have been folded back into the different libraries. For example
> plain old etree.fromstring() even without any special options is ok
> with the specially crafted xml bombs that you can find as test cases
> in defusedxml repo. There is some more information here as well
> (http://lxml.de/FAQ.html#is-lxml-vulnerable-to-xml-bombs). So at this
> point, unless we see a new attack vector other than the ones that
> caused folks to whip up defusedxml, we should be good. So Option #2 is
> definitely the way to go
> 
> thanks,
> dims
> 
> On Tue, Sep 30, 2014 at 3:45 AM, Julien Danjou  wrote:
>> On Mon, Sep 29 2014, Joshua Harlow wrote:
>> 
>>> Do we know that the users (keystone, neutron...) aren't vulnerable?
>>> 
>>> From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
>>> seems
>>> like we would likely still have issues if custom implementations are being
>>> used/created. Perhaps we should just use the defusedxml libraries until 
>>> proven
>>> otherwise (better to be safe than sorry).
>> 
>> According to LP#1100282¹, Keystone and Neutron are supposed to not be
>> vulnerable with different fixes than Nova.
>> 
>> Since all the solutions are different, I'm not sure it covers the
>> problem in its entirety in all cases.
>> 
>> I see 2 options:
>> 1. Put effort to move all projects to defusedxml
>> 2. Since XML API are going to be deprecated (at least in Nova), move
>>   xmlutils to Nova and be done with it.
>> 
>> Solution 1 requires a lot more effort, and I wonder if it's worth it.
>> 
>> 
>> ¹  https://bugs.launchpad.net/bugs/1100282
>> 
>> --
>> Julien Danjou
>> // Free Software hacker
>> // http://julien.danjou.info
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Requirements freeze exception for testscenarios, oslotest, psycopg2, MySQL-python

2014-09-30 Thread Ihar Hrachyshka
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Do we even apply freeze for test requirements? It should be ok since
it's for tests only. So there is no real impact on the deployer side.

Also, depfreeze seems to apply to openstack/requirements repository
only [1], and projects are open to consume new dependencies from there.

[1]: https://wiki.openstack.org/wiki/DepFreeze

On 30/09/14 15:12, Anna Kamyshnikova wrote:
> I'd like to request a requirements freeze exception for
> testscenarios, oslotest, psycopg2 and MySQL-python for tests -
> https://review.openstack.org/76520
> 
> This test provides verification of models with migrations 
> synchronization which is important to have in Juno - 
> https://bugs.launchpad.net/neutron/+bug/1346444
> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUKq5xAAoJEC5aWaUY1u57aGsH/2jgJ4qmq+9hkB5xXFxXcgCR
rBUlDLp+oFiAlXTM5AgK98YBCAlvYXv0Q5dKpHcJjHFvNqbmPkolraj2us6cWaUz
ZGIYA/mTLYJaID6h65854nHSz+DsJgnM83q5pAmxL0+eXls+aYwDcG9Zm1oljy2j
ZJr7QzFAZfbQrMjrmfsEfgX7Zvp2TfKRJJUMQS6r3INruxy3H46CsrtfyIm+OPqz
GzdTxuCKgRNmJyDThR/8XNF63/h2opAKRHwbPTtmafa/2knCqboyAy8X6r3JiIFf
+Or2ZRC1DJ9au0hcajyYBKCRYMtykajdxxvuzWONH3opQvRqrSqTTZubAGDoHwE=
=2L3e
-----END PGP SIGNATURE-----

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Requirements freeze exception for testscenarios, oslotest, psycopg2, MySQL-python

2014-09-30 Thread Anna Kamyshnikova
I'd like to request a requirements freeze exception for testscenarios,
oslotest, psycopg2 and MySQL-python for tests
- https://review.openstack.org/76520

This test provides verification of models with migrations synchronization
which is important to have in Juno -
https://bugs.launchpad.net/neutron/+bug/1346444
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-30 Thread John Garbutt
I plan to start prioritising the specs that are in the review queue,
not just post-approval.

We should fast track those that have already been approved, and
particularly when code is ready to go.

My original plan was to propose a git move from juno to kilo, so it's
easy to see it's just a re-approve. But this change alters that:
https://review.openstack.org/#/c/122109/

On 30 September 2014 02:43, Michael Still  wrote:
> How about we tag these re-proposals with a commit message tag people can
> search for when they review? Perhaps "Previously-approved: Juno"?

Sure that could work.

Ideally also a link to the old juno spec in the references section
could help. Referencing it in here (after we tidy that up):
http://specs.openstack.org/openstack/nova-specs/

Linking to a WIP patch set in the review comments might also help.
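
Something like this in the commit message footer (illustrative) would make
re-proposals trivial to query for:

    Previously-approved: Juno
    Re-proposes: http://specs.openstack.org/openstack/nova-specs/...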

Thanks,
John


> On Tue, Sep 30, 2014 at 11:06 AM, Joe Gordon  wrote:
>>
>>
>>
>> On Mon, Sep 29, 2014 at 4:46 PM, Christopher Yeoh 
>> wrote:
>>>
>>> On Mon, 29 Sep 2014 13:32:57 -0700
>>> Joe Gordon  wrote:
>>>
>>> > On Mon, Sep 29, 2014 at 5:23 AM, Gary Kotton 
>>> > wrote:
>>> >
>>> > > Hi,
>>> > > Is the process documented anywhere? That is, if say for example I
>>> > > had a spec approved in J and its code did not land, how do we go
>>> > > about kicking the tires for K on that spec.
>>> > >
>>> >
>>> > Specs will need be re-submitted once we open up the specs repo for
>>> > Kilo. The Kilo template will be changing a little bit, so specs will
>>> > need a little bit of reworking. But I expect the process to approve
>>> > previously approved specs to be quicker
>>>
>>> Am biased given I have a spec approved for Juno which we didn't quite
>>> fully merge which we want to finish off early in Kilo (most of the
>>> patches are very close already to being ready to merge), but I think we
>>> should give priority to reviewing specs already approved in Juno and
>>> perhaps only require one +2 for re-approval.
>>
>>
>> I like the idea of prioritizing specs that were previously approved and
>> only requiring a single +2 for re-approval if there are no major changes to
>> them.
>>
>>>
>>>
>>> Otherwise we'll end up wasting weeks of development time just when
>>> there is lots of review bandwidth available and the CI system is
>>> lightly loaded. Honestly, ideally I'd like to just start merging as
>>> soon as Kilo opens. Nothing has changed between Juno FF and Kilo opening
>>> so there's really no reason that an approved Juno spec should not be
>>> reapproved.
>>>
>>> Chris
>>>
>>> >
>>> >
>>> > > Thanks
>>> > > Gary
>>> > >
>>> > > On 9/29/14, 1:07 PM, "John Garbutt"  wrote:
>>> > >
>>> > > >On 27 September 2014 00:31, Joe Gordon 
>>> > > >wrote:
>>> > > >> On Thu, Sep 25, 2014 at 9:21 AM, John Garbutt
>>> > > >> 
>>> > > >>wrote:
>>> > > >>> On 25 September 2014 14:10, Daniel P. Berrange
>>> > > >>>  wrote:
>>> > > >>> >> The proposal is to keep kilo-1, kilo-2 much the same as juno.
>>> > > >>>Except,
>>> > > >>> >> we work harder on getting people to buy into the priorities
>>> > > >>> >> that are set, and actively provoke more debate on their
>>> > > >>> >> "correctness", and we reduce the bar for what needs a
>>> > > >>> >> blueprint.
>>> > > >>> >>
>>> > > >>> >> We can't have 50 high priority blueprints, it doesn't mean
>>> > > >>> >> anything, right? We need to trim the list down to a
>>> > > >>> >> manageable number, based
>>> > > >>>on
>>> > > >>> >> the agreed project priorities. Thats all I mean by slots /
>>> > > >>> >> runway at this point.
>>> > > >>> >
>>> > > >>> > I would suggest we don't try to rank high/medium/low as that
>>> > > >>> > is too coarse, but rather just an ordered priority list. Then
>>> > > >>> > you would not be in the situation of having 50 high
>>> > > >>> > blueprints. We would instead naturally just start at the
>>> > > >>> > highest priority and work downwards.
>>> > > >>>
>>> > > >>> OK. I guess I was fixating about fitting things into launchpad.
>>> > > >>>
>>> > > >>> I guess having both might be what happens.
>>> > > >>>
>>> > > >>> >> > The runways
>>> > > >>> >> > idea is just going to make me less efficient at reviewing.
>>> > > >>> >> > So I'm very much against it as an idea.
>>> > > >>> >>
>>> > > >>> >> This proposal is different to the runways idea, although it
>>> > > >>>certainly
>>> > > >>> >> borrows aspects of it. I just don't understand how this
>>> > > >>> >> proposal has all the same issues?
>>> > > >>> >>
>>> > > >>> >>
>>> > > >>> >> The key to the kilo-3 proposal, is about getting better at
>>> > > >>> >> saying
>>> > > >>>no,
>>> > > >>> >> this blueprint isn't very likely to make kilo.
>>> > > >>> >>
>>> > > >>> >> If we focus on a smaller number of blueprints to review, we
>>> > > >>> >> should
>>> > > >>>be
>>> > > >>> >> able to get a greater percentage of those fully completed.
>>> > > >>> >>
>>> > > >>> >> I am just using slots/runway-like ideas to help pick the high
>>> > > >>>priority
>>> > > >>> >> blueprints we should conc

[openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-09-30 Thread joehuang
Hello, Dear TC and all, 

Large cloud operators prefer to deploy multiple OpenStack instances (as
different zones), rather than a single monolithic OpenStack instance, because
of these reasons:

1) Multiple data centers distributed geographically;
2) Multi-vendor business policy;
3) Server node counts scale up in modules from the hundreds to a million;
4) Fault and maintenance isolation between zones (only REST interface);

At the same time, they also want to integrate these OpenStack instances into
one cloud. Instead of a proprietary orchestration layer, they want to use the
standard OpenStack framework for Northbound API compatibility with
Heat/Horizon or other third-party ecosystem apps.

We call this pattern "OpenStack Cascading", with the proposal described by
[1][2]. A PoC live demo video can be found at [3][4].
 
Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
OpenStack cascading. 
 
We kindly ask for a cross-program design summit session to discuss OpenStack
cascading and its contribution to Kilo.

We kindly invite those who are interested in OpenStack cascading to work
together and contribute it to OpenStack.

(I applied for the “other projects” track [5], but it would be better to have
a discussion as a formal cross-program session, because many core programs are
involved.)
 
 
[1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[2] PoC source code: https://github.com/stackforge/tricircle
[3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
[4] Live demo video at Youku (low quality, for those who can't access 
YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
[5] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
 
Best Regards
Chaoyi Huang ( Joe Huang )
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-30 Thread Robert Li (baoli)
Xu Han,

That looks good to me. To keep it consistent with existing CLI, we should use 
ip-version instead of ‘version’. It seems to be identical to prefixing the 
option_name with v4 or v6, though.

Just to clarify, are the available opt-names coming from dnsmasq definitions?

With regard to the default, your suggestion "version is optional (no version 
means version=4)." seems to be different from Mark’s:
I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

Thanks,
Robert

On 9/30/14, 1:46 AM, "Xu Han Peng" <pengxu...@gmail.com> wrote:

Robert,

I think the CLI will look something like this, based on Mark's suggestion:

neutron port-create extra_dhcp_opts
opt_name=<opt_name>,opt_value=<opt_value>,version=4 (or 6)

This extra_dhcp_opts can be repeated and version is optional (no version means 
version=4).
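
On the API side, the request body would then carry something like this
(values are illustrative; the versionless entry falls back to whichever
default behaviour we settle on above):

"extra_dhcp_opts": [
    {"opt_name": "bootfile-name", "opt_value": "testfile.1"},
    {"opt_name": "tftp-server", "opt_value": "123.123.123.123", "version": 4},
    {"opt_name": "dns-server", "opt_value": "[2001:0200:feed:7ac0::1]", "version": 6}
]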

Xu Han

On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:
Hi Xu Han,

My question is what the CLI user interface would look like to distinguish
between v4 and v6 DHCP options?

Thanks,
Robert

On 9/28/14, 10:29 PM, "Xu Han Peng" <pengxu...@gmail.com> wrote:

Mark's suggestion works for me as well. If no one objects, I am going to start 
the implementation.

Thanks,
Xu Han

On 09/27/2014 01:05 AM, Mark McClain wrote:

On Sep 26, 2014, at 2:39 AM, Xu Han Peng <pengxu...@gmail.com> wrote:

Currently the extra_dhcp_opts has the following API interface on a port:

{
"port":
{
"extra_dhcp_opts": [
{"opt_value": "testfile.1","opt_name": "bootfile-name"},
{"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
{"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
],

 }
}

During the development of the DHCPv6 function for IPv6 subnets, we found this
format doesn't work anymore because a port can have both IPv4 and IPv6
addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4
and DHCPv6, respectively. (https://bugs.launchpad.net/neutron/+bug/1356383)

Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
compatibility, no prefix means IPv4 dhcp opt.

"extra_dhcp_opts": [
{"opt_value": "testfile.1","opt_name": "bootfile-name"},
{"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
{"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": 
"v6:dns-server"}
]

Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
compatibility, both old format and new format are acceptable, but old format 
means IPv4 dhcp opts.

"extra_dhcp_opts": {
 "ipv4": [
{"opt_value": "testfile.1","opt_name": "bootfile-name"},
{"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
 ],
 "ipv6": [
{"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": 
"dns-server"}
 ]
}

The pro of Option1 is that there is no need to change the API structure, only
to add validation and parsing for opt_name. The con of Option1 is that the
user needs to input a prefix for every opt_name, which can be error prone. The
pro of Option2 is that it's clearer than Option1. The con is that we need to
check two formats for backward compatibility.

We discussed this in the IPv6 sub-team meeting and we think Option2 is
preferred. Can I also get the community's feedback on which one is preferred,
or any other comments?


I’m -1 for both options because neither is properly backwards compatible.  
Instead we should add an optional 3rd value to the dictionary: “version”.  The 
version key would be used to make the option only apply to either version 4 or 
6.  If the key is missing or null, then the option would apply to both.

mark




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Devstack] Why route for private network is not taken care by neutron?

2014-09-30 Thread Salvatore Orlando
I reckon it is a sort of "convenience" route which allows us to connect
directly to private instances running in the network namespace from the
devstack host without having to use floating ips.

It is something which probably makes sense for dev scenarios only as
FIXED_RANGE is generally not publicly routable, so I doubt it will have any
use in production environments.

Finally, it would be technically possible for neutron to add such a route for
every subnet on the host where the l3 agent is running, but I don't see this
as something pertaining to neutron. I would simply create a local script
that wraps neutron subnet-create:

neutron subnet-create $network_id $cidr
sudo route add -net $cidr gw $router_gw_ip

Salvatore

On 30 September 2014 07:54, Xu Han Peng  wrote:

>  Hi,
>
> Can anyone help elaborate why the following line of code in devstack, which
> is trying to add a route for the VM private network via the router gateway
> IP on the network node, is *NOT* taken care of by neutron but by devstack?
> The reason to ask is that every time a router's external gateway IP changes
> or a new router is added, we have to manually change this route or add a
> new one on the network node.
>
> sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
>
>
> https://github.com/openstack-dev/devstack/blob/stable/icehouse/lib/neutron#L428
>
> Thanks!
> Xu Han
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-30 Thread Andrew Laski


On 09/30/2014 06:53 AM, Pasquale Porreca wrote:

Going back to my original question, I would like to know:

1) Is it acceptable to have the UUID passed from the client side?


In my opinion, no.  This opens a door to issues we currently don't need 
to deal with, and use cases I don't think Nova should support. Another 
possibility, which I don't like either, would be to pass in some data 
which could influence the generation of the UUID to satisfy requirements.


But there was a suggestion to look into addressing your use case on the 
QEMU mailing list, which I think would be a better approach.




2) What is the correct way to do it? I started to implement this 
feature, simply passing it as metadata with key uuid, but I feel that 
this feature should have a reserved option rather than using metadata.



On 09/25/14 17:26, Daniel P. Berrange wrote:

On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:

This is correct Daniel, except that it is done by the virtual
firmware/BIOS of the virtual machine and not by the OS (not yet 
installed at

that time).

This is the reason we thought about UUID: it is already used by the iPXE 
client
to be included in Bootstrap Protocol messages; it is taken from the <uuid> 

field in the libvirt template, and the <uuid> in libvirt is set by 
OpenStack; the
only missing passage is the chance to set the UUID in OpenStack 
instead of

have it randomly generated.

Having another user defined tag in libvirt won't help for our issue, 
since
it won't be included in Bootstrap Protocol messages, not without 
changes in
the virtual BIOS/firmware (as you stated too) and honestly my team 
doesn't

have interest in this (neither the competence).

I don't think the configdrive or metadata service would help either: 
the OS
on the instance is not yet installed at that time (the target of the 
network
boot is exactly to install the OS on the instance!), so it won't be 
able to

mount it.

Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
blob, then I don't see any currently viable options besides UUID.
There's no mechanism for passing any other data into iPXE that I
am aware of, though if there is a desire to do that it could be
raised on the QEMU mailing list for discussion.


Regards,
Daniel





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-30 Thread Soren Hansen
2014-09-12 1:05 GMT+02:00 Jay Pipes :
> If Nova was to take Soren's advice and implement its data-access layer
> on top of Cassandra or Riak, we would just end up re-inventing SQL
> Joins in Python-land.

I may very well be wrong(!), but this statement makes it sound like you've
never used e.g. Riak. Or, if you have, not done so in the way it's
supposed to be used.

If you embrace an alternative way of storing your data, you wouldn't just
blindly create a container for each table in your RDBMS.

For example: In Nova's SQL-based datastore we have a table for security
groups and another for security group rules. Rows in the security group
rules table have a foreign key referencing the security group to which
they belong. In a datastore like Riak, you could have a security group
container where each value contains not just the security group
information, but also all the security group rules. No joins in
Python-land necessary.
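
A minimal sketch of that with the Python Riak client (bucket layout and
fields are illustrative, not Nova's actual schema):

    import riak

    client = riak.RiakClient()
    groups = client.bucket('security_groups')

    groups.new('sg-1234', data={
        'name': 'web',
        'rules': [  # the rules live inside the group document
            {'protocol': 'tcp', 'from_port': 80, 'to_port': 80,
             'cidr': '0.0.0.0/0'},
            {'protocol': 'tcp', 'from_port': 443, 'to_port': 443,
             'cidr': '0.0.0.0/0'},
        ],
    }).store()

    sg = groups.get('sg-1234').data  # one read: the group plus all its rules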

> I've said it before, and I'll say it again. In Nova at least, the SQL
> schema is complex because the problem domain is complex. That means
> lots of relations, lots of JOINs, and that means the best way to query
> for that data is via an RDBMS.

I was really hoping you could be more specific than "best"/"most
appropriate" so that we could have a focused discussion.

I don't think relying on a central data store is in any conceivable way
appropriate for a project like OpenStack. Least of all Nova.

I don't see how we can build a highly available, distributed service on
top of a centralized data store like MySQL.

Tens or hundreds of thousands of nodes, spread across many, many racks
and datacentre halls are going to experience connectivity problems[1].

This means that some percentage of your infrastructure (possibly many
thousands of nodes, affecting many, many thousands of customers) will
find certain functionality not working on account of your datastore not
being reachable from the part of the control plane they're attempting to
use (or possibly only being able to read from it).

I say over and over again that people should own their own uptime.
Expect things to fail all the time. Do whatever you need to do to ensure
your service keeps working even when something goes wrong. Of course
this applies to our customers too. Even if we take the greatest care to
avoid downtime, customers should spread their workloads across multiple
availability zones and/or regions and probably even multiple cloud
providers. Their service towards their users is their responsibility.

However, our service towards our users is our responsibility. We should
take the greatest care to avoid having internal problems affect our
users.  Building a massively distributed system like Nova on top of a
centralized data store is practically a guarantee of the opposite.

> For complex control plane software like Nova, though, an RDBMS is the
> best tool for the job given the current lay of the land in open source
> data storage solutions matched with Nova's complex query and
> transactional requirements.

What transactional requirements?

> Folks in these other programs have actually, you know, thought about
> these kinds of things and had serious discussions about alternatives.
> It would be nice to have someone acknowledge that instead of snarky
> comments implying everyone else "has it wrong".

I'm terribly sorry, but repeating over and over that an RDBMS is "the
best tool" without further qualification than "Nova's data model is
really complex" reads *exactly* like a snarky comment implying everyone
else "has it wrong".

[1]: http://aphyr.com/posts/288-the-network-is-reliable

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Julien Danjou
On Tue, Sep 30 2014, Davanum Srinivas wrote:

> I believe all the lessons learned from defusedxml (see the release
> dates) have been folded back into the different libraries. For example
> plain old etree.fromstring() even without any special options is ok
> with the specially crafted xml bombs that you can find as test cases
> in defusedxml repo. There is some more information here as well
> (http://lxml.de/FAQ.html#is-lxml-vulnerable-to-xml-bombs). So at this
> point, unless we see a new attack vector other than the ones that
> caused folks to whip up defusedxml, we should be good. So Option #2 is
> definitely the way to go

Thanks for this information dims! I'll start working on that ASAP.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Daniel P. Berrange
On Tue, Sep 30, 2014 at 09:28:22AM +0930, Christopher Yeoh wrote:
> On Mon, 29 Sep 2014 18:03:20 +0200
> Julien Danjou  wrote:
> > 
> > It seems that Python fixed that issue with 2 modules released on PyPI:
> > 
> >   https://pypi.python.org/pypi/defusedxml
> >   https://pypi.python.org/pypi/defusedexpat
> > 
> > I'm no XML expert, and I've only a shallow understanding of the issue,
> > but I wonder if we should put some efforts to drop xmlutils and our
> > custom XML fixes to used instead these 2 modules.
> 
> Nova XML API support is marked as deprecated in Juno. So hopefully
> we'll be able to just drop XML and associated helper modules within a
> couple of cycles.

Even if Nova doesn't have an XML API, internals of Nova still need to
deal with XML. eg the libvirt driver needs to parse & format XML docs.
At least these docs are coming from a trusted source, not the untrusted
end user, but we'll always have a need for python XML modules of some
kind.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Davanum Srinivas
Julien,

I believe all the lessons learned from defusedxml (see the release
dates) have been folded back into the different libraries. For example
plain old etree.fromstring() even without any special options is ok
with the specially crafted xml bombs that you can find as test cases
in defusedxml repo. There is some more information here as well
(http://lxml.de/FAQ.html#is-lxml-vulnerable-to-xml-bombs). So at this
point, unless we see a new attack vector other than the ones that
caused folks to whip up defusedxml, we should be good. So Option #2 is
definitely the way to go.
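
For anyone who wants to double-check this locally, here is a rough sketch of
that test (an illustration, not project code) - building a classic "billion
laughs" payload and feeding it to plain lxml; a recent libxml2 aborts the
parse with an XMLSyntaxError instead of expanding it:

from lxml import etree

# ten levels of tenfold entity expansion -> ~10**9 copies of "lol"
entities = ['<!ENTITY lol0 "lol">']
for i in range(1, 10):
    refs = ('&lol%d;' % (i - 1)) * 10
    entities.append('<!ENTITY lol%d "%s">' % (i, refs))
bomb = '<!DOCTYPE lolz [\n%s\n]>\n<lolz>&lol9;</lolz>' % '\n'.join(entities)

try:
    etree.fromstring(bomb)
    print('payload expanded - this libxml2 looks vulnerable')
except etree.XMLSyntaxError as exc:
    print('parse aborted:', exc)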

thanks,
dims

On Tue, Sep 30, 2014 at 3:45 AM, Julien Danjou  wrote:
> On Mon, Sep 29 2014, Joshua Harlow wrote:
>
>> Do we know that the users (keystone, neutron...) aren't vulnerable?
>>
>> From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
>> seems
>> like we would likely still have issues if custom implementations are being
>> used/created. Perhaps we should just use the defusedxml libraries until 
>> proven
>> otherwise (better to be safe than sorry).
>
> According to LP#1100282¹, Keystone and Neutron are supposed to not be
> vulnerable with different fixes than Nova.
>
> Since all the solutions are different, I'm not sure it covers the
> problem in its entirety in all cases.
>
> I see 2 options:
> 1. Put effort to move all projects to defusedxml
> 2. Since XML API are going to be deprecated (at least in Nova), move
>xmlutils to Nova and be done with it.
>
> Solution 1 requires a lot more effort, and I wonder if it's worth it.
>
>
> ¹  https://bugs.launchpad.net/bugs/1100282
>
> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally]Rally exception

2014-09-30 Thread Sergey Skripnick

It is not a known issue, but it will be fixed very soon.

Meanwhile, I can suggest that Rally was unable to connect to the VM for
some reason.
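
For reference, the AttributeError in the quoted traceback can be avoided
with a simple guard in close() - a minimal sketch against rally/sshutils.py,
assuming _client holds either a paramiko.SSHClient instance or False:

def close(self):
    # _client stays False until a connection is established, so only a
    # real paramiko.SSHClient ever gets close() called on it
    if self._client:
        self._client.close()
    self._client = False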



I ran into the exception below while trying to run a Rally scenario.

Traceback (most recent call last):
  File "/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/runners/base.py", line 73, in _run_scenario_once
    method_name)(**kwargs) or scenario_output
  File "/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/vm_perf.py", line 139, in boot_runperf_delete
    self.server.dispose()
  File "/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/IperfInstance.py", line 68, in dispose
    Instance.dispose(self)
  File "/home/localadmin/openstack/cvg_rally/rally/rally/benchmark/scenarios/vm_int/instance.py", line 96, in dispose
    self.ssh_instance.close()
  File "/home/localadmin/openstack/cvg_rally/rally/rally/sshutils.py", line 136, in close
    self._client.close()
AttributeError: 'bool' object has no attribute 'close'


Looking at the code, this is how close() is written:
= rally/sshutils.py
135 def close(self):
136 self._client.close()
137 self._client = False

 85     def __init__(self, user, host, port=22, pkey=None,
 86                  key_filename=None, password=None):
 87         """Initialize SSH client.
 88
 89         :param user: ssh username
 90         :param host: hostname or ip address of remote ssh server
 91         :param port: remote ssh port
 92         :param pkey: RSA or DSS private key string or file object
 93         :param key_filename: private key filename
 94         :param password: password
 95
 96         """
 97
 98         self.user = user
 99         self.host = host
100         self.port = port
101         self.pkey = self._get_pkey(pkey) if pkey else None
102         self.password = password
103         self.key_filename = key_filename
104         self._client = False

==

This _client object is used as a boolean, yet the code above invokes a
method on it. Is this intentional? If not, is this a known issue?


Thanks,
Harshil





--
Using Opera's mail client: http://www.opera.com/mail/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-30 Thread Pasquale Porreca

Going back to my original question, I would like to know:

1) Is it acceptable to have the UUID passed from the client side?

2) What is the correct way to do it? I started to implement this
feature, simply passing it as metadata with key uuid, but I feel that
this feature should have a reserved option rather than use metadata.
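
For illustration, the metadata-based approach mentioned above looks roughly
like this from the client side with python-novaclient (credentials, IDs and
the UUID value below are placeholders; the missing piece is the server-side
change that would map this onto the libvirt <uuid> element):

from novaclient import client

nova = client.Client('2', 'demo', 'secret', 'demo',
                     'http://controller:5000/v2.0')

nova.servers.create(
    name='pxe-install-target',
    image='<image-id>',
    flavor='<flavor-id>',
    # the instance UUID we want the virtual BIOS to expose via iPXE
    meta={'uuid': '11111111-2222-3333-4444-555566667777'},
)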



On 09/25/14 17:26, Daniel P. Berrange wrote:

On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:

This is correct, Daniel, except that it is done by the virtual
firmware/BIOS of the virtual machine and not by the OS (which is not yet
installed at that time).

This is the reason we thought about UUID: it is already used by the iPXE
client, included in Bootstrap Protocol messages; it is taken from the <uuid>
field in the libvirt template, and the <uuid> in libvirt is set by OpenStack.
The only missing piece is the ability to set the UUID in OpenStack instead of
having it randomly generated.

Having another user-defined tag in libvirt won't help with our issue, since
it won't be included in Bootstrap Protocol messages, not without changes in
the virtual BIOS/firmware (as you stated too), and honestly my team has
neither the interest in this nor the competence.

I don't think the configdrive or metadata service would help either: the OS
on the instance is not yet installed at that time (the goal of the network
boot is exactly to install the OS on the instance!), so it won't be able to
mount it.

Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
blob, then I don't see any currently viable options besides UUID.
There's no mechanism for passing any other data into iPXE that I
am aware of, though if there is a desire to do that, it could be
raised on the QEMU mailing list for discussion.


Regards,
Daniel


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-30 Thread Rossella Sblendido
Hi Alex,

a spoof filter is set by default to avoid that a VM can send packets
whose source address is different from the VM's address. There's no
option to change that.

cheers,

Rossella

On 09/25/2014 10:59 PM, Alexandre Levine wrote:
> Hi All,
> 
> I'm looking for a way to set port_filter flag to False for port binding.
> Is there a way to do this in IceHouse or in current Juno code? I use
> devstack with the default ML2 plugin and configuration.
> 
> According to this guide
> (http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html)
> it should be done via binding:profile, but it only gets recorded in the
> binding:profile dictionary and doesn't get reflected in vif_details as it
> is supposed to.
> 
> I tried to find any code in Neutron that can potentially do this
> transferring from incoming binding:profile into binding:vif_details and
> found none.
> 
> I'd be very grateful if anybody can point me in the right direction.
> 
> And by the way, the reason I'm trying to do this is because I want to use
> one instance as NAT for another one in a private subnet. When I
> ping 8.8.8.8 from the private instance via the NAT instance, the reply gets
> dropped by the security rule in iptables on the TAP interface of the NAT
> instance because the source is different from the NAT instance IP. So I
> suppose that port_filter is responsible for this behavior and will
> remove this restriction in iptables.
> 
> Best regards,
>   Alex Levine
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] for help!

2014-09-30 Thread Salvatore Orlando
On 30 September 2014 10:26, Linchengyong  wrote:

>  Dear all,
>
> Could anyone help me? I have a few questions.
>
> 1. Can Neutron create a complex virtual network topology? For example,
> can any two routers be interconnected with each other?
>

Only if you bridge them with a network - direct connections are not
possible at the moment.
Something like:
      169.254.169.1        169.254.169.2
   R1 ----------- BRIDGE ----------- R2
   |                                  |
INT_NET_1                        INT_NET_2

Also, Neutron routers do not currently support dynamic routing
protocols, so you'll have to manage static routes (e.g. INT_NET_1's CIDR on
R2 via 169.254.169.1).
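
A rough sketch of wiring this up with python-neutronclient (credentials,
router IDs and the INT_NET_1 CIDR are illustrative placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# network + subnet acting as the bridge between the two routers
net = neutron.create_network({'network': {'name': 'bridge-net'}})['network']
neutron.create_subnet({'subnet': {
    'network_id': net['id'], 'ip_version': 4,
    'cidr': '169.254.169.0/24', 'gateway_ip': None}})

# attach each router through a port with a known address
for router_id, ip in [('<R1-id>', '169.254.169.1'),
                      ('<R2-id>', '169.254.169.2')]:
    port = neutron.create_port({'port': {
        'network_id': net['id'],
        'fixed_ips': [{'ip_address': ip}]}})['port']
    neutron.add_interface_router(router_id, {'port_id': port['id']})

# static route so R2 can reach INT_NET_1 via R1's bridge address
neutron.update_router('<R2-id>', {'router': {'routes': [
    {'destination': '<INT_NET_1-cidr>', 'nexthop': '169.254.169.1'}]}})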

>  2. If not, does the Neutron team have any plan to support
> complex virtual network topologies?
>
Neutron has plans for everything and the exact opposite! If you think it
should support more complex topologies - whatever that might mean - feel
free to share your ideas.

>  3. Since we can't create a complex virtual network topology with the
> GUI, we are wondering whether Neutron already supports creating complex
> virtual network topologies and is just constrained by the GUI?
>
I reckon Horizon pretty much allows for controlling most of neutron API
features. I'm not sure about static routes however.


>
>
> Thanks very much.
>
>
>
> Best Regards,
>
> Chengyong Lin
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] for help!

2014-09-30 Thread Fawad Khaliq
Hi Chengyong,

I remember there is a blueprint [1] by Edgar Magana which talks about
similar use cases, but it is still marked for discussion whether this belongs
in Neutron or Heat.

[1] https://blueprints.launchpad.net/neutron/+spec/network-topologies-api

Fawad Khaliq

On Tue, Sep 30, 2014 at 2:26 PM, Linchengyong 
wrote:

>  Dear all,
>
> Could anyone help me? I have a few questions.
>
> 1. Can Neutron create a complex virtual network topology? For example,
> can any two routers be interconnected with each other?
>
> 2. If not, does the Neutron team have any plan to support
> complex virtual network topologies?
>
> 3. Since we can't create a complex virtual network topology with the
> GUI, we are wondering whether Neutron already supports creating complex
> virtual network topologies and is just constrained by the GUI?
>
>
>
> Thanks very much.
>
>
>
> Best Regards,
>
> Chengyong Lin
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] for help!

2014-09-30 Thread Linchengyong
Dear all,

Could anyone help me? I have a few questions.

1. Can Neutron create a complex virtual network topology? For example, can
any two routers be interconnected with each other?

2. If not, does the Neutron team have any plan to support complex
virtual network topologies?

3. Since we can't create a complex virtual network topology with the GUI, we
are wondering whether Neutron already supports creating complex virtual
network topologies and is just constrained by the GUI?



Thanks very much.



Best Regards,

Chengyong Lin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-30 Thread Julien Danjou
On Mon, Sep 29 2014, Morgan Fainberg wrote:

> The big issue you're going to run into is locking. The indexes need to have
> a distributed lock that guarantees that each index is read/updated/released
> atomically (similar to the SQL transaction). The way memcache and redis
> handle this is by trying to add a record that is based on the index record
> name and if that "add" fails (already exists) we assume the referenced
> record is locked. We automatically timeout the lock after a period of time.

For distributed locking, using tooz¹ could solve that issue.
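
For instance, a minimal sketch of serializing index updates with a tooz lock
(the backend URL, member id and lock name are illustrative):

from tooz import coordination

coordinator = coordination.get_coordinator(
    'memcached://127.0.0.1:11211', b'keystone-worker-1')
coordinator.start()

# plays the role of the "add a record" trick described above
with coordinator.get_lock(b'token-index-lock'):
    pass  # read/update the index record atomically here

coordinator.stop()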


¹  http://tooz.readthedocs.org/en/latest/

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Fate of xmlutils

2014-09-30 Thread Julien Danjou
On Mon, Sep 29 2014, Joshua Harlow wrote:

> Do we know that the users (keystone, neutron...) aren't vulnerable?
>
> From https://pypi.python.org/pypi/defusedxml#python-xml-libraries it sure 
> seems
> like we would likely still have issues if custom implementations are being
> used/created. Perhaps we should just use the defusedxml libraries until proven
> otherwise (better to be safe than sorry).

According to LP#1100282¹, Keystone and Neutron are supposed to not be
vulnerable with different fixes than Nova.

Since all the solutions are different, I'm not sure it covers the
problem in its entirety in all cases.

I see 2 options:
1. Put effort to move all projects to defusedxml
2. Since XML API are going to be deprecated (at least in Nova), move
   xmlutils to Nova and be done with it.

Solution 1 requires a lot more effort, and I wonder if it's worth it.


¹  https://bugs.launchpad.net/bugs/1100282

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-09-30 Thread Angus Salkeld
On Fri, Sep 26, 2014 at 7:04 AM, Steve Baker  wrote:

> On 26/09/14 05:36, Timur Sufiev wrote:
>
>> Hello, folks!
>>
>> Following Drago Rosson's introduction of Barricade.js and our discussion
>> in ML about possibility of using it in Merlin [1], I've decided to change
>> the plans for PoC: now the goal for Merlin's PoC is to implement Mistral
>> Workbook builder on top of Barricade.js. The reasons for that are:
>>
>> * To better understand Barricade.js potential as data abstraction layer
>> in Merlin, I need to learn much more about its possibilities and
>> limitations than simple examining/reviewing of its source code allows. The
>> best way to do this is by building upon it.
>> * It's becoming too crowded in the HOT builder's sandbox - doing the same
>> work as Drago currently does [2] seems like a waste of resources to me
>> (especially in case he'll opensource his HOT builder someday just as he did
>> with Barricade.js).
>>
>
> Drago, it would be to everyone's benefit if your HOT builder efforts were
> developed on a public git repository, no matter how functional it is
> currently.
>
> Is there any chance you can publish what you're working on to
> https://github.com/dragorosson or rackerlabs for a start?
>

Drago, any news on this? This would prevent a lot of duplication of work and
later merging of code. The sooner this is done the better.

-Angus


>
>> * Why Mistral and not Murano or Solum? Because Mistral's YAML templates
>> have a simpler structure than Murano's and are better defined at the
>> moment than the ones in Solum.
>>
>> There are already some commits in https://github.com/stackforge/merlin and
>> since the client-side app doesn't talk to the Mistral server yet, it is
>> pretty easy to run it (just follow the instructions in README.md) and then
>> see it in a browser at http://localhost:8080. The UI is not yet great, as
>> the current focus is data abstraction layer exploration, i.e. how to
>> exploit Barricade.js capabilities to reflect all relations between
>> Mistral's entities. I hope to finish the minimal set of features in a few
>> weeks - and will certainly announce it in the ML.
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-
>> September/044591.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-
>> August/044186.html
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] [Glance] Juno RC1 available

2014-09-30 Thread Thierry Carrez
Hello everyone,

Keystone and Glance are the first integrated projects to publish a
release candidate in preparation for the final Juno release.

The RC1 tarballs are available for download at:
https://launchpad.net/keystone/juno/juno-rc1
https://launchpad.net/glance/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as the 2014.2
final version on October 16. You are therefore strongly encouraged to
test and validate these tarballs!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/keystone/tree/proposed/juno
https://github.com/openstack/glance/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/keystone/+filebug
https://bugs.launchpad.net/glance/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branch of Keystone and Glance are now open for
Kilo development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-09-30 Thread nitin singh
Nice articles, Akilesh.


On Tuesday, 30 September 2014 12:56 PM, Akilesh K  wrote:
 


Sorry, the correct links are:
1. Comparison between networking devices and Linux software components
2. Openstack ovs plugin configuration for single/multi machine setup
3. Neutron ovs plugin layer 2 connectivity
4. Layer 3 connectivity using neutron-l3-agent



On Tue, Sep 30, 2014 at 12:50 PM, Andreas Scheuring 
 wrote:

Hi Ageeleshwar,
>the links you provided are wordpress admin links and require a login. Is
>there also a public link available?
>Thanks
>--
>Andreas
>(irc: scheuran)
>
>
>
>On Tue, 2014-09-30 at 09:33 +0530, Akilesh K wrote:
>> Hi,
>>
>> I saw the table of contents. I have posted documents on configuring
>> openstack neutron-openvswitch-plugin, comparison between networking
>> devices and their Linux software components, and also about the working
>> principles of neutron-ovs-plugin at layer 2 and neutron-l3-agent at
>> layer 3. My intention with the posts was to aid beginners in
>> debugging neutron issues.
>>
>>
>> The problem is that I am not sure where exactly these posts fit in the
>> topic of contents. Anyone with suggestions please reply to me. Below
>> are the link to the blog posts
>>
>>
>> 1. Comparison between networking devices and linux software components
>>
>> 2. Openstack ovs plugin configuration for single/multi machine setup
>>
>> 3. Neutron ovs plugin layer 2 connectivity
>>
>> 4. Layer 3 connectivity using neutron-l3-agent
>>
>>
>> I would be glad to include sub sections in any of these posts if that
>> helps.
>>
>>
>> Thank you,
>> Ageeleshwar K
>>
>>
>> On Tue, Sep 30, 2014 at 2:36 AM, Nicholas Chase 
>> wrote:
>> As you know, we're always looking for ways for people to be
>> able to contribute to Docs, but we do understand that there's
>> a certain amount of pain involved in dealing with Docbook.  So
>> to try and make this process easier, we're going to try an
>> experiment.
>>
>> What we've put together is a system where you can update a
>> wiki with links to content in whatever form you've got it --
>> gist on github, wiki page, blog post, whatever -- and we have
>> a dedicated resource that will turn it into actual
>> documentation, in Docbook. If you want to be added as a
>> co-author on the patch, make sure to provide us the email
>> address you used to become a Foundation member.
>>
>> Because we know that the networking documentation needs
>> particular attention, we're starting there.  We have a
>> Networking Guide, from which we will ultimately pull
>> information to improve the networking section of the admin
>> guide.  The preliminary Table of Contents is here:
>> https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
>> instructions for contributing are as follows:
>>
>>  1. Pick an existing topic or create a new topic. For new
>> topics, we're primarily interested in deployment
>> scenarios.
>>  2. Develop content (text and/or diagrams) in a format
>> that supports at least basic markup (e.g., titles,
>> paragraphs, lists, etc.).
>>  3. Provide a link to the content (e.g., gist on
>> github.com, wiki page, blog post, etc.) under the
>> associated topic.
>>  4. Send e-mail to reviewers network...@openstacknow.com.
>>  5. A writer turns the content into an actual patch, with
>> tracking bug, and docs reviewers (and the original
>> author, we would hope) make sure it gets reviewed and
>> merged.
>>
>> Please let us know if you have any questions/comments.
>> Thanks!
>>
>>   Nick
>> --
>> Nick Chase
>> 1-650-567-5640
>> Technical Marketing Manager, Mirantis
>> Editor, OpenStack:Now
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-09-30 Thread Akilesh K
Sorry, the correct links are:
1. Comparison between networking devices and linux software components

2. Openstack ovs plugin configuration for single/multi machine setup

3. Neutron ovs plugin layer 2 connectivity

4. Layer 3 connectivity using neutron-l3-agent


On Tue, Sep 30, 2014 at 12:50 PM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:

> Hi Ageeleshwar,
> the links you provided are wordpress admin links and require a login. Is
> there also a public link available?
> Thanks
> --
> Andreas
> (irc: scheuran)
>
>
> On Tue, 2014-09-30 at 09:33 +0530, Akilesh K wrote:
> > Hi,
> >
> > I saw the table of contents. I have posted documents on configuring
> > openstack neutron-openvswitch-plugin, comparison between networking
> > devices and their Linux software components, and also about the working
> > principles of neutron-ovs-plugin at layer 2 and neutron-l3-agent at
> > layer 3. My intention with the posts was to aid beginners in
> > debugging neutron issues.
> >
> >
> > The problem is that I am not sure where exactly these posts fit in the
> > topic of contents. Anyone with suggestions please reply to me. Below
> > are the link to the blog posts
> >
> >
> > 1. Comparison between networking devices and linux software components
> >
> > 2. Openstack ovs plugin configuration for single/multi machine setup
> >
> > 3. Neutron ovs plugin layer 2 connectivity
> >
> > 4. Layer 3 connectivity using neutron-l3-agent
> >
> >
> > I would be glad to include sub sections in any of these posts if that
> > helps.
> >
> >
> > Thank you,
> > Ageeleshwar K
> >
> >
> > On Tue, Sep 30, 2014 at 2:36 AM, Nicholas Chase 
> > wrote:
> > As you know, we're always looking for ways for people to be
> > able to contribute to Docs, but we do understand that there's
> > a certain amount of pain involved in dealing with Docbook.  So
> > to try and make this process easier, we're going to try an
> > experiment.
> >
> > What we've put together is a system where you can update a
> > wiki with links to content in whatever form you've got it --
> > gist on github, wiki page, blog post, whatever -- and we have
> > a dedicated resource that will turn it into actual
> > documentation, in Docbook. If you want to be added as a
> > co-author on the patch, make sure to provide us the email
> > address you used to become a Foundation member.
> >
> > Because we know that the networking documentation needs
> > particular attention, we're starting there.  We have a
> > Networking Guide, from which we will ultimately pull
> > information to improve the networking section of the admin
> > guide.  The preliminary Table of Contents is here:
> > https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
> > instructions for contributing are as follows:
> >
> >  1. Pick an existing topic or create a new topic. For new
> > topics, we're primarily interested in deployment
> > scenarios.
> >  2. Develop content (text and/or diagrams) in a format
> > that supports at least basic markup (e.g., titles,
> > paragraphs, lists, etc.).
> >  3. Provide a link to the content (e.g., gist on
> > github.com, wiki page, blog post, etc.) under the
> > associated topic.
> >  4. Send e-mail to reviewers network...@openstacknow.com.
> >  5. A writer turns the content into an actual patch, with
> > tracking bug, and docs reviewers (and the original
> > author, we would hope) make sure it gets reviewed and
> > merged.
> >
> > Please let us know if you have any questions/comments.
> > Thanks!
> >
> >   Nick
> > --
> > Nick Chase
> > 1-650-567-5640
> > Technical Marketing Manager, Mirantis
> > Editor, OpenStack:Now
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Contributing to docs without Docbook -- YES you can!

2014-09-30 Thread Andreas Scheuring
Hi Ageeleshwar, 
the links you provided are wordpress admin links and require a login. Is
there also a public link available?
Thanks
-- 
Andreas 
(irc: scheuran)


On Tue, 2014-09-30 at 09:33 +0530, Akilesh K wrote:
> Hi,
> 
> I saw the table of contents. I have posted documents on configuring
> openstack neutron-openvswitch-plugin, comparison between networking
> devices and their Linux software components, and also about the working
> principles of neutron-ovs-plugin at layer 2 and neutron-l3-agent at
> layer 3. My intention with the posts was to aid beginners in
> debugging neutron issues. 
> 
> 
> The problem is that I am not sure where exactly these posts fit in the
> topic of contents. Anyone with suggestions please reply to me. Below
> are the link to the blog posts
> 
> 
> 1. Comparison between networking devices and linux software components
> 
> 2. Openstack ovs plugin configuration for single/multi machine setup
> 
> 3. Neutron ovs plugin layer 2 connectivity
> 
> 4. Layer 3 connectivity using neutron-l3-agent
> 
> 
> I would be glad to include sub sections in any of these posts if that
> helps.
> 
> 
> Thank you,
> Ageeleshwar K
> 
> 
> On Tue, Sep 30, 2014 at 2:36 AM, Nicholas Chase 
> wrote:
> As you know, we're always looking for ways for people to be
> able to contribute to Docs, but we do understand that there's
> a certain amount of pain involved in dealing with Docbook.  So
> to try and make this process easier, we're going to try an
> experiment.
> 
> What we've put together is a system where you can update a
> wiki with links to content in whatever form you've got it --
> gist on github, wiki page, blog post, whatever -- and we have
> a dedicated resource that will turn it into actual
> documentation, in Docbook. If you want to be added as a
> co-author on the patch, make sure to provide us the email
> address you used to become a Foundation member.
> 
> Because we know that the networking documentation needs
> particular attention, we're starting there.  We have a
> Networking Guide, from which we will ultimately pull
> information to improve the networking section of the admin
> guide.  The preliminary Table of Contents is here:
> https://wiki.openstack.org/wiki/NetworkingGuide/TOC , and the
> instructions for contributing are as follows:
> 
>  1. Pick an existing topic or create a new topic. For new
> topics, we're primarily interested in deployment
> scenarios.
>  2. Develop content (text and/or diagrams) in a format
> that supports at least basic markup (e.g., titles,
> paragraphs, lists, etc.).
>  3. Provide a link to the content (e.g., gist on
> github.com, wiki page, blog post, etc.) under the
> associated topic.
>  4. Send e-mail to reviewers network...@openstacknow.com.
>  5. A writer turns the content into an actual patch, with
> tracking bug, and docs reviewers (and the original
> author, we would hope) make sure it gets reviewed and
> merged.
> 
> Please let us know if you have any questions/comments.
> Thanks!
> 
>   Nick
> -- 
> Nick Chase 
> 1-650-567-5640
> Technical Marketing Manager, Mirantis
> Editor, OpenStack:Now
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-09-30 Thread Renat Akhmerov
Timur,

For us, undoubtedly, this is great news. Visualization of any kind is really
important for Mistral for a number of reasons. You can count on any
help (including code contributions) from our side.

Thanks

Renat Akhmerov
@ Mirantis Inc.



On 26 Sep 2014, at 04:04, Steve Baker  wrote:

> On 26/09/14 05:36, Timur Sufiev wrote:
>> Hello, folks!
>> 
>> Following Drago Rosson's introduction of Barricade.js and our discussion in 
>> ML about possibility of using it in Merlin [1], I've decided to change the 
>> plans for PoC: now the goal for Merlin's PoC is to implement Mistral 
>> Workbook builder on top of Barricade.js. The reasons for that are:
>> 
>> * To better understand Barricade.js potential as data abstraction layer in 
>> Merlin, I need to learn much more about its possibilities and limitations 
>> than simple examining/reviewing of its source code allows. The best way to 
>> do this is by building upon it.
>> * It's becoming too crowded in the HOT builder's sandbox - doing the same 
>> work as Drago currently does [2] seems like a waste of resources to me 
>> (especially in case he'll opensource his HOT builder someday just as he did 
>> with Barricade.js).
> 
> Drago, it would be to everyone's benefit if your HOT builder efforts were 
> developed on a public git repository, no matter how functional it is 
> currently.
> 
> Is there any chance you can publish what you're working on to 
> https://github.com/dragorosson or rackerlabs for a start?
> 
>> * Why Mistral and not Murano or Solum? Because Mistral's YAML templates have
>> a simpler structure than Murano's and are better defined at the moment
>> than the ones in Solum.
>> 
>> There are already some commits in https://github.com/stackforge/merlin and
>> since the client-side app doesn't talk to the Mistral server yet, it is
>> pretty easy to run it (just follow the instructions in README.md) and then
>> see it in a browser at http://localhost:8080. The UI is not yet great, as
>> the current focus is data abstraction layer exploration, i.e. how to exploit
>> Barricade.js capabilities to reflect all relations between Mistral's
>> entities. I hope to finish the minimal set of features in a few weeks - and
>> will certainly announce it in the ML.
>> 
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-dev/2014-September/044591.html
>> [2] 
>> http://lists.openstack.org/pipermail/openstack-dev/2014-August/044186.html
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev