Re: [openstack-dev] a common client library

2014-01-16 Thread Flavio Percoco

On 15/01/14 21:35 +, Jesse Noller wrote:


On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:


Several people have mentioned to me that they are interested in, or actively working on, 
code related to a common client library -- something meant to be reused 
directly as a basis for creating a common library for all of the openstack clients to 
use. There's a blueprint [1] in oslo, and I believe the keystone devs and unified CLI 
teams are probably interested in ensuring that the resulting API ends up meeting all of 
our various requirements.

If you're interested in this effort, please subscribe to the blueprint and use 
that to coordinate efforts so we don't produce more than one common library. ;-)

Thanks,
Doug


[1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2


*raises hand*

Me me!

I’ve been talking to many contributors about the Developer Experience stuff I 
emailed out prior to the holidays and I was starting blueprint work, but this 
is a great pointer. I’m going to have to sync up with Alexei.

I think solving this for openstack developers and maintainers as the blueprint 
says is a big win in terms of code reuse / maintenance and consistency, but more 
so for *end-user developers* consuming openstack clouds.

Some background - there’s some terminology mismatch but the rough idea is the 
same:

* A centralized “SDK” (Software Development Kit) would be built condensing the 
common code and logic and operations into a single namespace.

* This SDK would be able to be used by “downstream” CLIs - essentially the CLIs 
become a specialized front end - and in some cases, only an argparse or cliff 
front-end to the SDK methods located in the (for example) 
openstack.client.api.compute

* The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived 
clients - all of the openstack.client.api.** classes would accept an Auth 
object to delegate management / mocking of the Auth / service catalog stuff to. 
This means developers building applications (say for example, horizon) don’t 
need to worry about token/expired authentication/etc.

* Simplify the dependency graph & code for the existing tools to enable single 
binary installs (py2exe, py2app, etc) for end users of the command line tools.

Short version: if a developer wants to consume an openstack cloud, they would 
have a single SDK with minimal dependencies and import from a single namespace. 
An example application might look like:

from openstack.api import AuthV2
from openstack.api import ComputeV2

myauth = AuthV2(…., connect=True)
compute = ComputeV2(myauth)

compute.list_flavors()



I know this is an example but, could we leave the version out of the
class name? Having something like:

from openstack.api.v2 import Compute

   or

from openstack.compute.v2 import Instance

(just made that up)

for marconi we're using the latter.
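
As a rough illustration of the namespace-versioned idea (module and class
names below are hypothetical, not an agreed openstack API - this just mirrors
the example above with the version moved into the namespace):

# Hypothetical layout only; nothing here is an existing library.
from openstack.identity.v2 import Auth
from openstack.compute.v2 import Compute

auth = Auth(auth_url='http://keystone.example.com:5000/v2.0',
            username='demo', password='secret', tenant_name='demo')
auth.connect()              # token + service catalog handled by the Auth object

compute = Compute(auth)     # auth / re-auth delegated to the Auth object
for flavor in compute.list_flavors():
    print(flavor.name)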


This greatly improves the developer experience both internal to openstack and 
externally. Currently OpenStack has 22+ (counting stackforge) potential 
libraries a developer may need to install to use a full deployment of OpenStack:

 * python-keystoneclient (identity)
 * python-glanceclient (image)
 * python-novaclient (compute)
 * python-troveclient (database)
 * python-neutronclient (network)
 * python-ironicclient (bare metal)
 * python-heatclient (orchestration)
 * python-cinderclient (block storage)
 * python-ceilometerclient (telemetry, metrics & billing)
 * python-swiftclient (object storage)
 * python-savannaclient (big data)
 * python-openstackclient (meta client package)
 * python-marconiclient (queueing)
 * python-tuskarclient (tripleo / management)
 * python-melangeclient (dead)
 * python-barbicanclient (secrets)
 * python-solumclient (ALM)
 * python-muranoclient (application catalog)
 * python-manilaclient (shared filesystems)
 * python-libraclient (load balancers)
 * python-climateclient (reservations)
 * python-designateclient (Moniker/DNS)

If you exclude the above and look at PyPI (client libraries/SDKs only, not 
maintained by openstack):

* hpcloud-auth-openstack 1.0
* python-openstackclient 0.2.2
* rackspace-auth-openstack 1.1
* posthaste 0.2.2
* pyrax 1.6.2
* serverherald 0.0.1
* warm 0.3.1
* vaporize 0.3.2
* swiftsc (https://github.com/mkouhei/swiftsc)
* bookofnova 0.007
* nova-adminclient 0.1.8
* python-quantumclient 2.2.4.3
* python-stackhelper 0.0.7.1.gcab1eb0
* swift-bench 1.0
* swiftly 1.12
* txAWS 0.2.3
* cfupload 0.5.1
* python-reddwarfclient 0.1.2
* python-automationclient 1.2.1
* rackspace-glanceclient 0.9
* rackspace-novaclient 1.4

If you ignore PyPI and just want to install, say, the base 7 services, each one 
of the python-* clients has its own dependency tree and may or may not build on 
one of the others. If a vendor wants to extend any of them, it's basically 
a fork instead of a clean plugin system.

On the CLI front - this would *greatly* simplify the work openstackclient needs 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread yongli he

On 2014-01-16 08:28, Ian Wells wrote:
To clarify a couple of Robert's points, since we had a conversation 
earlier:
On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:


---  do we agree that BDF address (or device id, whatever you call
it), and node id shouldn't be used as attributes in defining a PCI
flavor?


Note that the current spec doesn't actually exclude it as an option.  
It's just an unwise thing to do.  In theory, you could elect to define 
your flavors using the BDF attribute but determining 'the card in this 
slot is equivalent to all the other cards in the same slot in other 
machines' is probably not the best idea...  We could lock it out as an 
option or we could just assume that administrators wouldn't be daft 
enough to try.


  * the compute node needs to know the PCI flavor. [...]
  - to support live migration, we need to use it
to create network xml


I didn't understand this at first and it took me a while to get what 
Robert meant here.


This is based on Robert's current code for macvtap based live 
migration.  The issue is that if you wish to migrate a VM and it's 
tied to a physical interface, you can't guarantee that the same 
physical interface is going to be used on the target machine, but at 
the same time you can't change the libvirt.xml as it comes over with 
the migrating machine.  The answer is to define a network and refer 
out to it from libvirt.xml.  In Robert's current code he's using the 
group name of the PCI devices to create a network containing the list 
of equivalent devices (those in the group) that can be macvtapped.  
Thus when the host migrates it will find another, equivalent, 
interface. This falls over in the use case under consideration, where a device 
can be mapped using more than one flavor, so we have to discard the use case 
or rethink the implementation.

But with the flavor we defined, the group could be a tag for this purpose, and 
all of Robert's design would still work, so it's OK, right?


There's a more complex solution - I think - where we create a 
temporary network for each macvtap interface a machine's going to use, 
with a name based on the instance UUID and port number, and containing 
the device to map. Before starting the migration we would create a 
replacement network containing only the new device on the target host; 
migration would find the network from the name in the libvirt.xml, and 
the content of that network would behave identically.  We'd be 
creating libvirt networks on the fly and a lot more of them, and we'd 
need decent cleanup code too ('when freeing a PCI device, delete any 
network it's a member of'), so it all becomes a lot more hairy.

--
Ian.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Vishvananda Ishaya
This is probably more of a usage question, but I will go ahead and answer it.

If you are writing to the root drive you may need to run the sync command a few 
times to make sure that the data has been flushed to disk before you kick off 
the migration.

The confirm resize step should be deleting the old data, but there may be a bug 
in the LVM backend if this isn't happening. Live (block) migration will probably 
be a bit more intuitive.

Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:

 I think this qualifies as a development question but please let me know if I 
 am wrong.
 
 I have been trying to test instance migration in devstack by setting up a 
 multi-node devstack following directions at 
 http://devstack.org/guides/multinode-lab.html. I tested that indeed there 
 were multiple availability zones and that it was possible to create instances 
 in each. Then I tried migrating an instance from one compute node to another 
 using the Horizon interface (I could not find a way to confirm migration, 
 which is a necessary step, from the command line). I created a test file on 
 the instance's ephemeral disk, before migrating it, to verify that the data 
 was moved to the destination compute node. After migration, I observed an 
 instance with the same id active on the destination node but the test file 
 was not present.
 
 Perhaps I misunderstand how migration is supposed to work but I expected that 
 the data on the ephemeral disk would be migrated with the instance. I suppose 
 it could take some time for the ephemeral disk to be copied but then I would 
 not expect the instance to become active on the destination node before the 
 copy operation was complete.
 
 I also noticed that the ephemeral disk on the source compute node was not 
 removed after the instance was migrated, although, the instance directory 
 was. Nor was the disk removed after the instance was destroyed. I was using 
 LVM backend for my tests. 
 
 I can provide more information about my setup but I just wanted to check 
 whether I was doing (or expecting) something obviously stupid.
 
 Thank you,
 Dan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-16 Thread Jaromir Coufal

On 2014/14/01 21:35, Vitaly Kramskikh wrote:

Hi Jaromir,

2014/1/13 Jaromir Coufal jcou...@redhat.com

So what we can do at the moment (until there is some way to specify
which node to remove) is to inform the user, at the end, which nodes
were removed... at least.

In the future, I'd like to give the user both options - either just
decrease the number and let the system decide which nodes are going
to be removed (but at least inform the user in advance which nodes
are the chosen ones), or let the user choose by himself.


I think the functionality to choose which node is allocated for each
role is needed too. I couldn't tell from the wireframes how/whether it
is possible.

For example, I have 3 racks. I want to allocate 1 node from each rack
for Controller and make other nodes Computes and then make each rack a
separate availability zone. In this case I need to specify which exact
node will become Controller.

Is it possible to do this?

Hi Vitaly,

this is an excellent question, and I was raising similar concerns before. I 
believe I managed to get to a compromise on how we can achieve such a 
use case.


In Node Profile, you are able to specify HW specs as well as node tags 
(both should already be available in Ironic). So if the user has tags well 
set up (for example, marking desired nodes with a specific tag), we should be 
able to re-use that tag in the Node Profile definition of the Controller role 
and during the scheduling process.


I hope that in time we will be able to get to a smoother experience ;)

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread NOTSU Arata
Hello,

I'm trying to add a new configuration option for Neutron OVS agent. Although 
I've submitted a patch and it is being reviewed [1], I'm posting to this 
mailing list seeking opinion from a wider range.

At present, when you deploy an environment using Neutron + OVS + GRE/VXLAN, you 
have to set local_ip for tunnelling in neutron agent config (typically 
ovs_neutron_plugin.ini). As each host has a different local_ip, preparing and 
maintaining the config files is cumbersome work.

So, I would like to introduce automatic configuration of local_ip in Neutron. 
Although such management should arguably be done by some management system, 
having this feature in Neutron could also be helpful to deployment systems, as 
it would reduce the need for those systems to implement such a feature on 
their own.

Anyway, with this feature, instead of setting an actual IP address as local_ip, 
you set some options (criteria) for choosing an IP address suitable for 
local_ip among the IP addresses assigned to the host. At runtime, the OVS agent 
will choose an IP address according to the criteria, with the same effect as if 
you had set local_ip to that address by hand.

The question is, what criteria are appropriate for the purpose. The criteria 
mentioned so far in the review are listed below (a rough sketch of how one of 
them might be implemented follows the list):

1. assigned to the interface attached to default gateway
2. being in the specified network (CIDR)
3. assigned to the specified interface
   (1 can be considered a special case of 3)
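
To make criterion 2 concrete, here is a minimal sketch - not the proposed
patch - assuming the host's candidate addresses are gathered elsewhere and
that netaddr (already a Neutron dependency) is available:

from netaddr import IPAddress, IPNetwork

def pick_local_ip(host_addresses, cidr):
    # Return the first host address that falls inside the configured CIDR.
    network = IPNetwork(cidr)
    for addr in host_addresses:
        if IPAddress(addr) in network:
            return addr
    raise ValueError("no address on this host matches %s" % cidr)

# e.g. addresses collected from the host's interfaces by other means:
pick_local_ip(['10.0.0.5', '192.168.10.7'], '192.168.10.0/24')  # '192.168.10.7'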

Any comment would be appreciated.

Thanks,
Arata


[1] https://review.openstack.org/#/c/64253/



smime.p7s
Description: S/MIME Cryptographic Signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-16 Thread Oleg Gelbukh
On Thu, Jan 16, 2014 at 2:21 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Wed, 2014-01-15 at 21:46 +, Hugh Saunders wrote:
  On 15 January 2014 21:14, Ilya Kharin ikha...@mirantis.com wrote:
 
  Hi, guys,
 
  In Rally there is an entity that represents installed instance
  of OpenStack.
  What you think about a proper name for the entity? (a
  Deployment, a Cluster, an Installation, an Instance or
  something else)
 
  I vote for Deployment.


Doesn't it sound a bit weird to deploy a Deployment? Otherwise, it does not
really matter how it is called as long as the naming is consistent.

I have another question. Should we think about separation of Deployment and
Endpoint entities in API? Deployment is an object managed by deployment
engine, while Endpoint can refer to existing installation which has nothing
to do with deployment engine. It means that different sets of operations
are applicable to those entities. What do you think?

--
Best regards,
Oleg Gelbukh



 ++

 Best,
 -jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Meeting time congestion

2014-01-16 Thread Julien Danjou
On Wed, Jan 15 2014, Joe Gordon wrote:

 * Python3 - 16-May-2013
Bi-Weekly (every other week) meetings on Thursdays at 1600 UTC

Yep, I confirm this one's dead for now. I've edited the details on the
wiki.

Thanks Joe.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread Irena Berezovsky
Ian,
Thank you for putting the ongoing specification discussion in writing.
I have added a few comments on the Google doc [1].

As for live migration support, this can also be done without using a libvirt 
network. It is not very elegant, but it works: rename the network interface of 
the PCI device to some logical name, say one based on the neutron port UUID, 
and put that into the interface XML. For example, if the PCI device's network 
interface name is eth8 and the neutron port UUID is 
02bc4aec-b4f4-436f-b651-024, then rename it to something like 'eth02bc4aec-b4'. 
The interface XML will look like this:

  ...
  <interface type='direct'>
    <mac address='fa:16:3e:46:d3:e8'/>
    <source dev='eth02bc4aec-b4' mode='passthrough'/>
    <target dev='macvtap0'/>
    <model type='virtio'/>
    <alias name='net0'/>
    <address type='pci' domain='0x' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
  ...

[1] 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#heading=h.308b0wqn1zde

BR,
Irena
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 16, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

To clarify a couple of Robert's points, since we had a conversation earlier:
On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
  ---  do we agree that BDF address (or device id, whatever you call it), and 
node id shouldn't be used as attributes in defining a PCI flavor?

Note that the current spec doesn't actually exclude it as an option.  It's just 
an unwise thing to do.  In theory, you could elect to define your flavors using 
the BDF attribute but determining 'the card in this slot is equivalent to all 
the other cards in the same slot in other machines' is probably not the best 
idea...  We could lock it out as an option or we could just assume that 
administrators wouldn't be daft enough to try.
* the compute node needs to know the PCI flavor. [...]
  - to support live migration, we need to use it to create 
network xml

I didn't understand this at first and it took me a while to get what Robert 
meant here.

This is based on Robert's current code for macvtap based live migration.  The 
issue is that if you wish to migrate a VM and it's tied to a physical 
interface, you can't guarantee that the same physical interface is going to be 
used on the target machine, but at the same time you can't change the 
libvirt.xml as it comes over with the migrating machine.  The answer is to 
define a network and refer out to it from libvirt.xml.  In Robert's current 
code he's using the group name of the PCI devices to create a network 
containing the list of equivalent devices (those in the group) that can be 
macvtapped.  Thus when the host migrates it will find another, equivalent, 
interface.  This falls over in the use case under consideration where a device 
can be mapped using more than one flavor, so we have to discard the use case or 
rethink the implementation.

There's a more complex solution - I think - where we create a temporary network 
for each macvtap interface a machine's going to use, with a name based on the 
instance UUID and port number, and containing the device to map.  Before 
starting the migration we would create a replacement network containing only 
the new device on the target host; migration would find the network from the 
name in the libvirt.xml, and the content of that network would behave 
identically.  We'd be creating libvirt networks on the fly and a lot more of 
them, and we'd need decent cleanup code too ('when freeing a PCI device, delete 
any network it's a member of'), so it all becomes a lot more hairy.
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Robert Collins
On 16 January 2014 21:41, NOTSU Arata no...@virtualtech.jp wrote:
 Hello,

 I'm trying to add a new configuration option for Neutron OVS agent. Although 
 I've submitted a patch and it is being reviewed [1], I'm posting to this 
 mailing list seeking opinion from a wider range.

 At present, when you deploy an environment using Neutron + OVS + GRE/VXLAN, 
 you have to set local_ip for tunnelling in neutron agent config (typically 
 ovs_neutron_plugin.ini). As each host has different local_ip, preparing and 
 maintaining the config files is a cumbersome work.

It's fully automated in all the deployment systems I've looked at. I
appreciate the concern for deployers, but this is a solved problem for
us IMO.

 Anyway, with this feature, instead of setting an actual IP address to 
 local_ip, you set some options (criteria) for choosing an IP address suitable 
 for local_ip among IP addresses assigned to the host. At runtime, OVS agent 
 will choose an IP address according to the criteria. You will get the same 
 effect as you set local_ip=the IP address by hand.

 The question is, what criteria is appropriate for the purpose. The criteria 
 being mentioned so far in the review are:

 1. assigned to the interface attached to default gateway
 2. being in the specified network (CIDR)
 3. assigned to the specified interface
(1 can be considered a special case of 3)

I don't think 1 is a special case of 3 - interface based connections
are dependent on physical wiring,

How about 4. Send a few packets with a nonce in them to any of the
already meshed nodes, and those nodes can report what ip they
originated from.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Robert Collins
On 16 January 2014 22:51, Robert Collins robe...@robertcollins.net wrote:

 I don't think 1 is a special case of 3 - interface based connections
 are dependent on physical wiring,

 How about 4. Send a few packets with a nonce in them to any of the
 already meshed nodes, and those nodes can report what ip they
 originated from.

Oh, and 5: do an 'ip route get <ip of existing mesh node>', which will
give you output like:

192.168.3.1 via 192.168.1.1 dev wlan0  src 192.168.1.17
    cache

The src is the src address the local machine would be outputting from.
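
As a rough sketch (not part of the patch), the agent could do the same lookup
by shelling out to iproute2 and parsing the src field - assuming a Linux host
and that one peer address is already known:

import subprocess

def local_ip_for_peer(peer_ip):
    # Ask the kernel which source address it would use to reach the peer.
    out = subprocess.check_output(['ip', 'route', 'get', peer_ip])
    fields = out.decode().split()
    return fields[fields.index('src') + 1]

local_ip_for_peer('192.168.3.1')  # '192.168.1.17' in the example above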

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-16 Thread Ilya Kharin
On 16 Jan 2014, at 12:57, Oleg Gelbukh ogelb...@mirantis.com wrote:

 
 
 On Thu, Jan 16, 2014 at 2:21 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Wed, 2014-01-15 at 21:46 +, Hugh Saunders wrote:
  On 15 January 2014 21:14, Ilya Kharin ikha...@mirantis.com wrote:
 
  Hi, guys,
 
  In Rally there is an entity that represents installed instance
  of OpenStack.
  What you think about a proper name for the entity? (a
  Deployment, a Cluster, an Installation, an Instance or
  something else)
 
  I vote for Deployment.
 
 Doesn't it sound a bit weird to deploy a Deployment? Otherwise, it does not 
 really matter how it is called as long as the naming is consistent.
 
 I have another question. Shoud we think about separation of Deployment and 
 Endpoint entities in API? Deployment is an object managed by deployment 
 engine, while Endpoint can refer to existing installation which has nothing 
 to do with deployment engine. It means that different sets of operations are 
 applicable to those entities. What do you think?

Yep, you are right. It's very useful, because currently there is a DummyEngine 
which does nothing and just returns an endpoint. So an Endpoint should be an 
entity that represents an installed OpenStack and contains an endpoint and 
credentials. In this case the deployment process should be changed: as a result 
of the deployment process an instance of an Endpoint will be created, and to 
start a Task an instance of an Endpoint should be passed.
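
A rough sketch of the split being discussed (class and attribute names are
made up for illustration, not Rally's actual models):

class Endpoint(object):
    # An installed OpenStack we can talk to: auth URL plus credentials.
    def __init__(self, auth_url, username, password, tenant_name):
        self.auth_url = auth_url
        self.username = username
        self.password = password
        self.tenant_name = tenant_name

class Deployment(object):
    # Managed by a deployment engine; deploying it yields an Endpoint.
    def __init__(self, engine, config):
        self.engine = engine
        self.config = config

    def deploy(self):
        # A DummyEngine would simply return a pre-existing Endpoint here.
        return self.engine.deploy(self.config)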

 
 --
 Best regards,
 Oleg Gelbukh
  
 
 ++
 
 Best,
 -jay
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Docker and TripleO

2014-01-16 Thread Jaromir Coufal

On 2014/01/01 03:35, Robert Collins wrote:

So, we've spoken about using containers on baremetal - e.g. the lxc
provider - in the past, and with the [righteously deserved] noise
Docker is receiving, I think we need to have a short
expectation-setting discussion.

Previously we've said that deploying machines to deploy containers to
deploy OpenStack was overly meta - I stand by it being strictly
unnecessary, but since Docker seems to have gotten a really good sweet
spot together, I think we're going to want to revisit those
discussions.

However, I think we should do so no sooner than 6 months, and probably
more like a year out.

I say 6-12 months because:
  - Docker currently describes itself as 'not for production use'
  - It's really an optimisation from our perspective
  - We need to ship a production ready version of TripleO ASAP, and I
think retooling would delay us more than it would benefit us.
  - There are going to be some nasty bootstrapping issues - we have to
deploy the bare metal substrate and update it in all cases anyway
- And I think pushing ahead with (any) container without those
resolved is unwise
- because our goal as always has to be to push the necessary
support into the rest of OpenStack, *not* as a TripleO unique
facility.

This all ties into other threads that have been raised about future
architectures we could use: I think we want to evolve to have better
flexibility and performance, but let's get a v1 minimal but functional
- HA, scalable, usable - version in place before we advance.

-Rob


+1 especially to the last two paragraphs.

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Augmenting openstack_dashboard settings and possible horizon bug

2014-01-16 Thread Timur Sufiev
Radomir,

it looks interesting indeed. I think Murano could use it if several
additional parameters were added. I will submit a patch with my ideas a bit
later.

One thing that seemed tricky to me in your patchset is determining which
dashboard will actually be the default one, but I have no clue yet how
it could be made simpler using a pluggable architecture.


On Wed, Jan 15, 2014 at 6:57 PM, Radomir Dopieralski openst...@sheep.art.pl
 wrote:

 On 15/01/14 15:30, Timur Sufiev wrote:
  Recently I've decided to fix the situation with Murano's dashboard and move
  all Murano-specific django settings into a separate file (previously
  they were appended to
  /usr/share/openstack-dashboard/openstack_dashboard/settings.py). But as
  I understood, /etc/openstack_dashboard/local_settings.py is for customization
  by admins and is also distro-specific - so I couldn't use it for
  Murano's dashboard customization.

 [snip]

  2. What is the sensible approach for customizing settings for some
  Horizon's dashboard in that case?

 We recently added a way for dashboards to have (some) of their
 configuration provided in separate files, maybe that would be
 helpful for Murano?

 The patch is https://review.openstack.org/#/c/56367/

 We can add more settings that can be changed, we just have to know what
 is needed.

 --
 Radomir Dopieralski


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Ian Wells
On 16 January 2014 10:51, Robert Collins robe...@robertcollins.net wrote:

  1. assigned to the interface attached to default gateway


Which you may not have, or may be on the wrong interface (if I'm setting up
a control node I usually have the default gateway on the interface with the
API endpoints, which I emphatically don't use for internal traffic like
tunnelling)


  2. being in the specified network (CIDR)

 3. assigned to the specified interface
 (1 can be considered a special case of 3)


Except that (1) and (2) specify a subnet and a single address, and an
interface in (3) can have multiple addresses.

How about 4. Send a few packets with a nonce in them to any of the
 already meshed nodes, and those nodes can report what ip they
 originated from.


Which doesn't work unless you've discovered the IP address on another
machine - chicken, meet egg...

I appreciate the effort but I've seen people try this repeatedly and it's a
much harder problem than it appears to be.  There's no easy way, for a
given machine, to guess which interface you should be using.  Robert's
suggestion of a broadcast is actually the best idea I've seen so far - you
could, for instance, use MDNS to work out where the control node is and
which interface is which when you add a compute node, which would certainly
be elegant - but I'm concerned about taking a stab in the dark at an
important config item when there really isn't a good way of working it out.

Sorry,
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jaromir Coufal

Hi Jay,

Awesome. I'll just add a quick note inline (and sorry for the small delay):

On 2014/09/01 18:22, Jay Dobies wrote:

I'm trying to hash out where data will live for Tuskar (both long term
and for its Icehouse deliverables). Based on the expectations for
Icehouse (a combination of the wireframes and what's in Tuskar client's
api.py), we have the following concepts:


[snip]


= Resource Categories =

[snip]


== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.
Based on the latest discussions - instance count is a bit tricky, but it 
should be specific to the Node Profile if we care which hardware we want in 
play.


Later, we can add the possibility to enter just a number of instances for the 
whole resource category and let the system decide which node profile to 
deploy. But I believe this is a future direction.



These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the waiting to be deployed aren't
relevant for now.

+1



== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.

I think we were discussing keeping track of the image's name there.

Thanks for this great work
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread Ian Wells
On 16 January 2014 09:07, yongli he yongli...@intel.com wrote:

  On 2014-01-16 08:28, Ian Wells wrote:

 This is based on Robert's current code for macvtap based live migration.
 The issue is that if you wish to migrate a VM and it's tied to a physical
 interface, you can't guarantee that the same physical interface is going to
 be used on the target machine, but at the same time you can't change the
 libvirt.xml as it comes over with the migrating machine.  The answer is to
 define a network and refer out to it from libvirt.xml.  In Robert's current
 code he's using the group name of the PCI devices to create a network
 containing the list of equivalent devices (those in the group) that can be
 macvtapped.  Thus when the host migrates it will find another, equivalent,
 interface.  This falls over in the use case under

  but with the flavor we defined, the group could be a tag for this purpose,
  and all of Robert's design would still work, so it's OK, right?


Well, you could make a label up consisting of the values of the attributes
in the group, but since a flavor can encompass multiple groups (for
instance, I group by device and vendor and then I use two device types in
my flavor) this still doesn't work.  Irena's solution does, though.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-16 Thread Oleg Gelbukh
On Wed, Jan 15, 2014 at 10:25 PM, Alan Kavanagh
alan.kavan...@ericsson.com wrote:

  Cheers Guys



 So what would you recommend, Oleg? Yes, it's for a Linux system.


Alan,

The approach proposed below (/dev/zero) is probably better, as it performs at 
around 60 MB/s. Another approach I've seen flying around is to generate a 
random string and use its hashes for dd. There are some one-liners out there 
which do that with openssl; just one example:

openssl enc -aes-256-ctr -pass pass:$(dd if=/dev/urandom bs=128
count=1 2>/dev/null | base64) -nosalt < /dev/zero > randomfile.bin

Hope this helps.
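
For completeness, the zero-fill idea expressed in Python (illustration only;
the device path is a placeholder, and pointing this at a real disk destroys
its data):

# Rough Python equivalent of "dd if=/dev/zero of=<device>".
DEVICE = '/dev/example-disk'            # placeholder path
CHUNK = b'\x00' * (4 * 1024 * 1024)     # 4 MiB of zeros per write

def zero_fill(path):
    with open(path, 'wb', 0) as dev:    # unbuffered writes
        try:
            while True:
                dev.write(CHUNK)        # keep writing zeros...
        except IOError:                 # ...until the device is full (ENOSPC)
            pass

zero_fill(DEVICE)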

--
Best regards,
Oleg Gelbukh


 /Alan



 *From:* Oleg Gelbukh [mailto:ogelb...@mirantis.com]
 *Sent:* January-15-14 10:30 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [ironic] Disk Eraser





 On Wed, Jan 15, 2014 at 6:42 PM, Alexei Kornienko 
 alexei.kornie...@gmail.com wrote:

 If you are working on linux system following can help you:

 dd if=/dev/urandom of=/dev/sda bs=4k



 I would not recommend that as /dev/urandom is real slow (10-15 MB/s).



 --

 Best regards,

 Oleg Gelbukh




 :)
 Best Regards,



 On 01/15/2014 04:31 PM, Alan Kavanagh wrote:

   Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk
 erasure/data destruction software. I have so far looked at DBAN and disk
 scrubber and was wondering if ironic team have some better recommendations?



 BR

 Alan





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jaromir Coufal


On 2014/12/01 20:40, Jay Pipes wrote:

On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:

So, it's not as simple as it may initially seem :)


Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.


That was my understanding as well. The existing Tuskar domain model was
largely placeholder/proof of concept and didn't necessarily reflect
exactly what was desired/expected.


Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that tooling will be used in enterprise deployments is diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


Perfect write up, Jay.

I can second these needs based on talks I had previously.

The goal is primarily to support enterprise deployments, and they work 
with racks, so all of that information, such as location, power supply, 
etc., is important.


Though this is a pretty challenging area and we need to start somewhere. 
As a proof of concept, Tuskar tried to provide similar views; then we ran 
into reality: OpenStack has no strong support for racks at the moment. 
Since we want to deliver a working deployment solution ASAP and enhance it 
over time, we started with the currently available features.


We are not giving up on racks entirely; they are just pushed back a bit, 
since there is no real support in OpenStack yet. But to deliver more 
optimistic news: as of the last OpenStack summit, Ironic intends to work 
with all the rack information (location, power supply, ...). So once 
Ironic contains all of that information, we can happily start providing 
such capabilities for deployment setups, hardware overviews, etc.


Having said that, for Icehouse I pushed for Node Tags to get in. It is 
not the best experience, but using Node Tags we can actually support 
various use cases for the user (by having them tag nodes manually for now).


Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Let's move to Alembic

2014-01-16 Thread Roman Podoliaka
Hi all,

I'm glad you've decided to drop sqlalchemy-migrate support :)

As for porting Ironic to Alembic migrations, I believe Dmitriy Shulyak
already uploaded a proof-of-concept patch to Ironic before, but it was
abandoned. Adding Dmitriy to this thread so he is notified and can restore
his patch and continue his work.

Thanks,
Roman

On Thu, Jan 16, 2014 at 5:59 AM, Devananda van der Veen
devananda@gmail.com wrote:
 Hi all,

 Some months back, there was discussion of moving Ironic to use Alembic instead
 of sqlalchemy-migrate. At that time, I was much more interested in getting the
 framework together than I was in restructuring our database migrations, and
 what we had was sufficient to get us off the ground.

 Now that the plumbing is coming together, and we're looking hopefully at
 doing a release this cycle, I'd like to see if anyone wants to pick up the
 torch and switch our db migrations to use alembic. Ideally, let's do this
 between the I2 and I3 milestones.

 I am aware of the work adding a transition-to-alembic to Oslo:
 https://review.openstack.org/#/c/59433/

 I feel like we don't necessarily need to wait for that to land. There's a
 lot less history in our migrations than in, say, Nova; we don't yet support
 down-migrations anyway; and there aren't any prior releases of the project
 which folks could upgrade from.

 Thoughts?

 -Deva
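
For anyone picking this up, a minimal Alembic migration script looks roughly
like the following (the revision ids and column are made up for illustration,
not actual Ironic schema):

# Hypothetical example of the Alembic migration format.
from alembic import op
import sqlalchemy as sa

# revision identifiers used by Alembic (made-up values)
revision = '1a2b3c4d5e6f'
down_revision = None

def upgrade():
    op.add_column('nodes', sa.Column('example_field', sa.String(255),
                                     nullable=True))

def downgrade():
    op.drop_column('nodes', 'example_field')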



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-16 Thread Sullivan, Jon Paul
 -Original Message-
 From: Kyle Mestery [mailto:mest...@siliconloons.com]
 Sent: 15 January 2014 22:53
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] [third-party-testing] Sharing
 information
 
 FYI, here [1] are the meeting logs from today’s meeting.
 
 A couple of things have become apparent here:
 
 1. No one has a working Neutron 3rd party testing rig yet which is
 voting
 consistently. If I’ve missed something, please, someone correct me.
 2. People are still hung on issues around Jenkins/gerrit integration.

This issue could be very easily resolved if people were to use Jenkins Job 
Builder [2] for the creation of their Jenkins testing jobs.  This would allow 
the reuse of simple macros already in existence to guarantee correct 
configuration of Jenkins jobs at 3rd party sites.  This would also allow simple 
reuse of the code used by the infra team to create the openstack review and 
gate jobs, ensuring 3rd party testers can generate the correct code from the 
gerrit change and also publish results back in a standard way.

I can't recommend Jenkins Job Builder highly enough if you use Jenkins.

[2] https://github.com/openstack-infra/jenkins-job-builder

 3. There are issues with devstack failing, but these seem to be Neutron
 plugin specific. I’ve encouraged people to reach out on both the
 #openstack-neutron and #openstack-qa channels with questions.
 4. There is still some confusion on what tests to run. I think this is
 likely to be plugin dependent.
 5. There is some confusion around what version of devstack to use.
 My assumption has always been upstream master.
 
 Another general issue which I wanted to highlight here, which has been
 brought up before, is that for companies/projects proposing plugins,
 MechanismDrivers, and/or service plugins you really need someone active
 on both the mailing list as well as the IRC channels. This will help if
 your testing rig has issues, or if people need help understanding why
 your test setup is failing with their patch.
 
 So, that’s the Neutron 3rd party testing update as we near the deadline
 next week.
 
 Thanks!
 Kyle
 
 [1]
 http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2
 014/networking_third_party_testing.2014-01-15-22.00.log.html
 
 On Jan 14, 2014, at 10:34 AM, Kyle Mestery mest...@siliconloons.com
 wrote:
 
  Given the Tempest Sprint in Montreal, I still think we should have
  this meeting on IRC. So, lets nail down the time as 2200 UTC on
  #openstack-meeting-alt for tomorrow. If you can’t make it, I’ll send
 the meeting logs out.
 
  Thanks, look forward to seeing people there tomorrow!
 
  Kyle
 
  On Jan 14, 2014, at 9:49 AM, Lucas Eznarriaga lu...@midokura.com
 wrote:
 
  Hi,
  I will also be available for a meeting tomorrow.
  @Mohammad, we are still working on our 3rd party testing setup so do
 not take Midokura CI Bot votes too seriously yet.
   So far I have followed the links on the etherpad to get the
  jenkins+gerrit trigger plugin working with the current setup; that's it,
  I haven't added anything else yet.
 
  Cheers,
  Lucas
 
 
 
  On Tue, Jan 14, 2014 at 3:55 PM, Edgar Magana emag...@plumgrid.com
 wrote:
  I like it and I am in favor.
  Some of us, will be in Montreal attending the sprint tempest session.
 Hopefully we can all take it from there.
 
  Edgar
 
  On Jan 14, 2014, at 6:31 AM, Kyle Mestery mest...@siliconloons.com
 wrote:
 
  Thanks for sending this note Mohammad. I am all in favor of another
  3rd party testing meeting on IRC. How about if we shoot for
  tomorrow, Wednesday the 15, at 2200 UTC? Please ack if that works
 for everyone.
 
  Thanks,
  Kyle
 
  On Jan 13, 2014, at 5:08 PM, Mohammad Banikazemi m...@us.ibm.com
 wrote:
 
  Hi everybody,
 
  I see that we already have at least two 3rd party testing setups
 (from Arista and Midokura) up and running. Noticed their votes on our
 newly submitted plugin.
  The etherpad which is to be used for sharing information about
 setting up 3rd party testing (as well as multi-node testing) [1] seems
 to have not been updated recently. Would those who have setup their 3rd
 party testing successfully be willing to share more information as to
 what they have done and possibly update the etherpad?
 
  Would it be of value to others if we have another IRC meeting to
 discuss this matter?
  (Kyle, I am sure you are busy so I took the liberty to send this
  note. Please let us know what you think.)
 
  Thanks,
 
  Mohammad
 
 
  [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
 
   Kyle Mestery wrote on 12/19/2013 09:17 AM: Apologies
  folks, I meant 2200 UTC Thursday. We'll still do the meeting today.
 
   From: Kyle Mestery mest...@siliconloons.com
   To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org,
   Date: 12/19/2013 09:17 AM
   Subject: Re: [openstack-dev] [neutron] 

[openstack-dev] [Keystone] Access-key like authentication with password-rotation

2014-01-16 Thread Tristan Cacqueray
Hi,

I'd like to check in on this authentication mechanism.
Keystone should have some kind of API key in order to prevent developers
from storing their credentials (username/password) in clear-text
configuration files.

There are two blueprints that could tackle this feature, yet they
are both in need of approval:

https://blueprints.launchpad.net/keystone/+spec/access-key-authentication
https://blueprints.launchpad.net/keystone/+spec/password-rotation


I believe the access-key-authentication blueprint has been superseded by
password-rotation. Meaning:
* The user creates a secondary password.
* He can use this new password to authenticate API requests
  with the credential_id + password.
* He won't be able to log in to Horizon, as it will try to authenticate
  with the user_id + password (Keystone will match those against the
  default_credential_id).
* API requests like password changes should be denied if the user didn't
  use his default_credential_id.

Did I get this right ?


Best regards,
Tristan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-16 Thread Jaromir Coufal


On 2014/15/01 22:33, Jay Dobies wrote:

On 01/15/2014 08:07 AM, James Slagle wrote:

[snip]


I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/concensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.


I didn't mean to imply that Tuskar would be building images, just
kicking them off.

Definitely not at the moment.


As for whether or not it should, that's an interesting question. You and
I are both on the same page on not having a generic image and having the
services be configured outside of that, so I'll ignore that idea for now.

I've always thought of Tuskar as providing the user with everything
they'd need. My gut reaction is that I don't like the idea of saying
they have to go through a separate step of creating the image and then
configuring the resource category in Tuskar and attaching the image to it.

That said, I suspect my gut is wrong, or at very least not in line with
the OpenStack way of thinking.
Well, I think you are right. We should be able to provide as much as 
possible, but I don't think that Tuskar has to do everything. An image builder 
would be an amazing feature, and I don't think it has to be Tuskar-UI's 
business. There can be a UI separate from Tuskar, under the Horizon umbrella, 
dealing only with building images. I think there are enough reasons for it to 
become a separate project in the future.



Ok, so given that frame of reference, I'll reply inline:

On Mon, Jan 13, 2014 at 11:18 AM, Jay Dobies jason.dob...@redhat.com
wrote:

I'm pulling this particular discussion point out of the Wireframes
thread so
it doesn't get lost in the replies.

= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old
version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?


I would think Roles need to be versioned, and the deployed version
recorded as Heat metadata/attribute. When you make a change to a Role,
it's a new version. That way you could easily see what's been
deployed, and if there's a newer version of the Role to deploy.


+1, the more I've been thinking about this, the more I like it. We can't
assume changes will be immediately applied to all provisioned instances,
so we need some sort of record of what an instance was actually built
against.
Does it need to be a new version of the whole Resource Category? The image 
information might be part of the node, and we could simply display in the UI 
that a certain number of nodes are running on a different image than the one 
currently available.


= Regarding immediate changes =
For me, a Resource Category indicates the role of a node. It is defined by 
certain behavior, and I expect all the nodes to behave the same way.


If I change some settings of the role which change the behavior (set of 
services, config), I would expect all nodes to behave the same way - new 
nodes as well as old ones - so to me that implies an immediate change. 
Otherwise, if I expect old nodes to behave one way and new nodes another, I 
need to create another role.


On the other hand, if I only update the image's OS, packages, or whatever, but 
the behavior of the node remains the same (same set of services, same 
configuration), I do expect nodes with the old image to keep working alongside 
nodes with the new image, and to apply the upgrade when I want to.


[snip]


But there was also idea that there will be some generic image,
containing
all services, we would just configure which services to start. In
that case
we would need to version also this.


-1 to this.  I think we should stick with specialized images per role.
I replied on the wireframes thread, but I don't see how
enabling/disabling services in a prebuilt image should work. Plus, I
don't really think it fits with the TripleO model of having an image
created based on its specific role (I hate to use that term and
muddy the water; I mean it in the generic sense here).
It was not agreed which way to go. I agree that the first step should be to 
have specific images and not deal with enabling/disabling services. 
It's outside the Icehouse timeframe, but as a future direction, I am 
removing the enable/disable functionality from the wireframes.


[snip]


If the image is immediately created, what happens if the user tries to
change the resource category counts while it's still being generated?
That
question applies both if we automatically update existing nodes as
well as
if we don't and the user is just quick moving around the UI.
This should never happen. Once the Heat stack is updating, you shouldn't be 
able to apply another change.



What do we do with old images from previous configurations of 

Re: [openstack-dev] [ironic] Disk Eraser

2014-01-16 Thread Chris Jones
Hi

https://code.google.com/p/diskscrub/

If you need more than /dev/zero, scrub should be packaged in most distros and 
offers a choice of high grade algorithms.

Cheers,
--
Chris Jones

 On 15 Jan 2014, at 14:31, Alan Kavanagh alan.kavan...@ericsson.com wrote:
 
 Hi fellow OpenStackers
  
 Does anyone have any recommendations on open source tools for disk 
 erasure/data destruction software. I have so far looked at DBAN and disk 
 scrubber and was wondering if ironic team have some better recommendations?
  
 BR
 Alan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Chris Jones
Hi

Once a common library is in place, is there any intention to (or resistance 
against) collapsing the clients into a single project or even a single command 
(a la busybox)?

(I'm thinking reduced load for packagers, simpler installation for users, etc)

Cheers,
--
Chris Jones

 On 15 Jan 2014, at 19:37, Doug Hellmann doug.hellm...@dreamhost.com wrote:
 
 Several people have mentioned to me that they are interested in, or actively 
 working on, code related to a common client library -- something meant to 
 be reused directly as a basis for creating a common library for all of the 
 openstack clients to use. There's a blueprint [1] in oslo, and I believe the 
 keystone devs and unified CLI teams are probably interested in ensuring that 
 the resulting API ends up meeting all of our various requirements.
 
 If you're interested in this effort, please subscribe to the blueprint and 
 use that to coordinate efforts so we don't produce more than one common 
 library. ;-)
 
 Thanks,
 Doug
 
 
 [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS subteam meeting 16.01.2014

2014-01-16 Thread Eugene Nikanorov
Hi neutron and lbaas folks,

Let's meet today at 14:00 UTC in #openstack-meeting.
It's been a while since we had a quorum in the meeting.

There are a few items on our list that require discussion:
0) Third party testing
1) SSL extension
  - vendor extension framework
2) L7 rules
3) Loadbalancer instance

Please join!

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller


 On Jan 16, 2014, at 2:09 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 15/01/14 21:35 +, Jesse Noller wrote:
 
 On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote:
 
 Several people have mentioned to me that they are interested in, or 
 actively working on, code related to a common client library -- something 
 meant to be reused directly as a basis for creating a common library for 
 all of the openstack clients to use. There's a blueprint [1] in oslo, and I 
 believe the keystone devs and unified CLI teams are probably interested in 
 ensuring that the resulting API ends up meeting all of our various 
 requirements.
 
 If you're interested in this effort, please subscribe to the blueprint and 
 use that to coordinate efforts so we don't produce more than one common 
 library. ;-)
 
 Thanks,
 Doug
 
 
 [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2
 
 *raises hand*
 
 Me me!
 
 I’ve been talking to many contributors about the Developer Experience stuff 
 I emailed out prior to the holidays and I was starting blueprint work, but 
 this is a great pointer. I’m going to have to sync up with Alexei.
 
 I think solving this for openstack developers and maintainers as the 
 blueprint says is a big win in terms of code reuse / maintenance and 
 consistent but more so for *end-user developers* consuming openstack clouds.
 
 Some background - there’s some terminology mismatch but the rough idea is 
 the same:
 
 * A centralized “SDK” (Software Development Kit) would be built condensing 
 the common code and logic and operations into a single namespace.
 
 * This SDK would be able to be used by “downstream” CLIs - essentially the 
 CLIs become a specialized front end - and in some cases, only an argparse or 
 cliff front-end to the SDK methods located in the (for example) 
 openstack.client.api.compute
 
 * The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived 
 clients - all of the openstack.client.api.** classes would accept an Auth 
 object to delegate management / mocking of the Auth / service catalog stuff 
 to. This means developers building applications (say for example, horizon) 
 don’t need to worry about token/expired authentication/etc.
 
 * Simplify the dependency graph & code for the existing tools to enable 
 single binary installs (py2exe, py2app, etc) for end users of the command 
 line tools.
 
 Short version: if a developer wants to consume an openstack cloud; they would 
 have a single SDK with minimal dependencies and import from a single 
 namespace. An example application might look like:
 
 from openstack.api import AuthV2
 from openstack.api import ComputeV2
 
 myauth = AuthV2(…., connect=True)
 compute = ComputeV2(myauth)
 
 compute.list_flavors()
 
 I know this is an example but, could we leave the version out of the
 class name? Having something like:
 
 from openstack.api.v2 import Compute
 
   or
 
 from openstack.compute.v2 import Instance
 
 (just made that up)
 
 for marconi we're using the latter.

Definitely; it should be based on namespaces. 
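One way a version-in-the-namespace layout could pay off in practice (purely illustrative; the openstack.<service>.<version> package layout here is an assumption, not an agreed name): an application can resolve the right client module from a configured version string instead of hard-coding a versioned class name.

import importlib


def load_client_module(service, version):
    """Resolve a versioned client module from a namespace path.

    Assumes a hypothetical ``openstack.<service>.<version>`` package
    layout; switching API versions then only changes a configuration
    string, not the class names used by application code.
    """
    return importlib.import_module("openstack.%s.%s" % (service, version))


# e.g. compute = load_client_module("compute", "v2").Client(auth)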

 
 This greatly improves the developer experience both internal to openstack 
 and externally. Currently OpenStack has 22+ (counting stackforge) potential 
 libraries a developer may need to install to use a full deployment of 
 OpenStack:
 
 * python-keystoneclient (identity)
 * python-glanceclient (image)
 * python-novaclient (compute)
 * python-troveclient (database)
 * python-neutronclient (network)
 * python-ironicclient (bare metal)
 * python-heatclient (orchestration)
 * python-cinderclient (block storage)
 * python-ceilometerclient (telemetry, metrics & billing)
 * python-swiftclient (object storage)
 * python-savannaclient (big data)
 * python-openstackclient (meta client package)
 * python-marconiclient (queueing)
 * python-tuskarclient (tripleo / management)
 * python-melangeclient (dead)
 * python-barbicanclient (secrets)
 * python-solumclient (ALM)
 * python-muranoclient (application catalog)
 * python-manilaclient (shared filesystems)
 * python-libraclient (load balancers)
 * python-climateclient (reservations)
 * python-designateclient (Moniker/DNS)
 
 If you exclude the above and look on PyPI:
 
 On PyPi (client libraries/SDKs only, excluding the above - not maintained by 
 openstack):
 
 * hpcloud-auth-openstack 1.0
 * python-openstackclient 0.2.2
 * rackspace-auth-openstack 1.1
 * posthaste 0.2.2
 * pyrax 1.6.2
 * serverherald 0.0.1
 * warm 0.3.1
 * vaporize 0.3.2
 * swiftsc (https://github.com/mkouhei/swiftsc)
 * bookofnova 0.007
 * nova-adminclient 0.1.8
 * python-quantumclient 2.2.4.3
 * python-stackhelper 0.0.7.1.gcab1eb0
 * swift-bench 1.0
 * swiftly 1.12
 * txAWS 0.2.3
 * cfupload 0.5.1
 * python-reddwarfclient 0.1.2
 * python-automationclient 1.2.1
 * rackspace-glanceclient 0.9
 * rackspace-novaclient 1.4
 
 If you ignore PyPI and just want to install the base say - 7 services, each 
 one of the python-** clients has its own dependency 

Re: [openstack-dev] [rally] Naming of a deployment

2014-01-16 Thread Ilya Kharin
On 16 Jan 2014, at 13:54, Ilya Kharin ikha...@mirantis.com wrote:

 On 16 Jan 2014, at 12:57, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
 
 
 On Thu, Jan 16, 2014 at 2:21 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Wed, 2014-01-15 at 21:46 +, Hugh Saunders wrote:
  On 15 January 2014 21:14, Ilya Kharin ikha...@mirantis.com wrote:
 
  Hi, guys,
 
  In Rally there is an entity that represents installed instance
  of OpenStack.
  What you think about a proper name for the entity? (a
  Deployment, a Cluster, an Installation, an Instance or
  something else)
 
  I vote for Deployment.
 
 Doesn't it sound a bit weird to deploy a Deployment? Otherwise, it does not 
 really matter what it is called as long as the naming is consistent.
 
 I have another question. Should we think about separating the Deployment and 
 Endpoint entities in the API? A Deployment is an object managed by the deployment 
 engine, while an Endpoint can refer to an existing installation which has nothing 
 to do with the deployment engine. It means that different sets of operations are 
 applicable to those entities. What do you think?
 
 Yep, you are right. It's very useful because currently there is a DummyEngine 
 which does nothing, just returns an endpoint. So, an Endpoint should be an entity 
 that represents an installed OpenStack and contains an endpoint and credentials. 
 In this case the deployment process should be changed: as a result of the 
 deployment process, an instance of an Endpoint will be created. To start a Task, 
 an instance of an Endpoint should be passed.
Also, if we try to imagine this in examples, it can be represented like this:

1) In case of an existent cloud, only an Endpoint should be added:

   $ cat endpoint.json
   {
       "auth_url": "http://example.com:5000/v2.0",
       "username": "admin",
       "password": "secret",
       "tenant_name": "demo"
   }
   $ rally endpoint add endpoint.json

The last command shows the endpoint_id of the added endpoint.

2.1) In case when OpenStack should be deployed:

$ rally deployment create --filename=deployment.json

2.2) Wait until the status has the proper value (deploy-finished).

$ rally deployment list

In case of a successful deployment, an endpoint is automatically created 
for the deployment:

$ rally endpoint list

3) To start a task, an endpoint_id is required. There is one endpoint after step (1) 
or (2.2); use it:

$ rally task start --endpoint-id=endpoint_id --task task.json
 
 
 --
 Best regards,
 Oleg Gelbukh
  
 
 ++
 
 Best,
 -jay
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller


On Jan 16, 2014, at 5:42 AM, Chris Jones c...@tenshu.net wrote:

Hi

Once a common library is in place, is there any intention to (or resistance 
against) collapsing the clients into a single project or even a single command 
(a la busybox)?

(I'm thinking reduced load for packagers, simpler installation for users, etc)

Cheers,
--
Chris Jones

Based on the email I sent, ideally we'd expand the blueprint to include these 
details and speak to the PTLs for each project.

Ideally (IMO) yes - long term they would be collapsed into one or two packages 
to install. However, at the very least the code can be structured so that the core 
functionality lives in the common/single SDK backend and openstackcli, and each 
project CLI can derive its named/branded, project-specific front end from that.

For standalone installs of nova or swift, for example, it makes sense not to push 
users toward the entire openstackclient, but instead to guide them to a tool 
dedicated to that service.

There are pros and cons on both sides, meeting in the middle at least collapses 
the heavy lifting into the common code base.


On 15 Jan 2014, at 19:37, Doug Hellmann doug.hellm...@dreamhost.com wrote:

Several people have mentioned to me that they are interested in, or actively 
working on, code related to a common client library -- something meant to be 
reused directly as a basis for creating a common library for all of the 
openstack clients to use. There's a blueprint [1] in oslo, and I believe the 
keystone devs and unified CLI teams are probably interested in ensuring that 
the resulting API ends up meeting all of our various requirements.

If you're interested in this effort, please subscribe to the blueprint and use 
that to coordinate efforts so we don't produce more than one common library. ;-)

Thanks,
Doug


[1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Chmouel Boudjnah
On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:

 Once a common library is in place, is there any intention to (or
 resistance against) collapsing the clients into a single project or even a
 single command (a la busybox)?



that's what openstackclient is here for
https://github.com/openstack/python-openstackclient
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introduction: Rich Megginson - Designate project

2014-01-16 Thread John Dennis
On 01/15/2014 08:24 PM, Rich Megginson wrote:
 Hello.  My name is Rich Megginson.  I am a Red Hat employee interested 
 in working on Designate (DNSaaS), primarily in the areas of integration 
 with IPA DNS, DNSSEC, and authentication (Keystone).
 
 I've signed up for the openstack/launchpad/gerrit accounts.
 
 Be seeing you (online).

Welcome aboard Rich! Great to have an excellent developer with
tremendous experience join the crew.


-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-16 Thread Alexander Ignatov
Matthew,

I'm OK with the proposed solution. Some comments/thoughts below:

-
FIX - @rest.post_file('/plugins/<plugin_name>/<version>/convert-config/<name>')
- this is an RPC call, made only by a client to do input validation; 
move to POST /validations/plugins/:name/:version/check-config-import
-
AFAIR, this REST call was introduced not only for validation. 
The main idea was to create a method which converts a plugin-specific config 
for cluster creation into Savanna's cluster template [1]. So maybe we can change 
this REST call to /plugins/convert-config/<name> and include all needed fields 
in the data. Anyway, we need to hear the Hortonworks guys' opinion; currently 
only the HDP plugin implements this method.

--
REMOVE - @rest.put('/node-group-templates/<node_group_template_id>') - Not 
Implemented
REMOVE - @rest.put('/cluster-templates/<cluster_template_id>') - Not Implemented
--
Disagree with that. Samsung people did great job in both 
savanna/savanna-dashboard 
to make this implemented [2], [3]. We should leave and support these calls in 
savanna.

--
CONSIDER rename /jobs - /job-templates (consistent w/ cluster-templates & 
clusters)
CONSIDER renaming /job-executions to /jobs
---
Good idea!

--
FIX - @rest.get('/jobs/config-hints/<job_type>') - should move to 
GET /plugins/<plugin_name>/<plugin_version>, similar to get_node_processes 
and get_required_image_tags
--
Not sure if it should be plugin-specific right now. EDP uses it to show some 
configs to users in the dashboard; it's just a cosmetic thing. Also, when a user 
starts defining configs for a job, he might not have defined the cluster yet, 
and thus not the plugin to run this job. I think we should leave it as is, keep 
only abstract configs like the Mapper/Reducer class, and allow users to apply any 
key/value configs if needed.

-
CONSIDER REMOVING, MUST ALWAYS UPLOAD TO Swift FOR /job-binaries
-
Disagree. It was discussed before starting the EDP implementation that there are 
a lot of OpenStack installations which don't have Swift deployed, and the ability 
to run jobs using Savanna's internal DB is a good option in this case. But yes, 
Swift is preferred. Waiting for Trevor's and maybe Nadya's comments here under 
this section.


REMOVE - @rest.get('/job-executions/<job_execution_id>/refresh-status') - refresh 
and return status - GET should not side-effect, status is part of details and 
updated periodically, currently unused

This call goes to Oozie directly to ask it about the job status. It allows clients 
not to wait for the periodic task to update the JobExecution status in Savanna. 
The current GET asks for the JobExecution status from the Savanna DB. I think we 
can keep this call; it might be useful for external clients.


REMOVE - @rest.get('/job-executions/<job_execution_id>/cancel') - cancel 
job-execution - GET should not side-effect, currently unused; 
use DELETE /job-executions/<job_execution_id>

Disagree. We have to keep this call. This method stops the job executing on the 
Hadoop cluster but doesn't remove its related info from the Savanna DB; 
DELETE removes it completely.

[1] 
http://docs.openstack.org/developer/savanna/devref/plugin.spi.html#convert-config-plugin-name-version-template-name-cluster-template-create
[2] https://blueprints.launchpad.net/savanna/+spec/modifying-cluster-template
[3] https://blueprints.launchpad.net/savanna/+spec/modifying-node-group-template

Regards,
Alexander Ignatov



On 14 Jan 2014, at 21:24, Matthew Farrellee m...@redhat.com wrote:

 https://blueprints.launchpad.net/savanna/+spec/v2-api
 
 I've finished a review of the v1.0 and v1.1 APIs with an eye to making them 
 more consistent and RESTful.
 
 Please use this thread to comment on my suggestions for v1.0  v1.1, or to 
 make further suggestions.
 
 Best,
 
 
 matt
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread Sandhya Dasu (sadasu)
Hi Irena,
   Thanks for pointing out an alternative to the network XML solution for live 
migration. I am still not clear on the solution.

Some questions:

  1.  Where does the rename of the PCI device network interface name occur?
  2.  Can this rename be done for a VF? I think your example shows rename of a 
PF.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, January 16, 2014 4:43 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Ian,
Thank you for putting in writing the ongoing discussed specification.
I have added few comments on the Google doc [1].

As for live migration support, this can also be done without using a libvirt 
network. Not very elegant, but working: rename the interface of the PCI device 
to some logical name, let's say based on the neutron port UUID, and put it into 
the interface XML. For example, if the PCI device network interface name is eth8 
and the neutron port UUID is 02bc4aec-b4f4-436f-b651-024, then rename it to 
something like 'eth02bc4aec-b4'. The interface XML will look like this:

  ...
  <interface type='direct'>
    <mac address='fa:16:3e:46:d3:e8'/>
    <source dev='eth02bc4aec-b4' mode='passthrough'/>
    <target dev='macvtap0'/>
    <model type='virtio'/>
    <alias name='net0'/>
    <address type='pci' domain='0x' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
  ...

[1] 
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#heading=h.308b0wqn1zde

BR,
Irena
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 16, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

To clarify a couple of Robert's points, since we had a conversation earlier:
On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
  ---  do we agree that BDF address (or device id, whatever you call it), and 
node id shouldn't be used as attributes in defining a PCI flavor?

Note that the current spec doesn't actually exclude it as an option.  It's just 
an unwise thing to do.  In theory, you could elect to define your flavors using 
the BDF attribute but determining 'the card in this slot is equivalent to all 
the other cards in the same slot in other machines' is probably not the best 
idea...  We could lock it out as an option or we could just assume that 
administrators wouldn't be daft enough to try.
* the compute node needs to know the PCI flavor. [...]
  - to support live migration, we need to use it to create 
network xml

I didn't understand this at first and it took me a while to get what Robert 
meant here.

This is based on Robert's current code for macvtap based live migration.  The 
issue is that if you wish to migrate a VM and it's tied to a physical 
interface, you can't guarantee that the same physical interface is going to be 
used on the target machine, but at the same time you can't change the 
libvirt.xml as it comes over with the migrating machine.  The answer is to 
define a network and refer out to it from libvirt.xml.  In Robert's current 
code he's using the group name of the PCI devices to create a network 
containing the list of equivalent devices (those in the group) that can be 
macvtapped.  Thus when the host migrates it will find another, equivalent, 
interface.  This falls over in the use case under consideration where a device 
can be mapped using more than one flavor, so we have to discard the use case or 
rethink the implementation.

There's a more complex solution - I think - where we create a temporary network 
for each macvtap interface a machine's going to use, with a name based on the 
instance UUID and port number, and containing the device to map.  Before 
starting the migration we would create a replacement network containing only 
the new device on the target host; migration would find the network from the 
name in the libvirt.xml, and the content of that network would behave 
identically.  We'd be creating libvirt networks on the fly and a lot more of 
them, and we'd need decent cleanup code too ('when freeing a PCI device, delete 
any network it's a member of'), so it all becomes a lot more hairy.
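To make the "temporary network per macvtap interface" idea slightly more concrete, here is a rough sketch of how such a throwaway libvirt network definition might be generated (the naming scheme and the use of a passthrough device pool follow the description above; this is not working Nova code):

def build_migration_network(instance_uuid, port_index, host_dev):
    """Build a minimal libvirt <network> definition holding exactly one
    passthrough-capable device for a single instance/port.

    On the migration target, an identically named network would be
    created ahead of time, pointing at whatever device was allocated
    there, so the unchanged libvirt.xml still resolves correctly.
    """
    name = "inst-%s-port-%d" % (instance_uuid, port_index)
    xml = ("<network>\n"
           "  <name>%s</name>\n"
           "  <forward mode='passthrough'>\n"
           "    <interface dev='%s'/>\n"
           "  </forward>\n"
           "</network>\n") % (name, host_dev)
    return name, xml

# e.g. name, xml = build_migration_network(instance.uuid, 0, "eth8")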
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Meeting time congestion

2014-01-16 Thread Thierry Carrez
Thierry Carrez wrote:
 There are already conflicts as people pick times that appear to be free
 but are actually used every other week... so a third room is definitely
 wanted. We can go and create #openstack-meeting3 (other suggestions
 welcome) and then discuss a consistent naming scheme and/or if renaming
 is such a good idea.

#openstack-meeting-3 proposed at: https://review.openstack.org/67152
Will let you know if/when accepted and merged.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread CARVER, PAUL

Alan Kavanagh wrote: 

I posted a query to Ironic which is related to this discussion. My thinking 
was I want to ensure the case you note here: (1) a tenant can not read 
another tenant's disk; the next (2) was where in Ironic you provision a 
baremetal server that has an onboard disk as part of the blade provisioned to 
a given tenant-A. Then when tenant-A finishes his baremetal blade lease and 
that blade comes back into the pool and tenant-B comes along, I was asking 
what open source tools guarantee data destruction so that no ghost images or 
file retrieval is possible?

That is an excellent point. I think the needs of Ironic may be different from 
Cinder. As a volume manager, Cinder isn't actually putting the raw disk under 
the control of a tenant. If it can be assured (as is the case with NetApp 
and other storage vendor hardware) that fake all-zeros data is returned on a 
read-before-first-write of a chunk of disk space, then that's sufficient to 
address the case of some curious ne'er-do-well allocating volumes purely for 
the purpose of reading them to see what's left on them.

But with bare metal the whole physical disk is at the mercy of the tenant so 
you're right that it must be ensured that the none of the previous tenant's 
bits are left lying around to be snooped on.

But I still think an *option* of wipe=none may be desirable, because a cautious 
client might well take it into their own hands to wipe the disk before 
releasing it (and perhaps encrypt it as well). In that case, always doing an 
additional wipe is just more disk I/O for no real benefit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread CARVER, PAUL
Clint Byrum wrote:

Is that really a path worth going down, given that tenant-A could just
drop evil firmware in any number of places, and thus all tenants afterward
are owned anyway?

I think a change of subject line is in order for this topic (assuming it hasn't 
been discussed in sufficient depth already). I propose [Ironic] Evil Firmware 
but I didn't change it on this message in case folks interested in this thread 
aren't reading Ironic threads.

Ensuring clean firmware is definitely something Ironic needs to account for. 
Unless you're intending to say that multi-tenant bare metal is a dead end that 
shouldn't be done at all.

As long as anyone is considering Ironic and bare metal in general as a viable 
project and service it is critically important that people are focused on how 
to ensure that a server released by one tenant is clean before being provided 
to another tenant.

It doesn't even have to be evil firmware. Simply providing a tenant with a 
server where the previous tenant screwed up a firmware update or messed with 
BIOS settings or whatever is a problem. If you're going to lease bare metal out 
on a short term basis you've GOT to have some sort of QC to ensure that when 
the hardware is reused for another tenant it's as good as new.

If not, it will be all too common for a tenant to receive a bare metal server 
that's been screwed up by a previous tenant through incompetence as much as 
through maliciousness.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-01-16 Thread Hugh O. Brock
On Thu, Jan 16, 2014 at 01:50:00AM +0100, Jaromir Coufal wrote:
 Hi folks,
 
 thanks everybody for feedback. Based on that I updated wireframes
 and tried to provide a minimum scope for Icehouse timeframe.
 
 http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf
 
 Hopefully we are able to deliver described set of features. But if
 you find something what is missing which is critical for the first
 release (or that we are implementing a feature which should not have
 such high priority), please speak up now.
 
 The wireframes are very close to implementation. In time, there will
 appear more views and we will see if we can get them in as well.
 
 Thanks all for participation
 -- Jarda
 

These look great Jarda, I feel like things are coming together here.

--Hugh

-- 
== Hugh Brock, hbr...@redhat.com   ==
== Senior Engineering Manager, Cloud Engineering   ==
== Tuskar: Elastic Scaling for OpenStack   ==
== http://github.com/tuskar==

I know that you believe you understand what you think I said, but I’m
not sure you realize that what you heard is not what I meant.
--Robert McCloskey

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hat Tip to fungi

2014-01-16 Thread Anita Kuno
Thank you, fungi.

You have kept openstack-infra running for the last 2 weeks as the sole
plate-spinner whilst the rest of us were conferencing, working on the
gerrit upgrade or getting our laptop stolen.

You spun up and configured two new Jenkinses (Jenkinsii?) and then dealt
with the consequences of one expansion of our system slowing down
another. With Jim's help from afar, Zuul is now on a faster server. [0]

All this while dealing with the everyday business of keeping -infra
operating.

I am so grateful for all you do.

I tip my hat to you, sir.

Anita.


[0] My sense is once Jim is back from vacation time he will provide a
detailed report on these changes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Isaku Yamahata
On Thu, Jan 16, 2014 at 10:53:11PM +1300,
Robert Collins robe...@robertcollins.net wrote:

 On 16 January 2014 22:51, Robert Collins robe...@robertcollins.net wrote:
 
  I don't think 1 is a special case of 3 - interface based connections
  are dependent on physical wiring,
 
  How about 4. Send a few packets with a nonce in them to any of the
  already meshed nodes, and those nodes can report what ip they
  originated from.
 
 Oh, and 5: do an 'ip route get ip of existing mesh node' which will
 give you output like:
 Press ENTER or type command to continue
 192.168.3.1 via 192.168.1.1 dev wlan0  src 192.168.1.17
 cache

6: run a command which is specified in the config file and use its output.

Then arbitrary logic can be used by writing a shell script or whatever,
without modifying Neutron.
Some samples which implement options 1-5 can be put in the repo.
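A minimal sketch of what option 6 might look like on the agent side (the option names here are illustrative, not real Neutron config keys): run whatever command the deployer configured and treat its output as the tunnel endpoint address.

import subprocess


def resolve_local_ip(local_ip=None, local_ip_command=None):
    """Return the tunnel endpoint address for this host.

    An explicitly configured local_ip always wins; otherwise, if a
    command is configured (option 6 above), run it through the shell
    and use the first line of its output. Any logic -- options 1-5 or
    something deployment-specific -- can live in that external script.
    """
    if local_ip:
        return local_ip
    if local_ip_command:
        output = subprocess.check_output(local_ip_command, shell=True)
        return output.decode("utf-8").splitlines()[0].strip()
    raise ValueError("Neither local_ip nor a lookup command is configured")


# e.g. resolve_local_ip(local_ip_command="/usr/local/bin/pick-local-ip.sh")
#      (a hypothetical deployer-provided script)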

thanks,
Isaku Yamahata

 
 The src is the src address the local machine would be outputting from.
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Kyle Mestery

On Jan 16, 2014, at 7:52 AM, Isaku Yamahata isaku.yamah...@gmail.com wrote:

 On Thu, Jan 16, 2014 at 10:53:11PM +1300,
 Robert Collins robe...@robertcollins.net wrote:
 
 On 16 January 2014 22:51, Robert Collins robe...@robertcollins.net wrote:
 
 I don't think 1 is a special case of 3 - interface based connections
 are dependent on physical wiring,
 
 How about 4. Send a few packets with a nonce in them to any of the
 already meshed nodes, and those nodes can report what ip they
 originated from.
 
 Oh, and 5: do an 'ip route get ip of existing mesh node' which will
 give you output like:
 Press ENTER or type command to continue
 192.168.3.1 via 192.168.1.1 dev wlan0  src 192.168.1.17
cache
 
 6: run a command which is specified in the config file.
   use its output.
 
 Then arbitrary logic can be used by writing shell script or whatever
 without modifying neutron.
 Some samples which implement option 1-5 can be put in the repo.
 
Flexibility-wise, I like this option the best.

 thanks,
 Isaku Yamahata
 
 
 The src is the src address the local machine would be outputting from.
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -- 
 Isaku Yamahata isaku.yamah...@gmail.com
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller

On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah chmo...@enovance.com wrote:


On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:
Once a common library is in place, is there any intention to (or resistance 
against) collapsing the clients into a single project or even a single command 
(a la busybox)?


that's what openstackclient is here for 
https://github.com/openstack/python-openstackclient

After speaking with people working on OSC and looking at the code base in 
depth; I don’t think this addresses what Chris is implying: OSC wraps the 
individual CLIs built by each project today, instead of the inverse: a common 
backend that the individual CLIs can wrap - the latter is an important 
distinction as currently, building a single binary install of OSC for say, 
Windows is difficult given the dependency tree incurred by each of the wrapped 
CLIs, difference in dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client classes / functions / 
methods means that the layer that presents a consistent user interface (OSC) to 
the user is made more complex juggling names/renames/commands/etc.

In the inverted case of what we have today (single backend); as a developer of 
user interfaces (CLIs, Applications, Web apps (horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(…)

def my_command(cli):
    compute = Compute(Auth(cli.tenant…, connect=True))
    compute.list_flavors()

This would mean that even if the individual clients needed or wanted to keep 
their specific CLIs, they would be able to use a back end that is not a “least 
common denominator” (each service can have a rich common.api.compute.py or 
api.compute/client.py and extend where needed). Tools like horizon / 
openstackclient can choose not to leverage the “power user/operator/admin” 
components and present a simplified user interface.

I’m working on a wiki page + blueprint to brainstorm how we could accomplish 
this based off of what work is in flight today (see doug’s linked blueprint) 
and sussing out a layout / API strawman for discussion.

Some of the additions that came out of this email threads and others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and each sub-project 
delegates to the auth object to manage that (see the sketch below)
3. Verified Mocks / Stubs / Fakes must be provided for proper unit testing
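
To illustrate point 2 (a sketch only; these class and method names are made up and are not the proposed API): service clients hold an Auth object and delegate token refresh to it, so re-auth on an expired token lives in exactly one place.

import requests


class Auth(object):
    """Owns the token / service catalog; service clients never talk to
    keystone directly."""

    def __init__(self, auth_url, username, password, tenant_name):
        self.auth_url = auth_url.rstrip("/")
        self.creds = {"passwordCredentials": {"username": username,
                                              "password": password},
                      "tenantName": tenant_name}
        self.token = None

    def get_token(self, refresh=False):
        if self.token is None or refresh:
            resp = requests.post(self.auth_url + "/tokens",
                                 json={"auth": self.creds})
            resp.raise_for_status()
            self.token = resp.json()["access"]["token"]["id"]
        return self.token


def request_with_reauth(auth, method, url, **kwargs):
    """One place that handles expired tokens for every service client."""
    base_headers = dict(kwargs.pop("headers", None) or {})
    for retried in (False, True):
        headers = dict(base_headers)
        headers["X-Auth-Token"] = auth.get_token(refresh=retried)
        resp = requests.request(method, url, headers=headers, **kwargs)
        if resp.status_code != 401 or retried:
            return resp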

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] Sync up patches

2014-01-16 Thread Joe Gordon
On Wed, Jan 15, 2014 at 1:29 PM, Dugger, Donald D donald.d.dug...@intel.com
 wrote:

  My thought was to try and get some parallel effort going, do the resync
 as a continuing task and suffer a little ongoing pain versus a large amount
 of pain at the end.  Given that the steps for a resync are the same no
 matter when we do it, waiting until the end is acceptable.



 From a `just do it’ perspective I think we’re in violent agreement on the
 top level tasks, as long as your step 3, integration testing, is the same
 as what I’ve been calling working functionality, e.g. have the nova
 scheduler use the gantt source tree.



 PS:  How I resync.  What I’ve done is create a list with md5sums of all
 the files in nova that we’ve duplicated in gantt.  I then update a nova git
 tree and compare the current md5sums for those files with my list.  I use
 format-patch to get the patches from the nova tree and grep for any patch
 that applies to a gantt file.  I then use `git am’ to apply those patches
 to the gantt tree, modifying any of the patches that are needed.
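
For anyone curious, the md5sum bookkeeping could look roughly like this (a sketch only; the manifest format and paths are assumptions, not Don's actual scripts):

import hashlib
import json
import os


def file_md5(path):
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
    return md5.hexdigest()


def files_needing_backport(nova_tree, manifest_path):
    """Compare tracked nova files against the md5 manifest recorded at
    the last sync; any mismatch means the file has nova patches that
    still need porting to gantt (via git format-patch / git am)."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # assumed: {relative path: md5 at last sync}
    return [rel_path for rel_path, recorded in sorted(manifest.items())
            if file_md5(os.path.join(nova_tree, rel_path)) != recorded]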


This sync won't work once we start the nova/gantt rename, so we need a
better approach.

Syncing the gantt tree with nova sounds like a daunting task. Perhaps it
would be easier if we use the current gantt tree as a test to see what is
involved in getting gantt working, and then redo the fork after the
icehouse feature freeze with the aim of getting the gantt tree working by
the start of juno, so we can have the freeze nova-scheduler discussion.
Syncing nova and gantt during feature freeze should be significantly easier
than doing it now.




 It may sound a little cumbersome but I’ve got some automated scripts that
 make it relatively easy and modifying the patches, the hard part, was going
 to be necessary no matter how we do it.



 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 *From:* Joe Gordon [mailto:joe.gord...@gmail.com]
 *Sent:* Wednesday, January 15, 2014 10:21 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [gantt] Sync up patches







 On Wed, Jan 15, 2014 at 8:52 AM, Dugger, Donald D 
 donald.d.dug...@intel.com wrote:

  The current thought is that I will do the work to backport any changes
 that are made to the nova tree that overlap the gantt tree.  I don’t see
 this as an impossible task.  Yes it will get harder as we make specific
 changes to gantt but, given that our first goal is to make gantt a drop in
 replacement for the nova scheduler there shouldn’t be that many gantt
 specific changes that would make backporting difficult so I think this is a
 doable path.



 How are you tracking this today? I think its worth having a well
 documented plan for this, as we will most likely have to keep syncing the
 two repos for a while.



 If all that is needed to cherry-pick a patch from nova to gantt is a
 nova=gantt rename these should be easy and a single +2 makes sense, but
 for any patch that requires changes beyond that I think a full review
 should be required.





 For the ordering, the unit tests and working functionality are indeed
 effectively the same, highest priority, I don’t have an issue with getting
 the unit tests working first.



 Great, so I would prefer to see gantt gating on unit tests before landing
 any other patches.



 Whats the full plan for the steps to bootstrap?  It would be nice to have
 a roadmap for this so we don't get bogged down in the weeds. Off the top of
 my head I imagine it would be something like (I have a feeling I am missing
 a few steps here):



 1) Get unit tests working

 2) Trim repo

 3) Set up integration testing  (In parallel get gantt client working)

 4) Resync with nova





 As far as trimming is concerned I would still prefer to do that later,
 after we have working functionality.  Since trimable files won’t have gantt
 specific changes keeping them in sync with the nova tree is easy.  Until we
 have working functionality we won’t really know that a file is not needed
 (I am constantly surprised by code that doesn’t do what I expect) and
 deleting a file after we are sure it’s not needed is easy.



 Fair enough, I moved trimming after get unit tests working in the list
 above.





 --

 Don Dugger

 Censeo Toto nos in Kansa esse decisse. - D. Gale

 Ph: 303/443-3786



 *From:* Joe Gordon [mailto:joe.gord...@gmail.com]
 *Sent:* Wednesday, January 15, 2014 9:28 AM


 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [gantt] Sync up patches







 On Tue, Jan 14, 2014 at 2:22 PM, Dugger, Donald D 
 donald.d.dug...@intel.com wrote:

  All-



 I want to clear up some confusion I’m seeing in the reviews of these
 syncup patches.  These patches merely bring recent changes from the nova
 tree over to the gantt tree.  There is no attempt to actually change the
 code for gantt, that is a separate task.  Our first goal is to have the
 

Re: [openstack-dev] [Openstack] [Neutron] auto configration of local_ip

2014-01-16 Thread balaj...@freescale.com
 2014/1/16 NOTSU Arata no...@virtualtech.jp:
  The question is, what criteria is appropriate for the purpose. The
 criteria being mentioned so far in the review are:
 
  1. assigned to the interface attached to default gateway 2. being in
  the specified network (CIDR) 3. assigned to the specified interface
 (1 can be considered a special case of 3)
 
 
 For a certain deployment scenario, local_ip is totally different among
 those nodes, but if we consider local_ip as local_interface, it may
 match most of the nodes. I think it is more convenient to switch from
 ip to interface parameter.
 
[P Balaji-B37839] We implemented this and are using it in our test setup. We are 
glad to share this through a blueprint/bug if anybody is interested.

Regards,
Balaji.P


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread Chris Friesen

On 01/15/2014 11:25 PM, Clint Byrum wrote:

Excerpts from Alan Kavanagh's message of 2014-01-15 19:11:03 -0800:

Hi Paul

I posted a query to Ironic which is related to this discussion. My thinking was I want to 
ensure the case you note here: (1) a tenant can not read another tenant's 
disk; the next (2) was where in Ironic you provision a baremetal server that 
has an onboard disk as part of the blade provisioned to a given tenant-A. Then when 
tenant-A finishes his baremetal blade lease and that blade comes back into the pool and 
tenant-B comes along, I was asking what open source tools guarantee data destruction so 
that no ghost images or file retrieval is possible?



Is that really a path worth going down, given that tenant-A could just
drop evil firmware in any number of places, and thus all tenants afterward
are owned anyway?


Ooh, nice one! :)

I suppose the provider could flash to known-good firmware for all 
firmware on the device in between leases.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 08:44 -0500, Anita Kuno wrote:
 Thank you, fungi.
 
 You have kept openstack-infra running for that last 2 weeks as the sole
 plate-spinner whilst the rest of us were conferencing, working on the
 gerrit upgrade or getting our laptop stolen.
 
 You spun up and configured two new Jenkinses (Jenkinsii?) and then deal
 with the consequences of one expansion of our system slowing down
 another. With Jim's help from afar, Zuul is now on a faster server. [0]
 
 All this while dealing with the every day business of keeping -infra
 operating.
 
 I am so grateful for all you do.
 
 I tip my hat to you, sir.

Seconded.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-16 Thread Walls, Jeffrey Joel (Cloud OS RD)
 From: Jaromir Coufal [mailto:jcou...@redhat.com]

 Well, I think you are right. We should be able to provide as much as
 possible. I don't think that Tuskar has to do everything. Image builder
 would be an amazing feature. And I don't think it has to be Tuskar-UI
 business. There can be UI separate from Tuskar, which is part of Horizon
 umbrella, dealing only with building images. I think it has enough
 reasons to become a separate project in the future.

I think we're already there.  The first time I saw Tuskar I was surprised
that it was embedded inside of Horizon.  Horizon is an administration
dashboard, but tuskar provides operational functionality.  For
HP Cloud OS, we ended up with two dashboards for this very reason.

At HP, I wrote a Tuskar-like solution for standing up OverClouds, but as
Tuskar matures I could totally see us pulling that in and using it instead.

As far as image building goes, I think that too would make more 
sense either stand-alone or embedded inside an operational
dashboard.

Jeff

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Soren Hansen
+1. Thanks for all your hard work.
 Den 16/01/2014 19.17 skrev Anita Kuno ante...@anteaya.info:

 Thank you, fungi.

 You have kept openstack-infra running for that last 2 weeks as the sole
 plate-spinner whilst the rest of us were conferencing, working on the
 gerrit upgrade or getting our laptop stolen.

 You spun up and configured two new Jenkinses (Jenkinsii?) and then deal
 with the consequences of one expansion of our system slowing down
 another. With Jim's help from afar, Zuul is now on a faster server. [0]

 All this while dealing with the every day business of keeping -infra
 operating.

 I am so grateful for all you do.

 I tip my hat to you, sir.

 Anita.


 [0] My sense is once Jim is back from vacation time he will provide a
 detailed report on these changes.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Kyle Mestery

On Jan 16, 2014, at 9:17 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-01-16 at 08:44 -0500, Anita Kuno wrote:
 Thank you, fungi.
 
 You have kept openstack-infra running for that last 2 weeks as the sole
 plate-spinner whilst the rest of us were conferencing, working on the
 gerrit upgrade or getting our laptop stolen.
 
 You spun up and configured two new Jenkinses (Jenkinsii?) and then deal
 with the consequences of one expansion of our system slowing down
 another. With Jim's help from afar, Zuul is now on a faster server. [0]
 
 All this while dealing with the every day business of keeping -infra
 operating.
 
 I am so grateful for all you do.
 
 I tip my hat to you, sir.
 
 Seconded.
 
Third’ed? Either way, thank you Fungi for all that you do for OpenStack!

Kyle

 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller

On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:




On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah chmo...@enovance.com wrote:


On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:
Once a common library is in place, is there any intention to (or resistance 
against) collapsing the clients into a single project or even a single command 
(a la busybox)?


that's what openstackclient is here for 
https://github.com/openstack/python-openstackclient

After speaking with people working on OSC and looking at the code base in 
depth; I don’t think this addresses what Chris is implying: OSC wraps the 
individual CLIs built by each project today, instead of the inverse: a common 
backend that the individual CLIs can wrap - the latter is an important 
distinction as currently, building a single binary install of OSC for say, 
Windows is difficult given the dependency tree incurred by each of the wrapped 
CLIs, difference in dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client classes / functions / 
methods means that the layer that presents a consistent user interface (OSC) to 
the user is made more complex juggling names/renames/commands/etc.

In the inverted case of what we have today (single backend); as a developer of 
user interfaces (CLIs, Applications, Web apps (horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(…)

def my_command(cli):
    compute = Compute(Auth(cli.tenant…, connect=True))
    compute.list_flavors()

This would mean that even if the individual clients needed or wanted to keep 
their specific CLIs, they would be able to use a back end that is not a “least 
common denominator” (each service can have a rich common.api.compute.py or 
api.compute/client.py and extend where needed). Tools like horizon / 
openstackclient can choose not to leverage the “power user/operator/admin” 
components and present a simplified user interface.

I’m working on a wiki page + blueprint to brainstorm how we could accomplish 
this based off of what work is in flight today (see doug’s linked blueprint) 
and sussing out a layout / API strawman for discussion.

Some of the additions that came out of this email threads and others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and each sub-project 
delegates to the auth object to manage that.
3. Verified Mocks / Stubs / Fakes must be provided for proper unit testing

I am happy to see this work being done, there is definitely a lot of work to be 
done on the clients.

This blueprint sounds like it's still being fleshed out, so I am wondering what 
the value is of the current patches 
https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z

Those patches mainly sync cliutils and apiutils from oslo into the assorted 
clients. But if this blueprint is about the python API and not the CLI (as that 
would be the openstack-pythonclient), why sync in apiutils?

Also does this need to go through oslo-incubator or can this start out as a 
library? Making this a library earlier on will reduce the number of patches 
needed to get 20+ repositories to use this.


Alexei and others have at least started the first stage of a rollout - the 
blueprint(s) need additional work, planning and discussion, but his work is a 
good first step (reducing the duplication of code), although I am worried that 
the libraries and APIs / namespaces will need to change if we continue these 
discussions, which potentially means re-doing work.

If we take a step back, a rollout process might be:

1: Solidify the libraries / layout / naming conventions (blueprint)
2: Solidify the APIs exposed to consumers (blueprint)
3: Pick up on the common-client-library-2 work which is primarily a migration 
of common code into oslo today, into the structure defined by 1  2

So, I sort of agree: moving / collapsing code now might be premature. I do 
strongly agree it should stand on its own as a library rather than going through 
the oslo incubator, however. We should start with a single, clean namespace / 
library rather than depending on oslo directly.

jesse


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-16 Thread Sullivan, Jon Paul

From: Mohammad Banikazemi [mailto:m...@us.ibm.com]


Sullivan, Jon Paul jonpaul.sulli...@hp.com 
wrote on 01/16/2014 05:39:04 AM:


 I can't recommend Jenkins Job Builder highly enough if you use Jenkins.

 [2] https://github.com/openstack-infra/jenkins-job-builder



Thanks for the pointer.
When you say we could reuse the code used by the infra team, are you referring 
to the yaml files and/or build scripts?
Is there a place where we could look at the yaml configuration (and other 
relevant) files that they may be using?

The openstack-infra/config repository stores the openstack jjb code:

https://github.com/openstack-infra/config/tree/master/modules/openstack_project/files/jenkins_job_builder/config

I am writing another reply with a sample macro to trigger off a gerrit upload…

Thanks,

Mohammad


Thanks,
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2.
Registered Number: 361933

The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as HP CONFIDENTIAL.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller

On Jan 16, 2014, at 9:26 AM, Justin Hammond justin.hamm...@rackspace.com wrote:

I'm not sure if it was said, but which httplib is being used (urllib3
maybe?). Also I noticed many people were talking about supporting auth
properly, but are there any intentions to properly support 'noauth'
(python-neutronclient, for instance, doesn't support it properly as of
this writing)?


Can you detail out noauth for me? I would say the de facto HTTP library in Python 
today is python-requests - urllib3 is also good, but from a *consumer* standpoint 
requests offers more in terms of usability / extensibility.

On 1/15/14 10:53 PM, Alexei Kornienko alexei.kornie...@gmail.com wrote:

I did notice, however, that neutronclient is
conspicuously absent from the Work Items in the blueprint's Whiteboard.

It will surely be added later. We are already working on several things in
parallel and we will add neutronclient soon.

Do you need another person to work on the neutron client?


I would love to see a bit more detail on the structure of the
lib(s); the blueprint really doesn't discuss the
design/organization/intended API of the libs.  For example, I would hope
the distinction between the various layers of a client stack doesn't get
lost, i.e. not mixing the low-level REST API bits with the higher-level
CLI parsers and decorators.
Do the long-term goals include a common caching layer?


The distinction between client layers won't get lost and will only be
improved. My basic idea is the following (a rough sketch follows the list):

1) Transport layer would handle all transport related stuff - HTTP, JSON
encoding, auth, caching, etc.

2) Model layer (Resource classes, BaseManager, etc.) will handle data
representation, validation

3) API layer will handle all project specific stuff - url mapping, etc.
(This will be imported to use client in other applications)

4) Cli level will handle all stuff related to cli mapping - argparse,
argcomplete, etc.
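
A compressed sketch of how those four layers might fit together (all names invented for illustration; this is not the code under review):

import requests


class Transport(object):
    """Layer 1: HTTP, JSON, auth headers, caching/retry hooks."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint.rstrip("/")
        self.token = token

    def get(self, path):
        resp = requests.get(self.endpoint + path,
                            headers={"X-Auth-Token": self.token})
        resp.raise_for_status()
        return resp.json()


class Resource(object):
    """Layer 2: thin data representation / validation."""

    def __init__(self, attrs):
        self.__dict__.update(attrs)


class FlavorManager(object):
    """Layer 3: project-specific URL mapping, importable by any application."""

    def __init__(self, transport):
        self.transport = transport

    def list(self):
        return [Resource(f) for f in self.transport.get("/flavors")["flavors"]]


# Layer 4 (CLI) would be an argparse/cliff front end that simply calls
# FlavorManager(transport).list() and formats the result.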


I believe the current effort referenced by the blueprint is focusing on
moving existing code into the incubator for reuse, to make it easier to
restructure later. Alexei, do I have that correct?

You are right. The first thing we are doing is trying to make all clients
look/work in a similar way. After that we'll continue our work on improving
the overall structure.






2014/1/16 Noorul Islam K M noo...@noorul.com

Doug Hellmann doug.hellm...@dreamhost.com 
writes:

Several people have mentioned to me that they are interested in, or
actively working on, code related to a common client library --
something
meant to be reused directly as a basis for creating a common library for
all of the openstack clients to use. There's a blueprint [1] in oslo,
and I
believe the keystone devs and unified CLI teams are probably interested
in
ensuring that the resulting API ends up meeting all of our various
requirements.

If you're interested in this effort, please subscribe to the blueprint
and
use that to coordinate efforts so we don't produce more than one common
library. ;-)



Solum is already using it https://review.openstack.org/#/c/58067/

I would love to watch this space.

Regards,
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Davanum Srinivas
+1 :)

-- dims

On Thu, Jan 16, 2014 at 10:25 AM, Kyle Mestery mest...@siliconloons.com wrote:

 On Jan 16, 2014, at 9:17 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-01-16 at 08:44 -0500, Anita Kuno wrote:
 Thank you, fungi.

 You have kept openstack-infra running for that last 2 weeks as the sole
 plate-spinner whilst the rest of us were conferencing, working on the
 gerrit upgrade or getting our laptop stolen.

 You spun up and configured two new Jenkinses (Jenkinsii?) and then deal
 with the consequences of one expansion of our system slowing down
 another. With Jim's help from afar, Zuul is now on a faster server. [0]

 All this while dealing with the every day business of keeping -infra
 operating.

 I am so grateful for all you do.

 I tip my hat to you, sir.

 Seconded.

 Third’ed? Either way, thank you Fungi for all that you do for OpenStack!

 Kyle

 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] auto configration of local_ip

2014-01-16 Thread Martinx - ジェームズ
Guys,

Let me ask something about this...

Apparently, VXLAN can be easier to implement/maintain when using it with
IPv6 (read about it here: www.nephos6.com/pdf/OpenStack-on-IPv6.pdf), so
I'm wondering if local_ip can be an IPv6 address (for IceHouse-3 / Ubuntu
14.04) and, of course, whether it is better at the end of the day.

Thoughts?!

Cheers!
Thiago


On 16 January 2014 12:58, balaj...@freescale.com wrote:

  2014/1/16 NOTSU Arata no...@virtualtech.jp:
   The question is, what criteria is appropriate for the purpose. The
  criteria being mentioned so far in the review are:
  
   1. assigned to the interface attached to default gateway 2. being in
   the specified network (CIDR) 3. assigned to the specified interface
  (1 can be considered a special case of 3)
  
 
  For a certain deployment scenario, local_ip is totally different among
  those nodes, but if we consider local_ip as local_interface, it may
  match most of the nodes. I think it is more convenient to switch from
  ip to interface parameter.
 
 [P Balaji-B37839] We implemented this and using in our test setup. We are
 glad to share this through blue-print/Bug if anybody is interested.

 Regards,
 Balaji.P




Re: [openstack-dev] [Neutron] auto configration of local_ip

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 17:41 +0900, NOTSU Arata wrote:
 Hello,
 
 I'm trying to add a new configuration option for Neutron OVS agent. Although 
 I've submitted a patch and it is being reviewed [1], I'm posting to this 
 mailing list seeking opinion from a wider range.
 
 At present, when you deploy an environment using Neutron + OVS + GRE/VXLAN, 
 you have to set local_ip for tunnelling in neutron agent config (typically 
 ovs_neutron_plugin.ini). As each host has different local_ip, preparing and 
 maintaining the config files is a cumbersome work.
 
 So, I would like to introduce automatic configuration of local_ip in Neutron. 
 Although such management should be done by some management system, Neutron 
 having this feature could also be helpful to the deployment system. It would 
 reduce the need for the systems to implement such a feature on their own.
 
 Anyway, with this feature, instead of setting an actual IP address to 
 local_ip, you set some options (criteria) for choosing an IP address suitable 
 for local_ip among IP addresses assigned to the host. At runtime, OVS agent 
 will choose an IP address according to the criteria. You will get the same 
 effect as if you had set local_ip to that IP address by hand.
 
 The question is, what criteria is appropriate for the purpose. The criteria 
 being mentioned so far in the review are:
 
 1. assigned to the interface attached to default gateway
 2. being in the specified network (CIDR)
 3. assigned to the specified interface
(1 can be considered a special case of 3)
 
 Any comment would be appreciated.

Hi Arata,

I definitely understand the thoughts behind your patch. However, I
believe this falls under the realm of configuration management tools
(Chef/Puppet/Ansible/Salt/etc).

As an example of how the OpenStack Chef cookbooks deal with this
particular problem, we inject a local_ip variable into the
ovs_neutron_plugin.ini.erb template [1]. This local_ip variable is
assigned in the common.rb recipe [2]. The deployer can either manually
set the local_ip to an address by setting the node's attribute directly,
or the deployer can have the local_ip address determined based on a
named network interface. See the code at [2] for this switching
behavior.
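
As a rough illustration of the kind of address selection being discussed
(whether it ends up in the agent itself or in a config management tool),
something like the following would cover the "named interface" and "CIDR"
criteria. This is only a sketch: it assumes the third-party netifaces
library plus the stdlib ipaddress module, and the function names are made
up for illustration.

import ipaddress

import netifaces


def ip_from_interface(ifname):
    """Return the first IPv4 address bound to a named interface."""
    addrs = netifaces.ifaddresses(ifname).get(netifaces.AF_INET, [])
    return addrs[0]['addr'] if addrs else None


def ip_from_cidr(cidr):
    """Return the first local IPv4 address that falls inside a CIDR."""
    net = ipaddress.ip_network(cidr)
    for ifname in netifaces.interfaces():
        for entry in netifaces.ifaddresses(ifname).get(netifaces.AF_INET, []):
            if ipaddress.ip_address(entry['addr']) in net:
                return entry['addr']
    return None


# e.g.: local_ip = ip_from_interface('eth1') or ip_from_cidr('192.168.10.0/24')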

In summary, I believe it's best left to configuration management systems
to do this type of thing (configuration of many nodes in a structured
fashion), instead of having Neutron do this itself.

Best,
-jay

[1]
https://github.com/stackforge/cookbook-openstack-network/blob/master/templates/default/plugins/openvswitch/ovs_neutron_plugin.ini.erb#L111
[2]
https://github.com/stackforge/cookbook-openstack-network/blob/master/recipes/common.rb#L125




Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Russell Bryant
On 01/16/2014 08:44 AM, Anita Kuno wrote:
 Thank you, fungi.
 
 You have kept openstack-infra running for the last 2 weeks as the sole
 plate-spinner whilst the rest of us were conferencing, working on the
 gerrit upgrade or getting our laptop stolen.
 
 You spun up and configured two new Jenkinses (Jenkinsii?) and then dealt
 with the consequences of one expansion of our system slowing down
 another. With Jim's help from afar, Zuul is now on a faster server. [0]
 
 All this while dealing with the every day business of keeping -infra
 operating.
 
 I am so grateful for all you do.
 
 I tip my hat to you, sir.

+1

This is some very well deserved recognition for hard work and dedication
to OpenStack. :-)

-- 
Russell Bryant



Re: [openstack-dev] a common client library

2014-01-16 Thread Alexei Kornienko

On 01/16/2014 05:25 PM, Jesse Noller wrote:


On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:






On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller 
jesse.nol...@rackspace.com wrote:



On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah
chmo...@enovance.com wrote:



On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:

Once a common library is in place, is there any intention to
(or resistance against) collapsing the clients into a single
project or even a single command (a la busybox)?



that's what openstackclient is here for
https://github.com/openstack/python-openstackclient


After speaking with people working on OSC and looking at the code
base in depth; I don't think this addresses what Chris is
implying: OSC wraps the individual CLIs built by each project
today, instead of the inverse: a common backend that the
individual CLIs can wrap - the latter is an important distinction
as currently, building a single binary install of OSC for say,
Windows is difficult given the dependency tree incurred by each
of the wrapped CLIs, difference in dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client classes /
functions / methods means that the layer that presents a
consistent user interface (OSC) to the user is made more complex
juggling names/renames/commands/etc.

In the inverted case of what we have today (single backend); as a
developer of user interfaces (CLIs, Applications, Web apps
(horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(...)

def my_command(cli):
    compute = Compute(Auth(cli.tenant..., connect=True))
    compute.list_flavors()

This would mean that *even if* the individual clients needed or
wanted to keep their specific CLIs, they would be able to use a
not "least common denominator" back end (each service can have a
rich common.api.compute.py or
api.compute/client.py and extend where needed). However tools like
horizon / openstackclient can choose not to leverage the "power
user/operator/admin" components and present a simplified user
interface.

I'm working on a wiki page + blueprint to brainstorm how we could
accomplish this based off of what work is in flight today (see
doug's linked blueprint) and sussing out a layout / API strawman
for discussion.

Some of the additions that came out of this email threads and others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and each
sub-project delegates to the auth object to manage that.
3. Verified Mocks / Stubs / Fakes must be provided for proper
unit testing
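
A minimal sketch of what point 2 could look like in practice -- the class
names, the Keystone v2-style token request, and the retry-once policy are
all illustrative assumptions here, not an agreed API:

import json

import requests


class Auth(object):
    """Holds credentials and the token, and knows how to (re)authenticate."""

    def __init__(self, auth_url, username, password, tenant_name):
        # auth_url is assumed to be the full token endpoint,
        # e.g. http://keystone:5000/v2.0/tokens
        self.auth_url = auth_url
        self.username = username
        self.password = password
        self.tenant_name = tenant_name
        self.token = None

    def authenticate(self):
        body = {'auth': {'passwordCredentials': {'username': self.username,
                                                 'password': self.password},
                         'tenantName': self.tenant_name}}
        resp = requests.post(self.auth_url, data=json.dumps(body),
                             headers={'Content-Type': 'application/json'})
        resp.raise_for_status()
        self.token = resp.json()['access']['token']['id']
        return self.token


class Compute(object):
    """A service client that delegates all token handling to the Auth object."""

    def __init__(self, auth, endpoint):
        self.auth = auth
        self.endpoint = endpoint

    def _get(self, path):
        if self.auth.token is None:
            self.auth.authenticate()
        headers = {'X-Auth-Token': self.auth.token}
        resp = requests.get(self.endpoint + path, headers=headers)
        if resp.status_code == 401:
            # Token expired: ask the auth object for a new one and retry once.
            headers['X-Auth-Token'] = self.auth.authenticate()
            resp = requests.get(self.endpoint + path, headers=headers)
        resp.raise_for_status()
        return resp.json()

    def list_flavors(self):
        return self._get('/flavors')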


I am happy to see this work being done, there is definitely a lot of 
work to be done on the clients.


This blueprint sounds like its still being fleshed out, so I am 
wondering what the value is of the current patches 
https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z


Those patches mainly sync cliutils and apiutils from oslo into the 
assorted clients. But if this blueprint is about the python API and 
not the CLI (as that would be the openstack-pythonclient), why sync 
in apiutils?


Also does this need to go through oslo-incubator or can this start 
out as a library? Making this a library earlier on will reduce the 
number of patches needed to get 20+ repositories to use this.




Alexei and others have at least started the first stage of a rollout - 
the blueprint(s) needs additional work, planning and discussion, but 
his work is a good first step (reduce the duplication of code) 
although I am worried that the libraries and APIs / namespaces will 
need to change if we continue these discussions which potentially 
means re-doing work.


If we take a step back, a rollout process might be:

1: Solidify the libraries / layout / naming conventions (blueprint)
2: Solidify the APIs exposed to consumers (blueprint)
3: Pick up on the common-client-library-2 work which is primarily a 
migration of common code into oslo today, into the structure defined 
by 1 & 2


So, I sort of agree: moving / collapsing code now might be premature. 
I do strongly agree it should stand on its own as a library rather 
than an oslo incubator however. We should start with a single, clean 
namespace / library rather than depending on oslo directly.
Knowing the usual openstack workflow I'm afraid that #1 and #2 with a waterfall 
approach may take years to complete.
And after they're approved it will become clear that this 
architecture is already outdated.

We try to use an iterative approach for 

Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-16 Thread Jiří Stránský

On 15.1.2014 14:07, James Slagle wrote:

I'll start by laying out how I see editing or updating nodes working
in TripleO without Tuskar:

To do my initial deployment:
1.  I build a set of images for my deployment for different roles. The
images are different based on their role, and only contain the needed
software components to accomplish the role they intend to be deployed.
2.  I load the images into glance
3.  I create the Heat template for my deployment, likely from
fragments that are already available. Set quantities, indicate which
images (via image uuid) are for which resources in heat.
4.  heat stack-create with my template to do the deployment

To update my deployment:
1.  If I need to edit a role (or create a new one), I create a new image.
2.  I load the new image(s) into glance
3.  I edit my Heat template, update any quantities, update any image uuids, etc.
4.  heat stack-update my deployment

In both cases above, I see the role of Tuskar being around steps 3 and 4.


+1. Although it's worth noting that if we want zero downtime updates, 
we'll probably need the ability to migrate content off the machines being 
updated - that would be a pre-3 step. (And for that we need spare 
capacity equal to the number of nodes being updated, so we'll probably 
want to do updating in chunks in the future, not the whole overcloud at 
once).




I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/concensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.


Yeah I don't think image building should be driven through Tuskar API 
(and probably not even Tuskar UI?). Tuskar should just fetch images from 
Glance imho. However, we should be aware that image building *is* our 
concern, as it's an important prerequisite for deployment. We should 
provide at least directions how to easily build images for use with 
Tuskar, not leave users in doubt.


snip


We will have to store image metadata in tuskar probably, that would map to
glance, once the image is generated. I would say we need to store the list
of the elements and probably the commit hashes (because elements can
change). Also it should be versioned, as the images in glance will be also
versioned.


I'm not sure why this image metadata would be in Tuskar. I definitely
like the idea of knowing the versions/commit hashes of the software
components in your images, but that should probably be in Glance.


+1




We probably can't store it in Glance, because we will first store the
metadata, then generate the image. Right?


I'm not sure I follow this point. But, mainly, I don't think Tuskar
should be automatically generating images.


+1




Then we could see whether image was created from the metadata and whether
that image was used in the heat-template. With versions we could also see
what has changed.


We'll be able to tell what image was used in the heat template, and
thus the deployment, based on its UUID.

I love the idea of seeing differences between images, especially
installed software versions, but I'm not sure that belongs in Tuskar.
That sort of utility functionality seems like it could apply to any
image you might want to launch in OpenStack, not just to do a
deployment.  So, I think it makes sense to have that as Glance
metadata or in Glance somehow. For instance, if I wanted to launch an
image that had a specific version of apache, it'd be nice to be able
to see that when I'm choosing an image to launch.


Yes. We might want to show the data to the user, but I don't see a need 
to run this through Tuskar API. Tuskar UI could query Glance directly 
and display the metadata to the user. (When using CLI, one could use 
Glance CLI directly. We're not adding any special logic on top.)





But there was also an idea that there would be some generic image, containing
all services, and we would just configure which services to start. In that case
we would need to version this as well.


-1 to this.  I think we should stick with specialized images per role.
I replied on the wireframes thread, but I don't see how
enabling/disabling services in a prebuilt image should work. Plus, I
don't really think it fits with the TripleO model of having an image
created based on its specific role (I hate to use that term and
muddy the water... I mean in the generic sense here).



= New Comments =

My comments on this train of thought:

- I'm afraid of the idea of applying changes immediately for the same
reasons I'm worried about a few other things. Very little of what we do will
actually finish executing immediately and will instead be long running
operations. If I edit a few roles in a row, we're looking at a lot of
outstanding operations executing against other OpenStack pieces (namely
Heat).

The idea of "immediately" also suffers from a sort of "Oh shit, that's not
what I 

Re: [openstack-dev] a common client library

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 09:03 +0100, Flavio Percoco wrote:
 On 15/01/14 21:35 +, Jesse Noller wrote:
 
 On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote:
 
  Several people have mentioned to me that they are interested in, or 
  actively working on, code related to a common client library -- 
  something meant to be reused directly as a basis for creating a common 
  library for all of the openstack clients to use. There's a blueprint [1] 
  in oslo, and I believe the keystone devs and unified CLI teams are 
  probably interested in ensuring that the resulting API ends up meeting all 
  of our various requirements.
 
  If you're interested in this effort, please subscribe to the blueprint and 
  use that to coordinate efforts so we don't produce more than one common 
  library. ;-)
 
  Thanks,
  Doug
 
 
  [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2
 
 *raises hand*
 
 Me me!
 
 I’ve been talking to many contributors about the Developer Experience stuff 
 I emailed out prior to the holidays and I was starting blueprint work, but 
 this is a great pointer. I’m going to have to sync up with Alexei.
 
 I think solving this for openstack developers and maintainers as the 
 blueprint says is a big win in terms of code reuse / maintenance and 
 consistent but more so for *end-user developers* consuming openstack clouds.
 
 Some background - there’s some terminology mismatch but the rough idea is 
 the same:
 
 * A centralized “SDK” (Software Development Kit) would be built condensing 
 the common code and logic and operations into a single namespace.
 
 * This SDK would be able to be used by “downstream” CLIs - essentially the 
 CLIs become a specialized front end - and in some cases, only an argparse or 
 cliff front-end to the SDK methods located in the (for example) 
 openstack.client.api.compute
 
 * The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived 
 clients - all of the openstack.client.api.** classes would accept an Auth 
 object to delegate management / mocking of the Auth / service catalog stuff 
 to. This means developers building applications (say for example, horizon) 
 don’t need to worry about token/expired authentication/etc.
 
 * Simplify the dependency graph  code for the existing tools to enable 
 single binary installs (py2exe, py2app, etc) for end users of the command 
 line tools.
 
 Short version: if a developer wants to consume an openstack cloud; the would 
 have a single SDK with minimal dependencies and import from a single 
 namespace. An example application might look like:
 
 from openstack.api import AuthV2
 from openstack.api import ComputeV2
 
 myauth = AuthV2(…., connect=True)
 compute = ComputeV2(myauth)
 
 compute.list_flavors()
 
 
 I know this is an example but, could we leave the version out of the
 class name? Having something like:
 
 from openstack.api.v2 import Compute
 
 or
 
 from openstack.compute.v2 import Instance
 
 (just made that up)
 
 for marconi we're using the later.

++

-jay




Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin
Thank you for replying, Vish. I did sync and verified that the file was 
written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb "Error: Failed to live 
migrate instance to host" but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there may 
be a bug in the lvm backend if this isn’t happening. Live(block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by setting 
up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from one 
compute node to another using the Horizon interface (I could not find 
a way to *confirm* the migration, which is a necessary step, from the 
command line). I created a test file on the instance's ephemeral 
disk, before migrating it, to verify that the data was moved to the 
destination compute node. After migration, I observed an instance 
with the same id active on the destination node but the test file was 
not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to become 
active on the destination node before the copy operation was complete.


I also noticed that the ephemeral disk on the source compute node was 
not removed after the instance was migrated, although, the instance 
directory was. Nor was the disk removed after the instance was 
destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan








Re: [openstack-dev] a common client library

2014-01-16 Thread Chmouel Boudjnah
On Thu, Jan 16, 2014 at 4:37 PM, Jesse Noller jesse.nol...@rackspace.com wrote:

 Can you detail out noauth for me; and I would say the defacto httplib in
 python today is python-requests - urllib3 is also good but I would say from
 a *consumer* standpoint requests offers more in terms of usability /
 extensibility


FYI: python-requests is using urllib3 for its backend (which in turn uses
httplib).

Chmouel.


Re: [openstack-dev] Domain ID in Policy_dict

2014-01-16 Thread Tiwari, Arvind

I think you have to define the rule as below:

domain-admin: role:domain_admin and domain_id:%(target.domain.domain_id)s

Associate this rule with the APIs which you want to scope to domain admin.
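
As a purely illustrative example (the identity:* targets below are
placeholders -- use whichever API actions you actually need to restrict),
the resulting policy.json entries might look roughly like this, written
out here as a Python dict:

import json

# Hypothetical policy.json fragment: define the rule once, then reference it
# from the API actions that should be restricted to a domain admin.
policy = {
    "domain-admin": ("role:domain_admin and "
                     "domain_id:%(target.domain.domain_id)s"),
    "identity:update_domain": "rule:domain-admin",
    "identity:list_users": "rule:domain-admin",
}

print(json.dumps(policy, indent=4))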

Try and let us know.

Arvind

-Original Message-
From: boun...@canonical.com [mailto:boun...@canonical.com] On Behalf Of Telles 
Mota Vidal Nóbrega
Sent: Thursday, January 16, 2014 6:30 AM
To: Tiwari, Arvind
Subject: Domain ID in Policy_dict

Hi, I'm working on some new features for OpenStack and this merge that
you submitted https://review.openstack.org/#/c/50488/ does most of what
I need. I updated the code here but I couldn't make it work. My idea is
to create a role called domain_admin; to check this we would need to
check if the user is admin and is the owner of the domain, and for that we
would need the domain_id to be checked in the policy.json, which by the
examples you posted works. Unfortunately I wasn't able to do so. Can you
help me out and give me some tips on how to get this working?

Thanks
-- 
This message was sent from Launchpad by
Telles Mota Vidal Nóbrega (https://launchpad.net/~tellesmvn)
using the Contact this user link on your profile page
(https://launchpad.net/~arvind-tiwari).
For more information see
https://help.launchpad.net/YourAccount/ContactingPeople



Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-16 Thread Sullivan, Jon Paul
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 On Thu, 2014-01-16 at 10:39 +, Sullivan, Jon Paul wrote:
   From: Kyle Mestery [mailto:mest...@siliconloons.com]
 
   FYI, here [1] are the meeting logs from today’s meeting.
  
   A couple of things have become apparent here:
  
   1. No one has a working Neutron 3rd party testing rig yet which is
   voting
   consistently. If I’ve missed something, please, someone correct
 me.
   2. People are still hung on issues around Jenkins/gerrit
 integration.
 
  This issue can be very easily resolved if people were to use Jenkins
 Job Builder [2] for the creation of their Jenkins testing jobs.  This
 would allow the reuse of simple macros already in existence to guarantee
 correct configuration of Jenkins jobs at 3rd party sites.  This would
 also allow simple reuse of the code used by the infra team to create the
 openstack review and gate jobs, ensuring 3rd party testers can generate
 the correct code from the gerrit change and also publish results back in
 a standard way.
 
  I can't recommend Jenkins Job Builder highly enough if you use
 Jenkins.
 
  [2] https://github.com/openstack-infra/jenkins-job-builder
 
 ++ It's a life-saver. We used it heavily in AT&T with our
 Gerrit/Jenkins/Zuul CI system.
 
 -jay

It seems to me that shared JJB macros could be the most concise and simple way
of describing 3rd party testing integration requirements.

So the follow-on questions are:
1. Can the 3rd party testing blueprint enforce, or at least link to,
   use of specific JJB macros for integration to the openstack gerrit?
  1a. Where should shared JJB code be stored?
2. Is it appropriate for 3rd party testers to share their tests as
   JJB code, if they are willing?
  2a. Would this live in the same location as (1a)?

For those unfamiliar with JJB, here is a little example of what you might do:

Example of (untested) JJB macro describing how to configure Jenkins to
trigger from gerrit:

- trigger:
    name: 3rd-party-gerrit-review
    triggers:
      - gerrit:
          triggerOnPatchsetUploadedEvent: true
          triggerOnChangeMergedEvent: false
          triggerOnRefUpdatedEvent: false
          triggerOnCommentAddedEvent: false
          overrideVotes: true
          gerritBuildSuccessfulVerifiedValue: 0
          gerritBuildFailedVerifiedValue: -1
          projects:
            - projectCompareType: 'PLAIN'
              projectPattern: '{project_pattern}'
              branchCompareType: 'ANT'
              branchPattern: '**'
          failureMessage: '3rd party test {test_name} failed.  Contact is {test_contact}.  Test log is available at {test_log_url}'

Use of macro in JJB file (again untested):
- job:
    name: my-test
    triggers:
      - 3rd-party-gerrit-review:
          project_pattern: 'https://github.com/openstack/neutron.git'
          test_name: 'my-3rd-party-test'
          test_contact: 'my-em...@example.com'
          test_log_url: 'http://mylogsarehere.com/'

 
   3. There are issues with devstack failing, but these seem to be
 Neutron
   plugin specific. I’ve encouraged people to reach out on both the
   #openstack-neutron and #openstack-qa channels with questions.
   4. There is still some confusion on what tests to run. I think this
 is
   likely to be plugin dependent.
   5. There is some confusion around what version of devstack to use.
   My assumption has always been upstream master.
  
   Another general issue which I wanted to highlight here, which has
   been brought up before, is that for companies/projects proposing
   plugins, MechanismDrivers, and/or service plugins you really need
   someone active on both the mailing list as well as the IRC channels.
   This will help if your testing rig has issues, or if people need
   help understanding why your test setup is failing with their patch.
  
   So, that’s the Neutron 3rd party testing update as we near the
   deadline next week.
  
   Thanks!
   Kyle
  
   [1]
   http://eavesdrop.openstack.org/meetings/networking_third_party_testi
   ng/2 014/networking_third_party_testing.2014-01-15-22.00.log.html
  
Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud



Re: [openstack-dev] a common client library

2014-01-16 Thread Dean Troyer
On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

 On Jan 16, 2014, at 9:26 AM, Justin Hammond justin.hamm...@rackspace.com
 wrote:

 I'm not sure if it was said, but which httplib is being used (urllib3
 maybe?). Also I noticed many people were talking about supporting auth
 properly, but are there any intentions to properly support 'noauth'
 (python-neutronclient, for instance, doesn't support it properly as of
 this writing)?

 Can you detail out noauth for me; and I would say the defacto httplib in
 python today is python-requests - urllib3 is also good but I would say from
 a *consumer* standpoint requests offers more in terms of usability /
 extensibility


requests is built on top of urllib3 so there's that...

The biggest reaon I favor using Jamie Lennox's new session layer stuff in
keystoneclient is that it better reflects the requests API instead of it
being stuffed in after the fact.  And as the one responsible for that
stuffing, it was pretty blunt and really needs to be cleaned up more than
Alessio did.
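
(For anyone who hasn't looked at the session layer yet, usage is roughly
along these lines -- a from-memory sketch rather than an authoritative
example, so check the keystoneclient docs for the exact interface and the
endpoint URL is just a placeholder:)

from keystoneclient.auth.identity import v2
from keystoneclient import session

# Build an auth plugin and a requests-style session around it; the session
# owns the token lifecycle (including re-auth) so callers don't have to.
auth = v2.Password(auth_url='http://keystone.example.com:5000/v2.0',
                   username='demo', password='secret', tenant_name='demo')
sess = session.Session(auth=auth)

# Requests made through the session get the X-Auth-Token header added for us.
resp = sess.get('http://compute.example.com:8774/v2/flavors')
print(resp.json())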

only a few libs (maybe just glance and swift?) don't use requests at this
point and I think the resistance there is the chunked transfers they both
do.

I'm really curious what 'noauth' means against APIs that have few, if any,
calls that operate without a valid token.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-16 Thread Sullivan, Jon Paul
Apologies for an almost duplicate post, I corrected the mistake in the example. 
 Ooops.

 -Original Message-
 From: Sullivan, Jon Paul
 Sent: 16 January 2014 15:38
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] [third-party-testing] Sharing
 information
 
  From: Jay Pipes [mailto:jaypi...@gmail.com] On Thu, 2014-01-16 at
  10:39 +, Sullivan, Jon Paul wrote:
From: Kyle Mestery [mailto:mest...@siliconloons.com]
   
FYI, here [1] are the meeting logs from today’s meeting.
   
A couple of things have become apparent here:
   
1. No one has a working Neutron 3rd party testing rig yet which is
voting
consistently. If I’ve missed something, please, someone
correct
  me.
2. People are still hung on issues around Jenkins/gerrit
  integration.
  
   This issue can be very easily resolved if people were to use Jenkins
  Job Builder [2] for the creation of their Jenkins testing jobs.  This
  would allow the reuse of simple macros already in existence to
  guarantee correct configuration of Jenkins jobs at 3rd party sites.
  This would also allow simple reuse of the code used by the infra team
  to create the openstack review and gate jobs, ensuring 3rd party
  testers can generate the correct code from the gerrit change and also
  publish results back in a standard way.
  
   I can't recommend Jenkins Job Builder highly enough if you use
  Jenkins.
  
   [2] https://github.com/openstack-infra/jenkins-job-builder
 
  ++ It's a life-saver. We used it heavily in AT&T with our
  Gerrit/Jenkins/Zuul CI system.
 
  -jay
 
 It seems to me that shared JJB macros could be the most concise and
 simple way of describing 3rd party testing integration requirements.
 
 So the follow-on questions are:
 1. Can the 3rd party testing blueprint enforce, or at least link to,
use of specific JJB macros for integration to the openstack gerrit?
   1a. Where should shared JJB code be stored?
 2. Is it appropriate for 3rd party testers to share their tests as
JJB code, if they are willing?
   2a. Would this live in the same location as (1a)?
 
 For those unfamiliar with JJB, here is a little example of what you
 might do:
 
 Example of (untested) JJB macro describing how to configure Jenkins to
 trigger from gerrit:
 
 - trigger:
 name: 3rd-party-gerrit-review
 triggers:
   - gerrit:
  triggerOnPatchsetUploadedEvent: true
  triggerOnChangeMergedEvent: false
  triggerOnRefUpdatedEvent: false
  triggerOnCommentAddedEvent: false
  overrideVotes: true
  gerritBuildSuccessfulVerifiedValue: 0
  gerritBuildFailedVerifiedValue: -1
  projects:
- projectCompareType: 'PLAIN'
  projectPattern: '{project_pattern}'
  branchCompareType: 'ANT'
  branchPattern: '**'
  failureMessage: '3rd party test {test_name} failed.  Contact is
 {test_contact}.  Test log is available at {test_log_url}'
 

Corrected example:
 Use of macro in JJB file (again untested):
 - job:
     name: my-test
     triggers:
       - 3rd-party-gerrit-review:
           project_pattern: 'openstack/neutron.git'
           test_name: 'my-3rd-party-test'
           test_contact: 'my-em...@example.com'
           test_log_url: 'http://mylogsarehere.com/'
 
 
3. There are issues with devstack failing, but these seem to be
  Neutron
plugin specific. I’ve encouraged people to reach out on both
 the
#openstack-neutron and #openstack-qa channels with questions.
4. There is still some confusion on what tests to run. I think
this
  is
likely to be plugin dependent.
5. There is some confusion around what version of devstack to use.
My assumption has always been upstream master.
   
Another general issue which I wanted to highlight here, which has
been brought up before, is that for companies/projects proposing
plugins, MechanismDrivers, and/or service plugins you really need
someone active on both the mailing list as well as the IRC
 channels.
This will help if your testing rig has issues, or if people need
help understanding why your test setup is failing with their
 patch.
   
So, that’s the Neutron 3rd party testing update as we near the
deadline next week.
   
Thanks!
Kyle
   
[1]
http://eavesdrop.openstack.org/meetings/networking_third_party_tes
ti
ng/2 014/networking_third_party_testing.2014-01-15-22.00.log.html
   
 Thanks,
 Jon-Paul Sullivan ☺ Cloud Services - @hpcloud
 

Re: [openstack-dev] [Nova] why don't we deal with claims when live migrating an instance?

2014-01-16 Thread Brian Elliott

On Jan 15, 2014, at 4:34 PM, Clint Byrum cl...@fewbar.com wrote:

 Hi Chris. Your thread may have gone unnoticed as it lacked the Nova tag.
 I've added it to the subject of this reply... that might attract them.  :)
 
 Excerpts from Chris Friesen's message of 2014-01-15 12:32:36 -0800:
 When we create a new instance via _build_instance() or 
 _build_and_run_instance(), in both cases we call instance_claim() to 
 reserve and test for resources.
 
 During a cold migration I see us calling prep_resize() which calls 
 resize_claim().
 
 How come we don't need to do something like this when we live migrate an 
 instance?  Do we track the hypervisor overhead somewhere in the instance?
 
 Chris
 

It is a good point and it should be done.  It is effectively a bug.

Brian



Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Jeremy Stanley
On 2014-01-16 08:44:17 -0500 (-0500), Anita Kuno wrote:
 You have kept openstack-infra running for the last 2 weeks as the
 sole plate-spinner whilst the rest of us were conferencing,
 working on the gerrit upgrade or getting our laptop stolen.

Pro tip: don't make a habit of getting your laptop stolen.

 You spun up and configured two new Jenkinses
[...]

I can't really take credit for that--Jim and Clark did most of it
one night while I was sleeping.

 All this while dealing with the every day business of keeping
 -infra operating.

Well, if it weren't for the rest of the Infra team and our extended
community building such resilient systems, it would never have even
been possible, so they share most of the karma for that.

 I am so grateful for all you do.
 
 I tip my hat to you, sir.

Your support is appreciated! Sorry I've been mostly too busy to read
and respond to E-mail of late (I probably wouldn't have spotted this
thread until the weekend had it not been pointed out to me in IRC).

 [0] My sense is once Jim is back from vacation time he will provide a
 detailed report on these changes.

I hope so, because I've forgotten most of them (I think I chronicled
some in the weekly meetings though, if anyone's seeking highlights).
Also, I know he and others have exciting plans/ideas for getting
things scaled to run even faster and more smoothly still.
-- 
Jeremy Stanley



Re: [openstack-dev] a common client library

2014-01-16 Thread Justin Hammond
I'm not sure if it was said, but which httplib is being used (urllib3
maybe?). Also I noticed many people were talking about supporting auth
properly, but are there any intentions to properly support 'noauth'
(python-neutronclient, for instance, doesn't support it properly as of
this writing)? 

On 1/15/14 10:53 PM, Alexei Kornienko alexei.kornie...@gmail.com wrote:

I did notice, however, that neutronclient is
conspicuously absent from the Work Items in the blueprint's Whiteboard.

It will surely be added later. We are already working on several things in
parallel and we will add neutronclient soon.

Do you need another person to make neutron client?


I would love to see a bit more detail on the structure of the
lib(s), the blueprint really doesn't discuss the
design/organization/intended API of the libs.  For example, I would hope
 the distinction between the various layers of a client stack don't get
lost, i.e. not mixing the low-level REST API bits with the higher-level
CLI parsers and decorators.
Does the long-term goals include a common caching layer?


Distinction between client layers won't get lost and would only be
improved. My basic idea is the following:

1) Transport layer would handle all transport related stuff - HTTP, JSON
encoding, auth, caching, etc.

2) Model layer (Resource classes, BaseManager, etc.) will handle data
representation, validation

3) API layer will handle all project specific stuff - url mapping, etc.
(This will be imported to use client in other applications)

4) Cli level will handle all stuff related to cli mapping - argparse,
argcomplete, etc.
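
To make the separation concrete, here is a very rough sketch of how those
layers could stack. Every name below is illustrative only (nothing here is
an agreed design), and the flavor listing assumes a Nova-style GET /flavors
response:

import argparse

import requests


class HTTPTransport(object):
    """Layer 1: HTTP + JSON + token handling; caching could hook in here."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint
        self.token = token

    def get(self, path):
        resp = requests.get(self.endpoint + path,
                            headers={'X-Auth-Token': self.token,
                                     'Accept': 'application/json'})
        resp.raise_for_status()
        return resp.json()


class Resource(object):
    """Layer 2: thin data representation built from API responses."""

    def __init__(self, attrs):
        self.__dict__.update(attrs)


class BaseManager(object):
    """Layer 2: shared list plumbing reused by every project."""

    resource_class = Resource
    collection_path = None          # filled in by the API layer

    def __init__(self, transport):
        self.transport = transport

    def list(self):
        body = self.transport.get(self.collection_path)
        key = self.collection_path.strip('/')
        return [self.resource_class(item) for item in body[key]]


class FlavorManager(BaseManager):
    """Layer 3: project-specific URL mapping (Nova flavors here)."""

    collection_path = '/flavors'


def main():
    """Layer 4: the CLI is just argparse glued on top of the API layer."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--endpoint', required=True)
    parser.add_argument('--token', required=True)
    args = parser.parse_args()

    flavors = FlavorManager(HTTPTransport(args.endpoint, args.token))
    for flavor in flavors.list():
        print(flavor.name)


if __name__ == '__main__':
    main()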


I believe the current effort referenced by the blueprint is focusing on
moving existing code into the incubator for reuse, to make it easier to
restructure later. Alexei, do I have that correct?

You are right. The first thing we do is try to make all clients look/work
in a similar way. After that we'll continue our work on improving the overall
structure.






2014/1/16 Noorul Islam K M noo...@noorul.com

Doug Hellmann doug.hellm...@dreamhost.com writes:

 Several people have mentioned to me that they are interested in, or
 actively working on, code related to a common client library --
something
 meant to be reused directly as a basis for creating a common library for
 all of the openstack clients to use. There's a blueprint [1] in oslo,
and I
 believe the keystone devs and unified CLI teams are probably interested
in
 ensuring that the resulting API ends up meeting all of our various
 requirements.

 If you're interested in this effort, please subscribe to the blueprint
and
 use that to coordinate efforts so we don't produce more than one common
 library. ;-)



Solum is already using it https://review.openstack.org/#/c/58067/

I would love to watch this space.

Regards,
Noorul










Re: [openstack-dev] a common client library

2014-01-16 Thread Dean Troyer
On Thu, Jan 16, 2014 at 8:45 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

 After speaking with people working on OSC and looking at the code base in
 depth; I don’t think this addresses what Chris is implying: OSC wraps the
 individual CLIs built by each project today, instead of the inverse: a
 common backend that the individual CLIs can wrap - the latter is an
 important distinction as currently, building a single binary install of OSC
 for say, Windows is difficult given the dependency tree incurred by each of
 the wrapped CLIs, difference in dependencies, structure, etc.


OSC is the top of the cake and was the most beneficial to user experience
so it went first.  I would love to consume fewer libraries and
dependencies, so much that I still have a project to do this in a language
that can easily ship a single binary client.

What I think we're talking about here is the bottom of the cake and
eliminating duplication and accumulated cruft from repeated forking and
later direction changes.

The creamy gooey middle is the API-specific bits that rightly stay exactly
where they are (for now??) and can continue to ship a stand-alone cli if
they wish.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 11:25 +0100, Jaromir Coufal wrote:
 On 2014/12/01 20:40, Jay Pipes wrote:
  On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
  So, it's not as simple as it may initially seem :)
 
  Ah, I should have been clearer in my statement - my understanding is that
  we're scrapping concepts like Rack entirely.
 
  That was my understanding as well. The existing Tuskar domain model was
  largely placeholder/proof of concept and didn't necessarily reflect
  exactly what was desired/expected.
 
  Hmm, so this is a bit disappointing, though I may be less disappointed
  if I knew that Ironic (or something else?) planned to account for
  datacenter inventory in a more robust way than is currently modeled.
 
  If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
  that an enterprise would use to deploy bare-metal hardware in a
  continuous fashion, then the modeling of racks, and the attributes of
  those racks -- location, power supply, etc -- are a critical part of the
  overall picture.
 
  As an example of why something like power supply is important... inside
  AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
  42U or 44U rack, deployments would be limited to a certain number of
  compute nodes, based on that power supply.
 
  The average power draw for a particular vendor model of compute worker
  would be used in determining the level of compute node packing that
  could occur for that rack type within a particular datacenter. This was
  a fundamental part of datacenter deployment and planning. If the tooling
  intended to do bare-metal deployment of OpenStack in a continual manner
  does not plan to account for these kinds of things, then the chances
  that tooling will be used in enterprise deployments is diminished.
 
  And, as we all know, when something isn't used, it withers. That's the
  last thing I want to happen here. I want all of this to be the
  bare-metal deployment tooling that is used *by default* in enterprise
  OpenStack deployments, because the tooling fits the expectations of
  datacenter deployers.
 
  It doesn't have to be done tomorrow :) It just needs to be on the map
  somewhere. I'm not sure if Ironic is the place to put this kind of
  modeling -- I thought Tuskar was going to be that thing. But really,
  IMO, it should be on the roadmap somewhere.
 
  All the best,
  -jay
 
 Perfect write up, Jay.
 
 I can second these needs based on talks I had previously.
 
 The goal is to primarily support enterprise deployments and they work 
 with racks, so all of that information such as location, power supply, 
 etc are important.
 
 Though this is a pretty challenging area and we need to start somewhere. 
 As a proof of concept, Tuskar tried to provide similar views, then we 
 jumped into reality. OpenStack has no strong support in the racks field at 
 the moment. As long as we want to deliver a working deployment solution 
 ASAP and enhance it over time, we started with currently available features.
 
 We are not giving up racks entirely, they are just a bit pushed back, 
 since there is no real support in OpenStack yet. But to deliver more 
 optimistic news, regarding last OpenStack summit, Ironic intends to work 
 with all the racks information (location, power supply, ...). So once 
 Ironic contains all of that information, we can happily start providing 
 such capability for deployment setups, hardware overviews, etc.
 
 Having said that, for Icehouse I pushed for Node Tags to get in. It is 
 not the best experience, but using Node Tags, we can actually support 
 various use-cases for the user (by tagging nodes manually at the moment).

Totally cool, Jarda. I appreciate your response and completely
understand the prioritization.

Best,
-jay




Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-16 Thread Thierry Carrez
James Slagle wrote:
 [...]
 And yes, I'm volunteering to do the work to support the above, and the
 release work :).

Let me know if you have any question or need help. The process and tools
used for the integrated release are described here:

https://wiki.openstack.org/wiki/Release_Team/How_To_Release

Also note that we were considering switching from using
milestone-proposed to using proposed/*, to avoid reusing branch names:

https://review.openstack.org/#/c/65103/

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] a common client library

2014-01-16 Thread Doug Hellmann
On Wed, Jan 15, 2014 at 4:35 PM, Jesse Noller jesse.nol...@rackspace.com wrote:


 On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

  Several people have mentioned to me that they are interested in, or
 actively working on, code related to a common client library -- something
 meant to be reused directly as a basis for creating a common library for
 all of the openstack clients to use. There's a blueprint [1] in oslo, and I
 believe the keystone devs and unified CLI teams are probably interested in
 ensuring that the resulting API ends up meeting all of our various
 requirements.
 
  If you're interested in this effort, please subscribe to the blueprint
 and use that to coordinate efforts so we don't produce more than one common
 library. ;-)
 
  Thanks,
  Doug
 
 
  [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2

 *raises hand*

 Me me!

 I’ve been talking to many contributors about the Developer Experience
 stuff I emailed out prior to the holidays and I was starting blueprint
 work, but this is a great pointer. I’m going to have to sync up with Alexei.

 I think solving this for openstack developers and maintainers as the
 blueprint says is a big win in terms of code reuse / maintenance and
 consistent but more so for *end-user developers* consuming openstack clouds.

 Some background - there’s some terminology mismatch but the rough idea is
 the same:

 * A centralized “SDK” (Software Development Kit) would be built condensing
 the common code and logic and operations into a single namespace.

 * This SDK would be able to be used by “downstream” CLIs - essentially the
 CLIs become a specialized front end - and in some cases, only an argparse
 or cliff front-end to the SDK methods located in the (for example)
 openstack.client.api.compute

 * The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived
 clients - all of the openstack.client.api.** classes would accept an Auth
 object to delegate management / mocking of the Auth / service catalog stuff
 to. This means developers building applications (say for example, horizon)
 don’t need to worry about token/expired authentication/etc.

 * Simplify the dependency graph  code for the existing tools to enable
 single binary installs (py2exe, py2app, etc) for end users of the command
 line tools.

 Short version: if a developer wants to consume an openstack cloud; the
 would have a single SDK with minimal dependencies and import from a single
 namespace. An example application might look like:

 from openstack.api import AuthV2
 from openstack.api import ComputeV2

 myauth = AuthV2(…., connect=True)
 compute = ComputeV2(myauth)

 compute.list_flavors()

 This greatly improves the developer experience both internal to openstack
 and externally. Currently OpenStack has 22+ (counting stackforge) potential
 libraries a developer may need to install to use a full deployment of
 OpenStack:

   * python-keystoneclient (identity)
   * python-glanceclient (image)
   * python-novaclient (compute)
   * python-troveclient (database)
   * python-neutronclient (network)
   * python-ironicclient (bare metal)
   * python-heatclient (orchestration)
   * python-cinderclient (block storage)
   * python-ceilometerclient (telemetry, metrics  billing)
   * python-swiftclient (object storage)
   * python-savannaclient (big data)
   * python-openstackclient (meta client package)
   * python-marconiclient (queueing)
   * python-tuskarclient (tripleo / management)
   * python-melangeclient (dead)
   * python-barbicanclient (secrets)
   * python-solumclient (ALM)
   * python-muranoclient (application catalog)
   * python-manilaclient (shared filesystems)
   * python-libraclient (load balancers)
   * python-climateclient (reservations)
   * python-designateclient (Moniker/DNS)

 If you exclude the above and look on PyPI:

 On PyPi (client libraries/SDKs only, excluding the above - not maintained
 by openstack):

  * hpcloud-auth-openstack 1.0
  * python-openstackclient 0.2.2
  * rackspace-auth-openstack 1.1
  * posthaste 0.2.2
  * pyrax 1.6.2
  * serverherald 0.0.1
  * warm 0.3.1
  * vaporize 0.3.2
  * swiftsc (https://github.com/mkouhei/swiftsc)
  * bookofnova 0.007
  * nova-adminclient 0.1.8
  * python-quantumclient 2.2.4.3
  * python-stackhelper 0.0.7.1.gcab1eb0
  * swift-bench 1.0
  * swiftly 1.12
  * txAWS 0.2.3
  * cfupload 0.5.1
  * python-reddwarfclient 0.1.2
  * python-automationclient 1.2.1
  * rackspace-glanceclient 0.9
  * rackspace-novaclient 1.4

 If you ignore PyPI and just want to install the base say - 7 services,
 each one of the python-** clients has its own dependency tree and may or
 may not build from one of the others. If a vendor wants to extend any of
 them, it’s basically a fork instead of a clean plugin system.

 On the CLI front - this would *greatly* simplify the work openstackclient
 needs to do - it would be able to import from the main SDK and simply
 provide the noun-verb command line 

Re: [openstack-dev] [gantt] Sync up patches

2014-01-16 Thread Vishvananda Ishaya

On Jan 16, 2014, at 6:46 AM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Wed, Jan 15, 2014 at 1:29 PM, Dugger, Donald D donald.d.dug...@intel.com 
 wrote:
 My thought was to try and get some parallel effort going, do the resync as a 
 continuing task and suffer a little ongoing pain versus a large amount of pain 
 at the end.  Given that the steps for a resync are the same no matter when we 
 do it, waiting until the end is acceptable.
 
  
 
 From a `just do it’ perspective I think we’re in violent agreement on the top 
 level tasks, as long as your step 3, integration testing, is the same as what 
 I’ve been calling working functionality, e.g. have the nova scheduler use the 
 gantt source tree.
 
  
 
 PS:  How I resync.  What I’ve done is create a list with md5sums of all the 
 files in nova that we’ve duplicated in gantt.  I then update a nova git tree 
 and compare the current md5sums for those files with my list.  I use 
 format-patch to get the patches from the nova tree and grep for any patch 
 that applies to a gantt file.  I then use `git am’ to apply those patches to 
 the gantt tree, modifying any of the patches that are needed.
 
 
 So this sync won't work once we start the nova/gantt rename, so we need a 
 better approach.
 
 Syncing the gantt tree with nova sounds like a daunting task. Perhaps it 
 would be easier if we use the current gantt tree as a test to see what is 
 involved in getting gantt working, and then redo the fork after the icehouse 
 feature freeze with the aim of getting the gantt tree working by the start of 
 juno, so we can have the freeze nova-scheduler discussion. Syncing nova and 
 gantt during feature freeze should be significantly easier then doing it now.


I would personally just vote for the nuclear approach of freezing nova 
scheduler and doing work in gantt. If close to icehouse 3 we see that gantt is 
not going to be ready in time we can selectively backport stuff to 
nova-scheduler and push gantt to juno.

Vish

  
 
  
 
 It may sound a little cumbersome but I’ve got some automated scripts that 
 make it relatively easy and modifying the patches, the hard part, was going 
 to be necessary no matter how we do it.
 
  
 
 --
 
 Don Dugger
 
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 
 Ph: 303/443-3786
 
  
 
 From: Joe Gordon [mailto:joe.gord...@gmail.com] 
 Sent: Wednesday, January 15, 2014 10:21 AM
 
 
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [gantt] Sync up patches
 
  
 
  
 
  
 
 On Wed, Jan 15, 2014 at 8:52 AM, Dugger, Donald D donald.d.dug...@intel.com 
 wrote:
 
 The current thought is that I will do the work to backport any change that 
 are made to the nova tree that overlap the gantt tree.  I don’t see this as 
 an impossible task.  Yes it will get harder as we make specific changes to 
 gantt but, given that our first goal is to make gantt a drop in replacement 
 for the nova scheduler there shouldn’t be that many gantt specific changes 
 that would make backporting difficult so I think this is a doable path.
 
  
 
 How are you tracking this today? I think its worth having a well documented 
 plan for this, as we will most likely have to keep syncing the two repos for 
 a while.
 
  
 
 If all that is needed to cherry-pick a patch from nova to gantt is a 
 nova=gantt rename these should be easy and a single +2 makes sense, but for 
 any patch that requires changes beyond that I think a full review should be 
 required.
 
  
 
  
 
 For the ordering, the unit tests and working functionality are indeed 
 effectively the same, highest priority, I don’t have an issue with getting 
 the unit tests working first.
 
  
 
 Great, so I would prefer to see gantt gating on unit tests before landing any 
 other patches.
 
  
 
 Whats the full plan for the steps to bootstrap?  It would be nice to have a 
 roadmap for this so we don't get bogged down in the weeds. Off the top of my 
 head I imagine it would be something like (I have a feeling I am missing a 
 few steps here): 
 
  
 
 1) Get unit tests working
 
 2) Trim repo
 
 3) Set up integration testing  (In parallel get gantt client working)
 
 4) Resync with nova
 
  
 
  
 
 As far as trimming is concerned I would still prefer to do that later, after 
 we have working functionality.  Since trimable files won’t have gantt 
 specific changes keeping them in sync with the nova tree is easy.  Until we 
 have working functionality we won’t really know that a file is not needed (I 
 am constantly surprised by code that doesn’t do what I expect) and deleting a 
 file after we are sure it’s not needed is easy.
 
  
 
 Fair enough, I moved trimming after get unit tests working in the list above.
 
  
 
  
 
 --
 
 Don Dugger
 
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 
 Ph: 303/443-3786
 
  
 
 From: Joe Gordon [mailto:joe.gord...@gmail.com] 
 Sent: Wednesday, January 15, 2014 9:28 AM
 
 
 To: OpenStack Development Mailing List (not 

Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller

On Jan 16, 2014, at 9:54 AM, Alexei Kornienko alexei.kornie...@gmail.com wrote:

On 01/16/2014 05:25 PM, Jesse Noller wrote:

On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:




On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller jesse.nol...@rackspace.com wrote:

On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah chmo...@enovance.com wrote:


On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:
Once a common library is in place, is there any intention to (or resistance 
against) collapsing the clients into a single project or even a single command 
(a la busybox)?


that's what openstackclient is here for 
https://github.com/openstack/python-openstackclient

After speaking with people working on OSC and looking at the code base in 
depth, I don't think this addresses what Chris is implying: OSC wraps the 
individual CLIs built by each project today, instead of the inverse: a common 
backend that the individual CLIs can wrap. The latter is an important 
distinction because currently, building a single binary install of OSC for, say, 
Windows is difficult given the dependency tree incurred by each of the wrapped 
CLIs, differences in dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client classes / functions / 
methods means that the layer that presents a consistent user interface (OSC) to 
the user is made more complex, juggling names/renames/commands/etc.

In the inverted case of what we have today (single backend); as a developer of 
user interfaces (CLIs, Applications, Web apps (horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(…)

def my_command(cli):
    compute = Compute(Auth(cli.tenant…, connect=True))
    compute.list_flavors()

This would mean that even if the individual clients needed or wanted to keep 
their specific CLIs, they would be able to use a back end that is not a "least 
common denominator" (each service can have a rich common.api.compute.py or 
api.compute/client.py and extend where needed). However, tools like horizon / 
openstackclient can choose not to leverage the "power user/operator/admin" 
components and present a simplified user interface.

I’m working on a wiki page + blueprint to brainstorm how we could accomplish 
this based off of what work is in flight today (see doug’s linked blueprint) 
and sussing out a layout / API strawman for discussion.

Some of the additions that came out of this email thread and others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and each sub-project 
delegates to the auth object to manage that.
3. Verified Mocks / Stubs / Fakes must be provided for proper unit testing
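
As a concrete illustration of point 2 above, here is a minimal sketch of an auth 
object that owns the token and transparently re-authenticates on a 401 -- the 
class name, URLs and token payload are illustrative, not an existing OpenStack API:

import json
import requests

class Auth(object):
    # Illustrative only -- not an existing OpenStack class.
    def __init__(self, auth_url, credentials):
        self.auth_url = auth_url
        self.credentials = credentials
        self._token = None

    @property
    def token(self):
        if self._token is None:
            resp = requests.post(self.auth_url + '/tokens',
                                 data=json.dumps({'auth': self.credentials}),
                                 headers={'Content-Type': 'application/json'})
            resp.raise_for_status()
            self._token = resp.json()['access']['token']['id']
        return self._token

    def request(self, method, url, **kwargs):
        # Sub-projects delegate here; a 401 invalidates the cached token
        # and the call is retried once with a fresh one.
        headers = kwargs.setdefault('headers', {})
        headers['X-Auth-Token'] = self.token
        resp = requests.request(method, url, **kwargs)
        if resp.status_code == 401:
            self._token = None
            headers['X-Auth-Token'] = self.token
            resp = requests.request(method, url, **kwargs)
        return resp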

I am happy to see this work being done, there is definitely a lot of work to be 
done on the clients.

This blueprint sounds like its still being fleshed out, so I am wondering what 
the value is of the current patches 
https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z

Those patches mainly sync cliutils and apiutils from oslo into the assorted 
clients. But if this blueprint is about the python API and not the CLI (as that 
would be python-openstackclient), why sync in apiutils?

Also does this need to go through oslo-incubator or can this start out as a 
library? Making this a library earlier on will reduce the number of patches 
needed to get 20+ repositories to use this.


Alexei and others have at least started the first stage of a rollout - the 
blueprint(s) need additional work, planning and discussion, but his work is a 
good first step (reducing the duplication of code), although I am worried that the 
libraries and APIs / namespaces will need to change if we continue these 
discussions, which potentially means re-doing work.

If we take a step back, a rollout process might be:

1: Solidify the libraries / layout / naming conventions (blueprint)
2: Solidify the APIs exposed to consumers (blueprint)
3: Pick up on the common-client-library-2 work, which is primarily a migration 
of common code into oslo today, into the structure defined by 1 & 2

So, I sort of agree: moving / collapsing code now might be premature. I do 
strongly agree it should stand on its own as a library rather than go through the 
oslo incubator, however. We should start with a single, clean namespace / 
library rather than depending on oslo directly.
Knowing the usual openstack workflow I'm afraid that #1,#2 with a waterfall 
approach may take years to complete.
And after they're approved it will become clear that this architecture is 
already outdated.
We try to use an iterative approach for client refactoring.
We started our work by removing code duplication because it already gives 
a direct positive effect on client projects.

Re: [openstack-dev] a common client library

2014-01-16 Thread Justin Hammond
My prioritization of noauth is rooted in the fact that we're finding that
the current pattern of hitting auth to validate a token is not scaling
well. Our current solution to this scale issue is:

- use noauth when possible between the services
- use normal auth for public services
- provide a method to create a 'trusted environment'

While this problem may not be prevalent in other deployments, I will add
that supporting noauth in the client 'just makes sense' when the services
themselves support it.

For instance our setup looks like:

User -> Auth to Nova -> Nova/Computes -> NoAuth to neutron in 'trusted
environment'

It saves quite a few calls to identity in this way and scales a lot better.
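
To make the idea concrete, a 'noauth' mode could be as small as an auth plugin
that never talks to identity -- the following is only a sketch with made-up
names, not neutronclient's actual interface:

class NoAuth(object):
    # Hypothetical plugin for trusted, service-to-service calls: it pins an
    # endpoint and hands back a placeholder token without ever round-tripping
    # to keystone.
    def __init__(self, endpoint, token='notused'):
        self.endpoint = endpoint
        self._token = token

    def get_token(self):
        return self._token

    def get_endpoint(self, service_type=None):
        return self.endpoint

# A client written against a common auth interface would not care which
# plugin it gets:
#   neutron = Client(NoAuth('http://127.0.0.1:9696'))
#   neutron.list_ports()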

On 1/16/14 11:06 AM, Dean Troyer dtro...@gmail.com wrote:

On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller
jesse.nol...@rackspace.com wrote:

On Jan 16, 2014, at 9:26 AM, Justin Hammond
justin.hamm...@rackspace.com wrote:


I'm not sure if it was said, but which httplib is being used (urllib3
maybe?). Also I noticed many people were talking about supporting auth
properly, but are there any intentions to properly support 'noauth'
(python-neutronclient, for instance, doesn't support it properly as of
this writing)? 




Can you detail out noauth for me; and I would say the de facto httplib in
python today is python-requests - urllib3 is also good but I would say
from a *consumer* standpoint requests offers more in terms of usability /
extensibility 






requests is built on top of urllib3 so there's that...

The biggest reason I favor using Jamie Lennox's new session layer stuff in
keystoneclient is that it better reflects the requests API instead of it
being stuffed in after the fact.  And as the one responsible for that
stuffing, it was pretty blunt and really needs to be cleaned up more than
Alessio did.
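
For anyone who hasn't looked at it, the shape of such a session layer is roughly
the following -- a sketch only, not keystoneclient's actual API:

import requests

class Session(object):
    # A thin layer that mirrors the requests interface and only adds the
    # token header and the service endpoint; everything else (timeouts,
    # verify, proxies, ...) passes straight through to requests.
    def __init__(self, auth):
        self.auth = auth
        self._http = requests.Session()

    def request(self, path, method='GET', **kwargs):
        headers = kwargs.setdefault('headers', {})
        headers['X-Auth-Token'] = self.auth.get_token()
        return self._http.request(method,
                                  self.auth.get_endpoint() + path,
                                  **kwargs)

    def get(self, path, **kwargs):
        return self.request(path, 'GET', **kwargs)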

only a few libs (maybe just glance and swift?) don't use requests at this
point and I think the resistance there is the chunked transfers they both
do.

I'm really curious what 'noauth' means against APIs that have few, if
any, calls that operate without a valid token.

dt

-- 

Dean Troyer
dtro...@gmail.com




Re: [openstack-dev] a common client library

2014-01-16 Thread Dean Troyer
On Thu, Jan 16, 2014 at 9:54 AM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:

 Knowing usual openstack workflow I'm afraid that #1,#2 with a waterfall
 approach may take years to be complete.
 And after they'll be approved it will become clear that this architecture
 is already outdated.
 We try to use iterative approach for clients refactoring.
 We started our work from removing code duplication because it already
 gives a direct positive effect on client projects.
 If you can show us better way of moving forward please help us by
 uploading patches on this topic.

 Talk is cheap. Show me the code. (c) Linus


python-keystoneclient has the Session bits commiited, and the auth bits in
flight.  So we'll all be using it one way or another.

OSC has a very slimmed-down model for replacing the REST and API layers
already present underneath the object-store commands as an experiment to
see how slim I could make them and still be usable.   It isn't necessarily
what I want to ship everywhere, but the REST layer is eerily similar to
Jamie's in keystoneclient.  That's why my bias...

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] a common client library

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 10:06 -0600, Dean Troyer wrote:
 On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller
 jesse.nol...@rackspace.com wrote:
 On Jan 16, 2014, at 9:26 AM, Justin Hammond
 justin.hamm...@rackspace.com wrote:
  I'm not sure if it was said, but which httplib using being
  used (urllib3
  maybe?). Also I noticed many people were talking about
  supporting auth
  properly, but are there any intentions to properly support
  'noauth'
  (python-neutronclient, for instance, doesn't support it
  properly as of
  this writing)? 
  
 Can you detail out noauth for me; and I would say the defacto
 httplib in python today is python-requests - urllib3 is also
 good but I would say from a *consumer* standpoint requests
 offers more in terms of usability / extensibility 
 
 
 requests is built on top of urllib3 so there's that...
 
 
 The biggest reaon I favor using Jamie Lennox's new session layer stuff
 in keystoneclient is that it better reflects the requests API instead
 of it being stuffed in after the fact.  And as the one responsible for
 that stuffing, it was pretty blunt and really needs to be cleaned up
 more than Alessio did.
 
 
 only a few libs (maybe just glance and swift?) don't use requests at
 this point and I think the resistance there is the chunked transfers
 they both do.

Right, but requests supports chunked-transfer encoding properly, so
really there's no reason those clients could not move to a
requests-based codebase.
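
For reference, chunked transfer with requests falls out of simply passing an
iterator as the request body -- a sketch (URL, token and headers are
placeholders, not a real deployment):

import requests

def read_in_chunks(path, chunk_size=64 * 1024):
    # Passing a generator as 'data' makes requests send the body with
    # Transfer-Encoding: chunked instead of a Content-Length.
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

resp = requests.put('http://glance.example.com/v2/images/IMAGE_ID/file',
                    data=read_in_chunks('/tmp/image.qcow2'),
                    headers={'X-Auth-Token': 'TOKEN',
                             'Content-Type': 'application/octet-stream'})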

Best,
-jay

 
 I'm really curious what 'noauth' means against APIs that have few, if
 any, calls that operate without a valid token.
 
 
 dt
 
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 


Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin

Raw-backed instance migration also works, so this appears to be an LVM issue.

On 01/16/2014 11:04 AM, Dan Genin wrote:
Thank you for replying, Vish. I did sync and verified that the file 
was written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb Error: Failed to 
live migrate instance to host but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there 
may be a bug in the lvm backend if this isn’t happening. Live(block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by setting 
up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from one 
compute node to another using the Horizon interface (I could not 
find a way to confirm migration, which is a necessary step, from 
the command line). I created a test file on the instance's ephemeral 
disk, before migrating it, to verify that the data was moved to the 
destination compute node. After migration, I observed an instance 
with the same id active on the destination node but the test file 
was not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to become 
active on the destination node before the copy operation was complete.


I also noticed that the ephemeral disk on the source compute node 
was not removed after the instance was migrated, although, the 
instance directory was. Nor was the disk removed after the instance 
was destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan










Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Vishvananda Ishaya
In that case, this sounds like a bug to me related to lvm volumes. You should 
check the nova-compute.log from both hosts and the nova-conductor.log. If it 
isn’t obvious what the problem is, you should open a bug and attach as much 
info as possible.

Vish

On Jan 16, 2014, at 8:04 AM, Dan Genin daniel.ge...@jhuapl.edu wrote:

 Thank you for replying, Vish. I did sync and verified that the file was 
 written to the host disk by mounting the LVM volume on the host.
 
 When I tried live migration I got a Horizon blurb Error: Failed to live 
 migrate instance to host but there were no errors in syslog.
 
 I have been able to successfully migrate a Qcow2 backed instance.
 Dan
 
 On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
 This is probably more of a usage question, but I will go ahead and answer it.
 
 If you are writing to the root drive you may need to run the sync command a 
 few times to make sure that the data has been flushed to disk before you 
 kick off the migration.
 
 The confirm resize step should be deleting the old data, but there may be a 
 bug in the lvm backend if this isn’t happening. Live(block) migration will 
 probably be a bit more intuitive.
 
 Vish
 On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu wrote:
 
 I think this qualifies as a development question but please let me know if 
 I am wrong.
 
 I have been trying to test instance migration in devstack by setting up a 
 multi-node devstack following directions at 
 http://devstack.org/guides/multinode-lab.html. I tested that indeed there 
 were multiple availability zones and that it was possible to create 
 instances in each. The I tried migrating an instance from one compute node 
 to another using the Horizon interface (I could not find a way to confirm 
 migration, which is a necessary step, from the command line). I created a 
 test file on the instance's ephemeral disk, before migrating it, to verify 
 that the data was moved to the destination compute node. After migration, I 
 observed an instance with the same id active on the destination node but 
 the test file was not present.
 
 Perhaps I misunderstand how migration is supposed to work but I expected 
 that the data on the ephemeral disk would be migrated with the instance. I 
 suppose it could take some time for the ephemeral disk to be copied but 
 then I would not expect the instance to become active on the destination 
 node before the copy operation was complete.
 
 I also noticed that the ephemeral disk on the source compute node was not 
 removed after the instance was migrated, although, the instance directory 
 was. Nor was the disk removed after the instance was destroyed. I was using 
 LVM backend for my tests. 
 
 I can provide more information about my setup but I just wanted to check 
 whether I was doing (or expecting) something obviously stupid.
 
 Thank you,
 Dan





Re: [openstack-dev] [gantt] Sync up patches

2014-01-16 Thread Russell Bryant
On 01/16/2014 11:18 AM, Vishvananda Ishaya wrote:
 
 On Jan 16, 2014, at 6:46 AM, Joe Gordon joe.gord...@gmail.com 
 mailto:joe.gord...@gmail.com wrote:
 
 
 
 
 On Wed, Jan 15, 2014 at 1:29 PM, Dugger, Donald D 
 donald.d.dug...@intel.com mailto:donald.d.dug...@intel.com
 wrote:
 
  My thought was to try and get some parallel effort going, do the 
  resync as a continuing task and suffer a little ongoing pain
  versus a large amount of pain at the end.  Given that the steps
  for a resync are the same no matter when we do it, waiting until
  the end is acceptable.
 
 
 From a `just do it’ perspective I think we’re in violent
 agreement on the top level tasks, as long as your step 3,
 integration testing, is the same as what I’ve been calling
 working functionality, e.g. have the nova scheduler use the gantt
 source tree.
 
 
 PS:  How I resync.  What I’ve done is create a list with md5sums 
 of all the files in nova that we’ve duplicated in gantt.  I then 
 update a nova git tree and compare the current md5sums for those 
 files with my list.  I use format-patch to get the patches from 
 the nova tree and grep for any patch that applies to a gantt 
 file.  I then use `git am’ to apply those patches to the gantt 
 tree, modifying any of the patches that are needed.
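
  In rough pseudo-script form, that procedure is something like this
  (an illustrative sketch only -- paths and revision ranges are made up,
  this is not the actual script):

import hashlib
import subprocess

def md5(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def files_out_of_sync(duplicated_files, baseline_md5s):
    # duplicated_files: nova files that were copied into gantt
    # baseline_md5s: {path: md5 recorded at the last sync}
    return [p for p in duplicated_files if md5(p) != baseline_md5s.get(p)]

def patches_touching(nova_dir, rev_range, files):
    # git format-patch prints one patch filename per commit; keep only
    # the patches that touch a file we duplicated.
    patch_names = subprocess.check_output(
        ['git', 'format-patch', rev_range],
        cwd=nova_dir).decode().split()
    wanted = []
    for name in patch_names:
        with open(nova_dir + '/' + name) as f:
            if any(path in f.read() for path in files):
                wanted.append(name)
    return wanted

# Each surviving patch is then applied to the gantt tree with `git am`,
# hand-editing wherever the nova->gantt differences require it.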
 
 
 So this sync won't work once we start the nova/gantt rename, so
 we need a better approach.
 
 Syncing the gantt tree with nova sounds like a daunting task.
 Perhaps it would be easier if we use the current gantt tree as a
 test to see what is involved in getting gantt working, and then
 redo the fork after the icehouse feature freeze with the aim of
 getting the gantt tree working by the start of juno, so we can
  have the freeze nova-scheduler discussion. Syncing nova and gantt
  during feature freeze should be significantly easier than doing
  it now.
 
 
 I would personally just vote for the nuclear approach of freezing
 nova scheduler and doing work in gantt. If close to icehouse 3 we
 see that gantt is not going to be ready in time we can selectively
 backport stuff to nova-scheduler and push gantt to juno.

That sounds OK to me, but I would really just like to see gantt
running before we freeze nova-scheduler.

Joe's idea might work for this too, which would be something like:

1) Go through the exercise of getting the current thing running using
the current repo (without keeping it in sync).  This includes devstack
integration.

2) Once we see it working and are ready for the nuclear freeze and
switch, re-generate the repo from nova master and apply everything
needed to make it work.

-- 
Russell Bryant



[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2014-01-16

2014-01-16 Thread Shawn Hartsock
We're closing in on Icehouse-2 and only 4 of our blueprints were in
good enough shape to stay in the queue. The numbers on the BPs indicate the
priority (per project) I've arbitrarily assigned based on subteam
feedback. This does mean priorities 1, 2, and 3 got bumped
from i-2 to i-3 so... let's stay on those.

We're fighting performance problems on Minesweeper, so I'd like to look
into detailed method-level profiling; if anyone has experience with
that, help would be appreciated. In general, it seems, Nova would
benefit from a profiling and performance project. Most of the time is
taken moving glance images around, so our #1 priority in
icehouse is to fix that.
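
For anyone wanting to help with the profiling, the stdlib already gets you
method-level numbers -- a minimal sketch (the profiled call is just a
placeholder, pick whatever driver code path is suspect):

import cProfile
import pstats

def profile_call(func, *args, **kwargs):
    # Wrap any suspect call path and dump the 30 most expensive
    # functions by cumulative time.
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    pstats.Stats(profiler).sort_stats('cumulative').print_stats(30)
    return result

# e.g. profile_call(some_driver_method, ...)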

== Blueprint priorities ==
Icehouse-2
Nova
4. https://blueprints.launchpad.net/nova/+spec/autowsdl-repair - hartsocks
5. https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
- Yaguang Tang
6. https://blueprints.launchpad.net/nova/+spec/vmware-hot-plug - gary

Glance
1. 
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
- arnaude

Here are the reviews, ordered by fitness for review. We have a fair number
deemed fit enough for core reviewers to look at, and a *fair*
number just idling, waiting for one more +2.

Ordered by fitness for review:

== needs one more +2/approval ==
* https://review.openstack.org/55038
title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:2, +1:5, -1:0, -2:0. +75 days in progress, revision: 6 is 4 days old
* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
votes: +2:1, +1:5, -1:0, -2:0. +74 days in progress, revision: 4 is 4 days old
* https://review.openstack.org/64598
title: 'VMware: unnecessary session reconnection'
votes: +2:2, +1:6, -1:0, -2:0. +15 days in progress, revision: 22 is 2 days old
* https://review.openstack.org/53648
title: 'VMware: fix image snapshot with attached volume'
votes: +2:1, +1:3, -1:0, -2:0. +84 days in progress, revision: 3 is 4 days old
* https://review.openstack.org/57519
title: 'VMware: use .get() to access 'summary.accessible''
votes: +2:1, +1:6, -1:0, -2:0. +57 days in progress, revision: 1 is 52 days old
* https://review.openstack.org/53990
title: 'VMware ESX: Boot from volume must not relocate vol'
votes: +2:1, +1:4, -1:0, -2:0. +82 days in progress, revision: 5 is 45 days old

== ready for core ==
* https://review.openstack.org/54361
title: 'VMware: fix datastore selection when token is returned'
votes: +2:0, +1:7, -1:0, -2:0. +79 days in progress, revision: 8 is 8 days old
* https://review.openstack.org/52557
title: 'VMware Driver update correct disk usage stat'
votes: +2:0, +1:5, -1:0, -2:0. +91 days in progress, revision: 7 is 9 days old
* https://review.openstack.org/59571
title: 'VMware: fix instance lookup against vSphere'
votes: +2:0, +1:6, -1:0, -2:0. +45 days in progress, revision: 12 is 14 days old
* https://review.openstack.org/49692
title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:5, -1:0, -2:0. +104 days in progress, revision: 13 is
21 days old

== needs review ==
* https://review.openstack.org/60259
title: 'VMware: fix bug causing nova-compute CPU to spike to 100%'
votes: +2:0, +1:1, -1:0, -2:0. +42 days in progress, revision: 8 is 0 days old
* https://review.openstack.org/65306
title: 'VMware: fix race for datastore directory existence'
votes: +2:0, +1:1, -1:0, -2:0. +9 days in progress, revision: 2 is 0 days old
* https://review.openstack.org/63933
title: 'VMware: create datastore utility functions'
votes: +2:0, +1:1, -1:0, -2:0. +23 days in progress, revision: 8 is 0 days old
* https://review.openstack.org/54808
title: 'VMware: fix bug for exceptions thrown in _wait_for_task'
votes: +2:0, +1:3, -1:0, -2:0. +77 days in progress, revision: 6 is 0 days old
* https://review.openstack.org/62118
title: 'VMware: Only include connected hosts in cluster stats'
votes: +2:0, +1:4, -1:0, -2:0. +34 days in progress, revision: 5 is 9 days old
* https://review.openstack.org/58994
title: 'VMware: fix the VNC port allocation'
votes: +2:0, +1:4, -1:0, -2:0. +49 days in progress, revision: 10 is 6 days old
* https://review.openstack.org/60652
title: 'VMware: fix disk extend bug when no space on datastore'
votes: +2:0, +1:2, -1:0, -2:0. +40 days in progress, revision: 2 is 40 days old
* https://review.openstack.org/62820
title: 'VMWare: bug fix for Vim exception handling'
votes: +2:0, +1:2, -1:0, -2:0. +29 days in progress, revision: 1 is 29 days old
* https://review.openstack.org/62587
title: 'VMware: fix bug when more than one datacenter exists'
votes: +2:0, +1:1, -1:0, -2:0. +30 days in progress, revision: 1 is 30 days old

Re: [openstack-dev] instance migration strangeness in devstack

2014-01-16 Thread Dan Genin

OK, thank you for the sanity check.

Dan

On 01/16/2014 11:29 AM, Vishvananda Ishaya wrote:
In that case, this sounds like a bug to me related to lvm volumes. You 
should check the nova-compute.log from both hosts and the 
nova-conductor.log. If it isn’t obvious what the problem is, you 
should open a bug and attach as much info as possible.


Vish

On Jan 16, 2014, at 8:04 AM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


Thank you for replying, Vish. I did sync and verified that the file 
was written to the host disk by mounting the LVM volume on the host.


When I tried live migration I got a Horizon blurb Error: Failed to 
live migrate instance to host but there were no errors in syslog.


I have been able to successfully migrate a Qcow2 backed instance.
Dan

On 01/16/2014 03:18 AM, Vishvananda Ishaya wrote:
This is probably more of a usage question, but I will go ahead and 
answer it.


If you are writing to the root drive you may need to run the sync 
command a few times to make sure that the data has been flushed to 
disk before you kick off the migration.


The confirm resize step should be deleting the old data, but there 
may be a bug in the lvm backend if this isn’t happening. Live(block) 
migration will probably be a bit more intuitive.


Vish
On Jan 15, 2014, at 2:40 PM, Dan Genin daniel.ge...@jhuapl.edu 
mailto:daniel.ge...@jhuapl.edu wrote:


I think this qualifies as a development question but please let me 
know if I am wrong.


I have been trying to test instance migration in devstack by 
setting up a multi-node devstack following directions at 
http://devstack.org/guides/multinode-lab.html. I tested that indeed 
there were multiple availability zones and that it was possible to 
create instances in each. Then I tried migrating an instance from 
one compute node to another using the Horizon interface (I could 
not find a way to confirm migration, which is a necessary step, 
from the command line). I created a test file on the instance's 
ephemeral disk, before migrating it, to verify that the data was 
moved to the destination compute node. After migration, I observed 
an instance with the same id active on the destination node but the 
test file was not present.


Perhaps I misunderstand how migration is supposed to work but I 
expected that the data on the ephemeral disk would be migrated with 
the instance. I suppose it could take some time for the ephemeral 
disk to be copied but then I would not expect the instance to 
become active on the destination node before the copy operation was 
complete.


I also noticed that the ephemeral disk on the source compute node 
was not removed after the instance was migrated, although, the 
instance directory was. Nor was the disk removed after the instance 
was destroyed. I was using LVM backend for my tests.


I can provide more information about my setup but I just wanted to 
check whether I was doing (or expecting) something obviously stupid.


Thank you,
Dan








Re: [openstack-dev] [gantt] Sync up patches

2014-01-16 Thread Sylvain Bauza

Le 16/01/2014 17:18, Vishvananda Ishaya a écrit :


On Jan 16, 2014, at 6:46 AM, Joe Gordon joe.gord...@gmail.com 
mailto:joe.gord...@gmail.com wrote:






On Wed, Jan 15, 2014 at 1:29 PM, Dugger, Donald D 
donald.d.dug...@intel.com mailto:donald.d.dug...@intel.com wrote:


My thought was to try and get some parallel effort going, do the
resync as a continuing task and suffer a little ongoing pain
versus a large amount of pain at the end.  Given that the steps
for a resync are the same no matter when we do it, waiting until
the end is acceptable.

From a `just do it' perspective I think we're in violent
agreement on the top level tasks, as long as your step 3,
integration testing, is the same as what I've been calling
working functionality, e.g. have the nova scheduler use the gantt
source tree.

PS: How I resync.  What I've done is create a list with md5sums
of all the files in nova that we've duplicated in gantt.  I then
update a nova git tree and compare the current md5sums for those
files with my list.  I use format-patch to get the patches from
the nova tree and grep for any patch that applies to a gantt
file.  I then use `git am' to apply those patches to the gantt
tree, modifying any of the patches that are needed.


So this sync won't work once we start the nova/gantt rename, so we 
need a better approach.


Syncing the gantt tree with nova sounds like a daunting task. Perhaps 
it would be easier if we use the current gantt tree as a test to see 
what is involved in getting gantt working, and then redo the fork 
after the icehouse feature freeze with the aim of getting the gantt 
tree working by the start of juno, so we can have the freeze 
nova-scheduler discussion. Syncing nova and gantt during feature 
freeze should be significantly easier than doing it now.



I would personally just vote for the nuclear approach of freezing nova 
scheduler and doing work in gantt. If close to icehouse 3 we see that 
gantt is not going to be ready in time we can selectively backport 
stuff to nova-scheduler and push gantt to juno.


Vish




I would also personally prefer that we freeze Gantt, then do all the 
necessary work, and only then backport all commits to Gantt (even if 
it's hard work).


The main issue is that we need to estimate the duration of all the 
'necessary work', i.e.:

 - renaming nova to gantt in the gantt repo
 - having unit tests in place
 - defining what the dependency on Nova will be: either having the nova 
egg in requirements.txt, or (a much longer task) using novaclient 
for requesting Nova objects (like flavors or instances)


-Sylvain


Re: [openstack-dev] a common client library

2014-01-16 Thread Chmouel Boudjnah
On Thu, Jan 16, 2014 at 5:23 PM, Jay Pipes jaypi...@gmail.com wrote:

 Right, but requests supports chunked-transfer encoding properly, so
 really there's no reason those clients could not move to a
 requests-based codebase.


We had that discussion for swiftclient and we are not against it, but
unfortunately there is no way to disable compression using requests, which
is a requirement for performance with swift (and for security).

see: https://review.openstack.org/#/c/33473/

and: https://github.com/kennethreitz/requests/issues/1853
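
If requests/urllib3 ever expose an ssl_context hook (which is what that issue
asks for), disabling TLS compression could be a small adapter along these
lines -- purely a sketch under that assumption, not something that works with
the versions being discussed here:

import ssl
import requests
from requests.adapters import HTTPAdapter

class NoTLSCompressionAdapter(HTTPAdapter):
    # Pin an SSLContext with compression turned off onto every HTTPS
    # connection pool (assumes urllib3 accepts an ssl_context pool keyword).
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.options |= ssl.OP_NO_COMPRESSION
        kwargs['ssl_context'] = ctx
        return super(NoTLSCompressionAdapter, self).init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount('https://', NoTLSCompressionAdapter())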

Chmouel.


Re: [openstack-dev] question of vcpu-memory-hotplug progress

2014-01-16 Thread Vishvananda Ishaya

On Jan 15, 2014, at 7:36 PM, Wangshen (Peter) pete...@huawei.com wrote:

 Hi, all
 We are interested in vcpu-memory-hotplug under this link: 
 https://blueprints.launchpad.net/nova/+spec/vcpu-memory-hotplug
 And we have already finished vcpu hotplug and done the basic tests.
 
 But this blueprint seems not to have been reviewed for almost half a year.
 We think this feature is important and useful; can we accelerate the progress 
 of this blueprint?
 If not, could someone tell me the reason?

Can you provide more information on how this is implemented? Support in kvm 
seems spotty and there are likely guests that don't support it at all. The memory 
hotplug idea doesn't even seem to exist.

Note that you need to set a milestone before it will be put in the queue to be 
reviewed.

https://wiki.openstack.org/wiki/Blueprints#Blueprint_Review_Criteria

In this particular case I think there needs to be a lot more information about 
the plan for implementation before I would think it is valuable to spend time 
on this feature.

Vish

 
 Thanks!
 





Re: [openstack-dev] a common client library

2014-01-16 Thread Joe Gordon
On Thu, Jan 16, 2014 at 10:25 AM, Jesse Noller
jesse.nol...@rackspace.comwrote:


  On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller 
 jesse.nol...@rackspace.comwrote:


   On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah chmo...@enovance.com
 wrote:


 On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:

 Once a common library is in place, is there any intention to (or
 resistance against) collapsing the clients into a single project or even a
 single command (a la busybox)?



  that's what openstackclient is here for
 https://github.com/openstack/python-openstackclient


   After speaking with people working on OSC and looking at the code base
 in depth; I don’t think this addresses what Chris is implying: OSC wraps
 the individual CLIs built by each project today, instead of the inverse: a
 common backend that the individual CLIs can wrap - the latter is an
 important distinction as currently, building a single binary install of OSC
 for say, Windows is difficult given the dependency tree incurred by each of
 the wrapped CLIs, difference in dependencies, structure, etc.

  Also, wrapping a series of inconsistent back end Client classes /
 functions / methods means that the layer that presents a consistent user
 interface (OSC) to the user is made more complex juggling
 names/renames/commands/etc.

  In the inverted case of what we have today (single backend); as a
 developer of user interfaces (CLIs, Applications, Web apps (horizon)) you
 would be able to:

  from openstack.common.api import Auth
  from openstack.common.api import Compute
 from openstack.common.util import cli_tools

  my_cli = cli_tools.build(…)

  def my_command(cli):
 compute = Compute(Auth(cli.tentant…, connect=True))
 compute.list_flavors()

  This would mean that *even if the individual clients needed or wanted
 to keep their specific CLIs, they would be able to use a not “least common
 denominator" back end (each service can have a rich common.api.compute.py or 
 api.compute/client.py and extend where needed. However tools like
 horizon / openstackclient can choose not to leverage the “power
 user/operator/admin” components and present a simplified user interface.

  I’m working on a wiki page + blueprint to brainstorm how we could
 accomplish this based off of what work is in flight today (see doug’s
 linked blueprint) and sussing out a layout / API strawman for discussion.

  Some of the additions that came out of this email threads and others:

  1. Common backend should provide / offer caching utilities
 2. Auth retries need to be handled by the auth object, and each
 sub-project delegates to the auth object to manage that.
 3. Verified Mocks / Stubs / Fakes must be provided for proper unit
 testing


  I am happy to see this work being done, there is definitely a lot of
 work to be done on the clients.

  This blueprint sounds like its still being fleshed out, so I am
 wondering what the value is of the current patches
 https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z

  Those patches mainly sync cliutils and apiutils from oslo into the
 assorted clients. But if this blueprint is about the python API and not the
 CLI (as that would be the openstack-pythonclient), why sync in apiutils?

  Also does this need to go through oslo-incubator or can this start out
 as a library? Making this a library earlier on will reduce the number of
 patches needed to get 20+ repositories to use this.


  Alexei and others have at least started the first stage of a rollout -
 the blueprint(s) needs additional work, planning and discussion, but his
 work is a good first step (reduce the duplication of code) although I am
 worried that the libraries and APIs / namespaces will need to change if we
 continue these discussions which potentially means re-doing work.

  If we take a step back, a rollout process might be:

  1: Solidify the libraries / layout / naming conventions (blueprint)
 2: Solidify the APIs exposed to consumers (blueprint)
 3: Pick up on the common-client-library-2 work which is primarily a
 migration of common code into oslo today, into the structure defined by 1 & 2

  So, I sort of agree: moving / collapsing code now might be premature. I
 do strongly agree it should stand on its own as a library rather than an
 oslo incubator however. We should start with a single, clean namespace /
 library rather than depending on oslo directly.


Sounds good to me, thanks.



  jesse





Re: [openstack-dev] Hat Tip to fungi

2014-01-16 Thread Thierry Carrez
Russell Bryant wrote:
 On 01/16/2014 08:44 AM, Anita Kuno wrote:
 I tip my hat to you, sir.
 
 +1

+1!

 This is some very well deserved recognition for hard work and dedication
 to OpenStack. :-)

Maybe we should restore the awards. I think last time we did them was...
the Bexar summit in San Antonio.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] a common client library

2014-01-16 Thread Alexei Kornienko

On 01/16/2014 06:15 PM, Jesse Noller wrote:


On Jan 16, 2014, at 9:54 AM, Alexei Kornienko 
alexei.kornie...@gmail.com mailto:alexei.kornie...@gmail.com wrote:



On 01/16/2014 05:25 PM, Jesse Noller wrote:


On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com 
mailto:joe.gord...@gmail.com wrote:






On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller 
jesse.nol...@rackspace.com mailto:jesse.nol...@rackspace.com wrote:



On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah
chmo...@enovance.com mailto:chmo...@enovance.com wrote:



On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net
mailto:c...@tenshu.net wrote:

Once a common library is in place, is there any intention
to (or resistance against) collapsing the clients into a
single project or even a single command (a la busybox)?



that's what openstackclient is here for
https://github.com/openstack/python-openstackclient


After speaking with people working on OSC and looking at the
code base in depth; I don't think this addresses what Chris is
implying: OSC wraps the individual CLIs built by each project
today, instead of the inverse: a common backend that the
individual CLIs can wrap - the latter is an important
distinction as currently, building a single binary install of
OSC for say, Windows is difficult given the dependency tree
incurred by each of the wrapped CLIs, difference in
dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client classes
/ functions / methods means that the layer that presents a
consistent user interface (OSC) to the user is made more
complex juggling names/renames/commands/etc.

In the inverted case of what we have today (single backend); as
a developer of user interfaces (CLIs, Applications, Web apps
(horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(...)

def my_command(cli):
compute = Compute(Auth(cli.tenant..., connect=True))
compute.list_flavors()

This would mean that *even if the individual clients needed or
wanted to keep their specific CLIs, they would be able to use a
not least common denominator back end (each service can have
a rich common.api.compute.py http://common.api.compute.py/ or
api.compute/client.py and extend where needed. However tools
like horizon / openstackclient can choose not to leverage the
power user/operator/admin components and present a simplified
user interface.

I'm working on a wiki page + blueprint to brainstorm how we
could accomplish this based off of what work is in flight today
(see doug's linked blueprint) and sussing out a layout / API
strawman for discussion.

Some of the additions that came out of this email threads and
others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and each
sub-project delegates to the auth object to manage that.
3. Verified Mocks / Stubs / Fakes must be provided for proper
unit testing


I am happy to see this work being done, there is definitely a lot 
of work to be done on the clients.


This blueprint sounds like its still being fleshed out, so I am 
wondering what the value is of the current patches 
https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z


Those patches mainly sync cliutils and apiutils from oslo into the 
assorted clients. But if this blueprint is about the python API and 
not the CLI (as that would be the openstack-pythonclient), why sync 
in apiutils?


Also does this need to go through oslo-incubator or can this start 
out as a library? Making this a library earlier on will reduce the 
number of patches needed to get 20+ repositories to use this.




Alexei and others have at least started the first stage of a rollout 
- the blueprint(s) needs additional work, planning and discussion, 
but his work is a good first step (reduce the duplication of code) 
although I am worried that the libraries and APIs / namespaces will 
need to change if we continue these discussions which potentially 
means re-doing work.


If we take a step back, a rollout process might be:

1: Solidify the libraries / layout / naming conventions (blueprint)
2: Solidify the APIs exposed to consumers (blueprint)
3: Pick up on the common-client-library-2 work which is primarily a 
migration of common code into oslo today, into the structure defined 
by 1  2


So, I sort of agree: moving / collapsing code now might be 
premature. I do strongly agree it should stand on its own as a 
library rather than an oslo incubator however. We should start with 
a single, clean namespace / library rather than depending on oslo 
directly.
Knowing the usual openstack workflow I'm afraid that #1,#2 with a waterfall 
approach may take years to complete.

Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Robert Kukura
On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,
 
 your proposals make sense. Having the firewall driver configuring so
 many things looks pretty strange.

Agreed. I fully support proposed fix 1, adding enable_security_group
config, at least for ml2. I'm not sure whether making this sort of
change to the openvswitch or linuxbridge plugins at this stage is needed.
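
For clarity, the proposed option would presumably be just a boolean in the
plugin/agent config -- an illustrative oslo.config sketch (only the option name
comes from this thread; the group and help text are guesses):

from oslo.config import cfg

security_group_opts = [
    cfg.BoolOpt('enable_security_group',
                default=True,
                help='Whether the plugin/agent should apply security '
                     'groups and advertise that capability to nova.'),
]

cfg.CONF.register_opts(security_group_opts, 'SECURITYGROUP')

# The L2 agent / mechanism driver would then key its behaviour off
# cfg.CONF.SECURITYGROUP.enable_security_group instead of off the
# firewall_driver setting.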


 Enabling security group should be a plugin/MD decision, not a driver decision.

I'm not so sure I support proposed fix 2, removing firewall_driver
configuration. I think with proposed fix 1, firewall_driver becomes an
agent-only configuration variable, which seems fine to me, at least for
now. The people working on ovs-firewall-driver need something like this
to choose between their new driver and the iptables driver. Each L2
agent could obviously revisit this later if needed.

 
 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.

I'm not convinced to support proposed fix 3, basing ml2's vif_security
on the value of vif_type. It seems to me that if vif_type was all that
determines how nova handles security groups, there would be no need for
either the old capabilities or new vif_security port attribute.

I think each ML2 bound MechanismDriver should be able to supply whatever
vif_security (or capabilities) value it needs. It should be free to
determine that however it wants. It could be made configurable on the
server-side as Mathieu suggests below, or could be kept configurable in
the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
the server as I have previously suggested.

As an initial step, until we really have multiple firewall drivers to
choose from, I think we can just hardwire each agent-based
MechanismDriver to return the correct vif_security value for its normal
firewall driver, as we currently do for the capabilities attribute.

Also note that I really like the extend_port_dict() MechanismDriver
methods in Nachi's current patch set. This is a much nicer way for the
bound MechanismDriver to return binding-specific attributes than what
ml2 currently does for vif_type and capabilities. I'm working on a patch
taking that part of Nachi's code, fixing a few things, and extending it
to handle the vif_type attribute as well as the current capabilities
attribute. I'm hoping to post at least a WIP version of this today.

I do support hardwiring the other plugins to return specific
vif_security values, but those values may need to depend on the value of
enable_security_group from proposal 1.

-Bob

 Once OVSfirewallDriver will be available, the firewall drivers that
 the operator wants to use should be in a MD config file/section and
 ovs MD could bind one of the firewall driver during
 port_create/update/get.
 
 Best,
 Mathieu
 
 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 Security group for OVS agent (ovs plugin or ML2) is being broken.
 so we need vif_security port binding to fix this
 (https://review.openstack.org/#/c/21946/)

 We got discussed about the architecture for ML2 on ML2 weekly meetings, and
 I wanna continue discussion in here.

 Here is my proposal for how to fix it.

 https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p

 Best
 Nachi



Re: [openstack-dev] a common client library

2014-01-16 Thread Dean Troyer
On Thu, Jan 16, 2014 at 10:23 AM, Jay Pipes jaypi...@gmail.com wrote:

 Right, but requests supports chunked-transfer encoding properly, so
 really there's no reason those clients could not move to a
 requests-based codebase.


Absolutely... it was totally me chickening out at the time, which is why they
didn't get changed.  I feel a bit braver now...  :)


dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

2014-01-16 Thread Fox, Kevin M
Yeah, I think the evil firmware issue is separate and should be solved 
separately.

Ideally, there should be a mode you can set the bare metal server into where 
firmware updates are not allowed. This is useful to more folks than just 
baremetal cloud admins. Something to ask the hardware vendors for.

Kevin


From: CARVER, PAUL [pc2...@att.com]
Sent: Thursday, January 16, 2014 5:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Proposal for dd disk i/o performance blueprint of cinder.

Clint Byrum wrote:

Is that really a path worth going down, given that tenant-A could just
drop evil firmware in any number of places, and thus all tenants afterward
are owned anyway?

I think a change of subject line is in order for this topic (assuming it hasn't 
been discussed in sufficient depth already). I propose [Ironic] Evil Firmware 
but I didn't change it on this message in case folks interested in this thread 
aren't reading Ironic threads.

Ensuring clean firmware is definitely something Ironic needs to account for. 
Unless you're intending to say that multi-tenant bare metal is a dead end that 
shouldn't be done at all.

As long as anyone is considering Ironic and bare metal in general as a viable 
project and service it is critically important that people are focused on how 
to ensure that a server released by one tenant is clean before being provided 
to another tenant.

It doesn't even have to be evil firmware. Simply providing a tenant with a 
server where the previous tenant screwed up a firmware update or messed with 
BIOS settings or whatever is a problem. If you're going to lease bare metal out 
on a short term basis you've GOT to have some sort of QC to ensure that when 
the hardware is reused for another tenant it's as good as new.

If not, it will be all too common for a tenant to receive a bare metal server 
that's been screwed up by a previous tenant through incompetence as much as 
through maliciousness.



Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Thu, Jan 16, 2014 at 8:06 AM, Dean Troyer dtro...@gmail.com wrote:

 On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller 
 jesse.nol...@rackspace.comwrote:

 On Jan 16, 2014, at 9:26 AM, Justin Hammond justin.hamm...@rackspace.com
 wrote:

 I'm not sure if it was said, but which httplib is being used (urllib3
 maybe?). Also I noticed many people were talking about supporting auth
 properly, but are there any intentions to properly support 'noauth'
 (python-neutronclient, for instance, doesn't support it properly as of
 this writing)?

 Can you detail out noauth for me; and I would say the defacto httplib in
 python today is python-requests - urllib3 is also good but I would say from
 a *consumer* standpoint requests offers more in terms of usability /
 extensibility


 requests is built on top of urllib3 so there's that...

 The biggest reason I favor using Jamie Lennox's new session layer stuff in
 keystoneclient is that it better reflects the requests API instead of it
 being stuffed in after the fact.  And as the one responsible for that
 stuffing, it was pretty blunt and really needs to be cleaned up more than
 Alessio did.

 only a few libs (maybe just glance and swift?) don't use requests at this
 point and I think the resistance there is the chunked transfers they both
 do.


There are a few more items here that are needed for glance to be able to work
with requests (which we really really want).
1) Support for 100-expect-continue is probably going to be required in
glance as well as swift
2) Support for turning off tls/ssl compression (our streams are already
compressed)

I feel like we *must* have somebody here who is able and willing to add
these features to requests, which seems like the right approach.



 I'm really curious what 'noauth' means against APIs that have few, if any,
 calls that operate without a valid token.

 dt

 --

 Dean Troyer
 dtro...@gmail.com



Re: [openstack-dev] [Neutron] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Nachi Ueno
Hi Mathieu, Bob

Thank you for your reply
OK let's do (A) - (C) for now.

(A) Remove firewall_driver from server side
 Remove Noop -- I'll write patch for this

(B) update ML2 with extend_port_dict -- Bob will push new review for this

(C) Fix vif_security patch using (1) and (2). -- I'll update the
patch after (A) and (B) merged
 # config is hardwired for each mech drivers for now

(Optional D) Rethink firewall_driver config in the agent





2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,

  your proposals make sense. Having the firewall driver configuring so
  many things looks pretty strange.

 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
  change to the openvswitch or linuxbridge plugins at this stage is needed.


 Enabling security group should be a plugin/MD decision, not a driver 
 decision.

 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose the between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.


 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.

 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.

 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggest below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.

 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.

 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.

 I do support hardwiring the other plugins to return specific
 vif_security values, but those values may need to depend on the value of
 enable_security_group from proposal 1.

 -Bob

 Once OVSfirewallDriver will be available, the firewall drivers that
 the operator wants to use should be in a MD config file/section and
 ovs MD could bind one of the firewall driver during
 port_create/update/get.

 Best,
 Mathieu

 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks

 Security group for OVS agent (ovs plugin or ML2) is being broken.
 so we need vif_security port binding to fix this
 (https://review.openstack.org/#/c/21946/)

 We got discussed about the architecture for ML2 on ML2 weekly meetings, and
 I wanna continue discussion in here.

 Here is my proposal for how to fix it.

 https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p

 Best
 Nachi



Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Wed, Jan 15, 2014 at 7:53 PM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:

 I did notice, however, that neutronclient is
 conspicuously absent from the Work Items in the blueprint's Whiteboard.
  It will surely be added later. We are already working on several things in
  parallel and we will add neutronclient soon.


 I would love to see a bit more detail on the structure of the lib(s), the
 blueprint really doesn't discuss the design/organization/intended API of
 the libs.  For example, I would hope the distinction between the various
 layers of a client stack doesn't get lost, i.e. not mixing the low-level REST
 API bits with the higher-level CLI parsers and decorators.
 Do the long-term goals include a common caching layer?

 The distinction between client layers won't get lost and would only be
 improved. My basic idea is the following:
 1) Transport layer would handle all transport-related stuff - HTTP, JSON
 encoding, auth, caching, etc.
 2) Model layer (Resource classes, BaseManager, etc.) will handle data
 representation and validation
 3) API layer will handle all project-specific stuff - URL mapping, etc.
 (This is what other applications will import to use the client)
 4) CLI layer will handle everything related to CLI mapping - argparse,
 argcomplete, etc.
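
(To make the four layers above concrete, here is a rough sketch; every
class, module, and URL in it is invented for illustration and is not a
proposed API.)

import argparse
import json

import requests


class Transport(object):
    """Layer 1: HTTP, JSON encoding, auth token injection (and caching)."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint
        self.token = token

    def request(self, method, path, body=None):
        resp = requests.request(
            method, self.endpoint + path,
            headers={'X-Auth-Token': self.token,
                     'Content-Type': 'application/json'},
            data=json.dumps(body) if body is not None else None)
        return resp.json()


class BaseManager(object):
    """Layer 2: generic resource representation and validation."""

    resource_key = None

    def __init__(self, transport):
        self.transport = transport

    def _list(self, path):
        return self.transport.request('GET', path)[self.resource_key]


class FlavorManager(BaseManager):
    """Layer 3: project-specific URL mapping (compute flavors here)."""

    resource_key = 'flavors'

    def list(self):
        return self._list('/flavors')


def main():
    """Layer 4: a thin argparse front end over the API layer."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--endpoint', required=True)
    parser.add_argument('--token', required=True)
    args = parser.parse_args()
    for flavor in FlavorManager(Transport(args.endpoint, args.token)).list():
        print(flavor.get('name'))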


I'm really excited about this. I think consolidating layers 1 and 4 will be
a huge benefit for deployers and users.

I'm hoping we can structure layers 2 and 3 a bit flexibly to allow for
existing project differences and proper ownership. For example, in Glance
we use jsonschema somewhat so our validation is a bit different. Also, I
consider the definition of resources and url mappings for images to be
something that should be owned by the Images program. I'm confident,
however, that we can figure out how to structure the libraries,
deliverables, and process to reflect that ownership.
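
(As a toy example of that kind of per-project flexibility - not Glance's
actual code - a schema-driven model layer could plug validation in like
this:)

import jsonschema

# Toy schema; the real image schema is owned by the Images program.
IMAGE_SCHEMA = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'visibility': {'enum': ['public', 'private']},
    },
    'required': ['name'],
}


def validate_image(document):
    # Raises jsonschema.ValidationError on bad input.
    jsonschema.validate(document, IMAGE_SCHEMA)
    return document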



 I believe the current effort referenced by the blueprint is focusing on
 moving existing code into the incubator for reuse, to make it easier to
 restructure later. Alexei, do I have that correct?
 You are right. The first thing we are doing is trying to make all clients
 look/work in a similar way. After that we'll continue our work on improving
 the overall structure.




 2014/1/16 Noorul Islam K M noo...@noorul.com

 Doug Hellmann doug.hellm...@dreamhost.com writes:

  Several people have mentioned to me that they are interested in, or
  actively working on, code related to a common client library --
 something
  meant to be reused directly as a basis for creating a common library for
  all of the openstack clients to use. There's a blueprint [1] in oslo,
 and I
  believe the keystone devs and unified CLI teams are probably interested
 in
  ensuring that the resulting API ends up meeting all of our various
  requirements.
 
  If you're interested in this effort, please subscribe to the blueprint
 and
  use that to coordinate efforts so we don't produce more than one common
  library. ;-)
 

 Solum is already using it https://review.openstack.org/#/c/58067/

 I would love to watch this space.

 Regards,
 Noorul



Re: [openstack-dev] a common client library

2014-01-16 Thread Mark Washenberger
On Thu, Jan 16, 2014 at 12:03 AM, Flavio Percoco fla...@redhat.com wrote:

 On 15/01/14 21:35 +, Jesse Noller wrote:


 On Jan 15, 2014, at 1:37 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

  Several people have mentioned to me that they are interested in, or
 actively working on, code related to a common client library -- something
 meant to be reused directly as a basis for creating a common library for
 all of the openstack clients to use. There's a blueprint [1] in oslo, and I
 believe the keystone devs and unified CLI teams are probably interested in
 ensuring that the resulting API ends up meeting all of our various
 requirements.

 If you're interested in this effort, please subscribe to the blueprint
 and use that to coordinate efforts so we don't produce more than one common
 library. ;-)

 Thanks,
 Doug


 [1] https://blueprints.launchpad.net/oslo/+spec/common-client-library-2


 *raises hand*

 Me me!

 I’ve been talking to many contributors about the Developer Experience
 stuff I emailed out prior to the holidays and I was starting blueprint
 work, but this is a great pointer. I’m going to have to sync up with Alexei.

 I think solving this for openstack developers and maintainers as the
 blueprint says is a big win in terms of code reuse / maintenance and
 consistent but more so for *end-user developers* consuming openstack clouds.

 Some background - there’s some terminology mismatch but the rough idea is
 the same:

 * A centralized “SDK” (Software Development Kit) would be built
 condensing the common code and logic and operations into a single namespace.

 * This SDK would be able to be used by “downstream” CLIs - essentially
 the CLIs become a specialized front end - and in some cases, only an
 argparse or cliff front-end to the SDK methods located in the (for example)
 openstack.client.api.compute

 * The SDK would handle Auth, re-auth (expired tokens, etc) for long-lived
 clients - all of the openstack.client.api.** classes would accept an Auth
 object to delegate management / mocking of the Auth / service catalog stuff
 to. This means developers building applications (say for example, horizon)
 don’t need to worry about token/expired authentication/etc.

 * Simplify the dependency graph  code for the existing tools to enable
 single binary installs (py2exe, py2app, etc) for end users of the command
 line tools.

 Short version: if a developer wants to consume an openstack cloud; the
 would have a single SDK with minimal dependencies and import from a single
 namespace. An example application might look like:

 from openstack.api import AuthV2
 from openstack.api import ComputeV2

 myauth = AuthV2(…., connect=True)
 compute = ComputeV2(myauth)

 compute.list_flavors()


 I know this is an example but, could we leave the version out of the
 class name? Having something like:

 from openstack.api.v2 import Compute

or

 from openstack.compute.v2 import Instance

 (just made that up)

 for marconi we're using the later.


Just throwing this out there because it seems relevant to client design.

As we've been looking at porting clients to using v2 of the Images API, it
seems more and more to me that including the *server* version in the main
import path is a real obstacle.

IMO any future client libs should write library interfaces based on the
peculiarities of user needs, not based on the vagaries of the server
version. So as a user of this library I would do something like:

  1 from openstack.api import images
  2 client = images.make_me_a_client(auth_url, etcetera) # all version
negotiation is happening here
  3 client.list_images()  # works more or less the same no matter who I'm
talking to

Now, there would still likely be hidden implementation code that is
different per server version and which is instantiated in line 2 above, and
maybe that's the library path stuff you are talking about.
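
(For concreteness, a toy sketch of such a factory; the module layout, the
_ImagesV1/_ImagesV2 classes, and the shape of the version document are all
assumptions made for this example, not an agreed-upon design.)

import requests


class _ImagesV1(object):
    def __init__(self, endpoint, token):
        self.endpoint = endpoint
        self.token = token

    def list_images(self):
        resp = requests.get(self.endpoint + '/v1/images',
                            headers={'X-Auth-Token': self.token})
        return resp.json()['images']


class _ImagesV2(_ImagesV1):
    def list_images(self):
        resp = requests.get(self.endpoint + '/v2/images',
                            headers={'X-Auth-Token': self.token})
        return resp.json()['images']


def make_me_a_client(endpoint, token):
    """Probe the server's version document and pick an implementation."""
    versions = requests.get(endpoint, headers={'X-Auth-Token': token}).json()
    ids = [v.get('id', '') for v in versions.get('versions', [])]
    cls = _ImagesV2 if any(i.startswith('v2') for i in ids) else _ImagesV1
    return cls(endpoint, token)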




  This greatly improves the developer experience both internal to openstack
 and externally. Currently OpenStack has 22+ (counting stackforge) potential
 libraries a developer may need to install to use a full deployment of
 OpenStack:

  * python-keystoneclient (identity)
  * python-glanceclient (image)
  * python-novaclient (compute)
  * python-troveclient (database)
  * python-neutronclient (network)
  * python-ironicclient (bare metal)
  * python-heatclient (orchestration)
  * python-cinderclient (block storage)
  * python-ceilometerclient (telemetry, metrics  billing)
  * python-swiftclient (object storage)
  * python-savannaclient (big data)
  * python-openstackclient (meta client package)
  * python-marconiclient (queueing)
  * python-tuskarclient (tripleo / management)
  * python-melangeclient (dead)
  * python-barbicanclient (secrets)
  * python-solumclient (ALM)
  * python-muranoclient (application catalog)
  * python-manilaclient (shared filesystems)
  * python-libraclient (load balancers)
  * python-climateclient (reservations)
  * python-designateclient 

Re: [openstack-dev] a common client library

2014-01-16 Thread Joe Gordon
On Thu, Jan 16, 2014 at 12:07 PM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:

  On 01/16/2014 06:15 PM, Jesse Noller wrote:


  On Jan 16, 2014, at 9:54 AM, Alexei Kornienko alexei.kornie...@gmail.com
 wrote:

  On 01/16/2014 05:25 PM, Jesse Noller wrote:


  On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller 
 jesse.nol...@rackspace.comwrote:


   On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah chmo...@enovance.com
 wrote:


 On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote:

 Once a common library is in place, is there any intention to (or
 resistance against) collapsing the clients into a single project or even a
 single command (a la busybox)?



  that's what openstackclient is here for
 https://github.com/openstack/python-openstackclient


   After speaking with people working on OSC and looking at the code base
 in depth, I don’t think this addresses what Chris is implying: OSC wraps
 the individual CLIs built by each project today, instead of the inverse - a
 common backend that the individual CLIs can wrap. The latter is an
 important distinction because currently, building a single binary install of OSC
 for, say, Windows is difficult given the dependency tree incurred by each of
 the wrapped CLIs, differences in dependencies, structure, etc.

  Also, wrapping a series of inconsistent back end Client classes /
 functions / methods means that the layer that presents a consistent user
 interface (OSC) to the user is made more complex juggling
 names/renames/commands/etc.

  In the inverted case of what we have today (single backend); as a
 developer of user interfaces (CLIs, Applications, Web apps (horizon)) you
 would be able to:

  from openstack.common.api import Auth
  from openstack.common.api import Compute
  from openstack.common.util import cli_tools

  my_cli = cli_tools.build(…)

  def my_command(cli):
      compute = Compute(Auth(cli.tenant…, connect=True))
      compute.list_flavors()

  This would mean that even if the individual clients needed or wanted
 to keep their specific CLIs, they would be able to use a back end that is not a
 “least common denominator” (each service can have a rich common.api.compute.py or
 api.compute/client.py and extend where needed). However, tools like
 horizon / openstackclient can choose not to leverage the “power
 user/operator/admin” components and present a simplified user interface.

  I’m working on a wiki page + blueprint to brainstorm how we could
 accomplish this based off of what work is in flight today (see doug’s
 linked blueprint) and sussing out a layout / API strawman for discussion.

  Some of the additions that came out of this email threads and others:

  1. Common backend should provide / offer caching utilities
 2. Auth retries need to be handled by the auth object, and each
 sub-project delegates to the auth object to manage that.
 3. Verified Mocks / Stubs / Fakes must be provided for proper unit
 testing
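
(A minimal sketch of item 2 in the list above - the auth object owning
re-authentication so service clients never deal with expired tokens
themselves. All names here are illustrative, not a proposed API, and the
identity-specific call is deliberately left abstract.)

import requests


class Auth(object):
    def __init__(self, auth_url, username, password, tenant):
        self.auth_url = auth_url
        self.creds = {'username': username, 'password': password,
                      'tenant': tenant}
        self.token = None

    def authenticate(self):
        # Keystone specifics elided; whatever backend applies would
        # exchange self.creds for a fresh token here.
        self.token = self._fetch_token()
        return self.token

    def request(self, method, url, **kwargs):
        if self.token is None:
            self.authenticate()
        headers = dict(kwargs.pop('headers', {}), **{'X-Auth-Token': self.token})
        resp = requests.request(method, url, headers=headers, **kwargs)
        if resp.status_code == 401:
            # Token expired: re-authenticate once and retry.
            headers['X-Auth-Token'] = self.authenticate()
            resp = requests.request(method, url, headers=headers, **kwargs)
        return resp

    def _fetch_token(self):
        raise NotImplementedError('plug in the identity call here')


class Compute(object):
    def __init__(self, auth, endpoint):
        self.auth = auth
        self.endpoint = endpoint

    def list_flavors(self):
        # Never handles 401s itself -- that is the Auth object's job.
        return self.auth.request('GET', self.endpoint + '/flavors').json()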


  I am happy to see this work being done, there is definitely a lot of
 work to be done on the clients.

  This blueprint sounds like it's still being fleshed out, so I am
 wondering what the value is of the current patches
 https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z

  Those patches mainly sync cliutils and apiutils from oslo into the
 assorted clients. But if this blueprint is about the python API and not the
 CLI (as that would be the openstack-pythonclient), why sync in apiutils?

  Also does this need to go through oslo-incubator or can this start out
 as a library? Making this a library earlier on will reduce the number of
 patches needed to get 20+ repositories to use this.


  Alexei and others have at least started the first stage of a rollout -
 the blueprint(s) needs additional work, planning and discussion, but his
 work is a good first step (reduce the duplication of code) although I am
 worried that the libraries and APIs / namespaces will need to change if we
 continue these discussions which potentially means re-doing work.

  If we take a step back, a rollout process might be:

  1: Solidify the libraries / layout / naming conventions (blueprint)
 2: Solidify the APIs exposed to consumers (blueprint)
 3: Pick up on the common-client-library-2 work which is primarily a
 migration of common code into oslo today, into the structure defined by 1 & 2

  So, I sort of agree: moving / collapsing code now might be premature. I
 do strongly agree it should stand on its own as a library rather than in the
 oslo incubator, however. We should start with a single, clean namespace /
 library rather than depending on oslo directly.

 Knowing the usual openstack workflow, I'm afraid that #1 and #2 with a waterfall
 approach may take years to complete.
 And after they're approved it will become clear that the architecture
 is already outdated.
 We are trying to use an iterative approach for the client refactoring.
 We started our work from removing code 

Re: [openstack-dev] sqla 0.8 ... and sqla 0.9

2014-01-16 Thread Jeremy Stanley
On 2014-01-12 07:27:11 -0500 (-0500), Sean Dague wrote:
 With the taskflow update, the only thing between upping our sqla
 requirement to  0.8.99 is pbr's requirements integration test
 getting a work around for pip's behavior change (which will
 currently not install netifaces because it's not on pypi... also
 it's largely abandoned).

This should (hopefully) now be solved as of my patch to pypi-mirror
yesterday. If it's not, please let me know.
-- 
Jeremy Stanley



Re: [openstack-dev] a common client library

2014-01-16 Thread Jesse Noller

On Jan 16, 2014, at 11:39 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:




On Thu, Jan 16, 2014 at 8:06 AM, Dean Troyer 
dtro...@gmail.com wrote:
On Thu, Jan 16, 2014 at 9:37 AM, Jesse Noller 
jesse.nol...@rackspace.com wrote:
On Jan 16, 2014, at 9:26 AM, Justin Hammond 
justin.hamm...@rackspace.com wrote:
I'm not sure if it was said, but which httplib is being used (urllib3
maybe?). Also I noticed many people were talking about supporting auth
properly, but are there any intentions to properly support 'noauth'
(python-neutronclient, for instance, doesn't support it properly as of
this writing)?
Can you detail out noauth for me? I would say the de facto httplib in python
today is python-requests - urllib3 is also good, but from a
*consumer* standpoint requests offers more in terms of usability / extensibility

requests is built on top of urllib3 so there's that...

The biggest reason I favor using Jamie Lennox's new session layer stuff in 
keystoneclient is that it better reflects the requests API instead of it being 
stuffed in after the fact.  And as the one responsible for that stuffing, it 
was pretty blunt and really needs to be cleaned up more than Alessio did.

only a few libs (maybe just glance and swift?) don't use requests at this point 
and I think the resistance there is the chunked transfers they both do.

There are a few more items here that are needed for glance to be able to work 
with requests (which we really really want).
1) Support for 100-expect-continue is probably going to be required in glance 
as well as swift
2) Support for turning off tls/ssl compression (our streams are already 
compressed)

I feel like we *must* have somebody here who is able and willing to add these 
features to requests, which seems like the right approach.

Let me talk to upstream about this; I know a lot of people involved. Patches 
from us probably needed, but I’ll ask
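
(For reference, a minimal sketch of the streaming pattern being discussed:
passing a generator as the body already gets chunked Transfer-Encoding out
of requests; Expect: 100-continue handling and per-request TLS-compression
control are the pieces that would still need upstream work. The
Glance-v2-style URL below is only illustrative.)

import requests


def read_in_chunks(fileobj, chunk_size=64 * 1024):
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk


def upload_image(endpoint, token, image_id, fileobj):
    # A generator body makes requests send Transfer-Encoding: chunked.
    url = '%s/v2/images/%s/file' % (endpoint, image_id)
    return requests.put(
        url,
        data=read_in_chunks(fileobj),
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/octet-stream'})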



I'm really curious what 'noauth' means against APIs that have few, if any, 
calls that operate without a valid token.

dt

--

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] sqla 0.8 ... and sqla 0.9

2014-01-16 Thread Joshua Harlow
Also with:

https://review.openstack.org/#/c/66051/

On 1/16/14, 10:40 AM, Jeremy Stanley fu...@yuggoth.org wrote:

On 2014-01-12 07:27:11 -0500 (-0500), Sean Dague wrote:
 With the taskflow update, the only thing between upping our sqla
 requirement to  0.8.99 is pbr's requirements integration test
 getting a work around for pip's behavior change (which will
 currently not install netifaces because it's not on pypi... also
 it's largely abandoned).

This should (hopefully) now be solved as of my patch to pypi-mirror
yesterday. If it's not, please let me know.
-- 
Jeremy Stanley



Re: [openstack-dev] a common client library

2014-01-16 Thread Alexei Kornienko

Hello Joe,

continuous refactoring and syncing across 22+ repositories sounds like 
a nightmare, one that I would like to avoid.

You are right, this is not easy.

However, I have several reasons to do it this way:
The hardest part is to bring the basic stuff in sync across all projects 
(that's what we are doing now). Later we'll work directly with the oslo lib 
and just sync changes from it.


We could introduce a standalone library to avoid the need to sync oslo 
code across all projects, but it brings additional problems:


1) We would have to maintain rational versioning and backwards 
compatibility of this library. If we start the library from scratch we'll 
have to add/change lots of stuff before we reach some period of stability.


2) Another option would be to follow a waterfall process and create a solid 
library interface before including it in all client projects. However, 
such an approach can take an unknown amount of time and can easily fail 
during the integration stage because requirements change or for some 
other reason.


Please let me know what you think.

Best Regards,
Alexei

On 01/16/2014 08:16 PM, Joe Gordon wrote:




On Thu, Jan 16, 2014 at 12:07 PM, Alexei Kornienko 
alexei.kornie...@gmail.com wrote:


On 01/16/2014 06:15 PM, Jesse Noller wrote:


On Jan 16, 2014, at 9:54 AM, Alexei Kornienko
alexei.kornie...@gmail.com
wrote:


On 01/16/2014 05:25 PM, Jesse Noller wrote:


On Jan 16, 2014, at 9:07 AM, Joe Gordon joe.gord...@gmail.com wrote:





On Thu, Jan 16, 2014 at 9:45 AM, Jesse Noller
jesse.nol...@rackspace.com wrote:


On Jan 16, 2014, at 5:53 AM, Chmouel Boudjnah
chmo...@enovance.com wrote:



On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones
c...@tenshu.net wrote:

Once a common library is in place, is there any
intention to (or resistance against) collapsing the
clients into a single project or even a single
command (a la busybox)?



that's what openstackclient is here for
https://github.com/openstack/python-openstackclient


After speaking with people working on OSC and looking at
the code base in depth; I don't think this addresses what
Chris is implying: OSC wraps the individual CLIs built by
each project today, instead of the inverse: a common
backend that the individual CLIs can wrap - the latter is
an important distinction as currently, building a single
binary install of OSC for say, Windows is difficult given
the dependency tree incurred by each of the wrapped CLIs,
difference in dependencies, structure, etc.

Also, wrapping a series of inconsistent back end Client
classes / functions / methods means that the layer that
presents a consistent user interface (OSC) to the user is
made more complex juggling names/renames/commands/etc.

In the inverted case of what we have today (single
backend); as a developer of user interfaces (CLIs,
Applications, Web apps (horizon)) you would be able to:

from openstack.common.api import Auth
from openstack.common.api import Compute
from openstack.common.util import cli_tools

my_cli = cli_tools.build(...)

def my_command(cli):
compute = Compute(Auth(cli.tenant..., connect=True))
compute.list_flavors()

This would mean that even if the individual clients
needed or wanted to keep their specific CLIs, they would
be able to use a back end that is not a least common
denominator (each service can have a rich
common.api.compute.py or api.compute/client.py
and extend where needed). However, tools like horizon /
openstackclient can choose not to leverage the power
user/operator/admin components and present a simplified
user interface.

I'm working on a wiki page + blueprint to brainstorm how
we could accomplish this based off of what work is in
flight today (see doug's linked blueprint) and sussing out
a layout / API strawman for discussion.

Some of the additions that came out of this email threads
and others:

1. Common backend should provide / offer caching utilities
2. Auth retries need to be handled by the auth object, and
each sub-project delegates to the auth object to manage that.
3. Verified Mocks / Stubs / Fakes must be provided for
proper unit testing


I am happy to see this work being done, there is definitely a
lot of work to be done on the clients.

This blueprint sounds like it's still being fleshed out, so I
am wondering what the value is of the current patches

[openstack-dev] [savanna] team meeting minutes Jan 16

2014-01-16 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-16-18.06.html
Log: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-16-18.06.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Amir Sadoughi
Hi all,

I just want to make sure I understand the plan and its consequences. I’m on 
board with the YAGNI principle of hardwiring mechanism drivers to return their 
firewall_driver types for now. 

However, after (A), (B), and (C) are completed, to allow for Open vSwitch-based 
security groups (blueprint ovs-firewall-driver), is it correct to say that we’ll 
need to implement a method such that the ML2 mechanism driver is aware of its 
agents and each agent's configured firewall_driver, i.e. additional RPC 
communication?

From yesterday’s meeting: 
http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-01-15-16.00.log.html

16:44:17 rkukura I've suggested that the L2 agent could get the vif_security 
info from its firewall_driver, and include this in its agents_db info
16:44:39 rkukura then the bound MD would return this as the vif_security for 
the port
16:45:47 rkukura existing agents_db RPC would send it from agent to server 
and store it in the agents_db table

Does the above suggestion change with the plan as-is now? From Nachi’s 
response, it seemed like maybe we should support concurrent firewall_driver 
instances in a single agent. i.e. don’t statically configure firewall_driver in 
the agent, but let the MD choose the firewall_driver for the port based on what 
firewall_drivers the agent supports. 
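
(A hypothetical sketch of the agents_db-driven flow quoted above - the
agent reports its firewall driver's vif_security in its agents_db
configurations, and the bound MD echoes that into the port dict. Every
name and dict key here is invented for illustration and is not an
existing Neutron API.)

class OvsMechanismDriver(object):
    # Fallback until the agent has reported anything.
    DEFAULT_VIF_SECURITY = {'port_filter': True}

    def extend_port_dict(self, context, port):
        agent = self.get_agent_for_port(context, port)
        configurations = (agent or {}).get('configurations', {})
        port['binding:vif_security'] = configurations.get(
            'vif_security', self.DEFAULT_VIF_SECURITY)

    def get_agent_for_port(self, context, port):
        # agents_db lookup by the port's binding:host_id, elided here.
        raise NotImplementedError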

Thanks,

Amir


On Jan 16, 2014, at 11:42 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mathieu, Bob
 
 Thank you for your reply
 OK let's do (A) - (C) for now.
 
 (A) Remove firewall_driver from server side
 Remove Noop -- I'll write patch for this
 
 (B) update ML2 with extend_port_dict -- Bob will push new review for this
 
 (C) Fix vif_security patch using (1) and (2). -- I'll update the
 patch after (A) and (B) merged
 # config is hardwired for each mech drivers for now
 
 (Optional D) Rethink firewall_driver config in the agent
 
 
 
 
 
 2014/1/16 Robert Kukura rkuk...@redhat.com:
 On 01/16/2014 04:43 AM, Mathieu Rohon wrote:
 Hi,
 
 your proposals make sense. Having the firewall driver configure so
 many things looks pretty strange.
 
 Agreed. I fully support proposed fix 1, adding enable_security_group
 config, at least for ml2. I'm not sure whether making this sort of
 change to the openvswitch or linuxbridge plugins at this stage is needed.
 
 
 Enabling security group should be a plugin/MD decision, not a driver 
 decision.
 
 I'm not so sure I support proposed fix 2, removing firewall_driver
 configuration. I think with proposed fix 1, firewall_driver becomes an
 agent-only configuration variable, which seems fine to me, at least for
 now. The people working on ovs-firewall-driver need something like this
 to choose between their new driver and the iptables driver. Each L2
 agent could obviously revisit this later if needed.
 
 
 For ML2, in a first implementation, having vif security based on
 vif_type looks good too.
 
 I'm not convinced to support proposed fix 3, basing ml2's vif_security
 on the value of vif_type. It seems to me that if vif_type was all that
 determines how nova handles security groups, there would be no need for
 either the old capabilities or new vif_security port attribute.
 
 I think each ML2 bound MechanismDriver should be able to supply whatever
 vif_security (or capabilities) value it needs. It should be free to
 determine that however it wants. It could be made configurable on the
 server-side as Mathieu suggests below, or could be kept configurable in
 the L2 agent and transmitted via agents_db RPC to the MechanismDriver in
 the server as I have previously suggested.
 
 As an initial step, until we really have multiple firewall drivers to
 choose from, I think we can just hardwire each agent-based
 MechanismDriver to return the correct vif_security value for its normal
 firewall driver, as we currently do for the capabilities attribute.
 
 Also note that I really like the extend_port_dict() MechanismDriver
 methods in Nachi's current patch set. This is a much nicer way for the
 bound MechanismDriver to return binding-specific attributes than what
 ml2 currently does for vif_type and capabilities. I'm working on a patch
 taking that part of Nachi's code, fixing a few things, and extending it
 to handle the vif_type attribute as well as the current capabilities
 attribute. I'm hoping to post at least a WIP version of this today.
 
 I do support hardwiring the other plugins to return specific
 vif_security values, but those values may need to depend on the value of
 enable_security_group from proposal 1.
 
 -Bob
 
 Once OVSfirewallDriver is available, the firewall drivers that
 the operator wants to use should be in an MD config file/section, and
 the ovs MD could bind one of the firewall drivers during
 port_create/update/get.
 
 Best,
 Mathieu
 
 On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno na...@ntti3.com wrote:
 Hi folks
 
 Security group for OVS agent (ovs plugin or ML2) is being broken.
 so we need vif_security port 
